2301.04970
Hierarchical Dynamic Masks for Visual Explanation of Neural Networks
Saliency methods, which generate visual explanatory maps representing the importance of image pixels for model classification, are a popular technique for explaining neural network decisions. Hierarchical dynamic masks (HDM), a novel method for generating explanatory maps, is proposed in this paper to enhance the granularity and comprehensiveness of saliency maps. First, we propose dynamic masks (DM), which enable multiple small-sized benchmark mask vectors to roughly learn the critical information in the image through an optimization method. The benchmark mask vectors then guide the learning of large-sized auxiliary mask vectors, so that their superimposed mask can accurately capture fine-grained pixel-importance information while reducing sensitivity to adversarial perturbations. In addition, we construct the HDM by concatenating DM modules. These DM modules find and fuse, in a learning-based way, the regions of the masked image that remain relevant to the network's classification decision. Since HDM forces each DM to perform importance analysis in a different area, the fused saliency map is more comprehensive. The proposed method outperformed previous approaches significantly in terms of recognition and localization capabilities when tested on natural and medical datasets.
Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He
2023-01-12T12:24:49Z
http://arxiv.org/abs/2301.04970v1
# Hierarchical Dynamic Masks for Visual Explanation of Neural Networks

###### Abstract

Saliency methods, which generate visual explanatory maps representing the importance of image pixels for model classification, are a popular technique for explaining neural network decisions. Hierarchical dynamic masks (HDM), a novel method for generating explanatory maps, is proposed in this paper to enhance the granularity and comprehensiveness of saliency maps. First, we propose dynamic masks (DM), which enable multiple small-sized benchmark mask vectors to roughly learn the critical information in the image through an optimization method. The benchmark mask vectors then guide the learning of large-sized auxiliary mask vectors, so that their superimposed mask can accurately capture fine-grained pixel-importance information while reducing sensitivity to adversarial perturbations. In addition, we construct the HDM by concatenating DM modules. These DM modules find and fuse, in a learning-based way, the regions of the masked image that remain relevant to the network's classification decision. Since HDM forces each DM to perform importance analysis in a different area, the fused saliency map is more comprehensive. The proposed method outperformed previous approaches significantly in terms of recognition and localization capabilities when tested on natural and medical datasets.

## 1 Introduction

Neural networks [8, 10, 11] have made remarkable achievements in areas such as image recognition [22, 3], which has prompted interest in the logic of neural network decision-making. Many methods for explaining neural networks have been proposed [14]. Among them, methods that generate saliency maps to point out the image regions a neural network attends to when making decisions have been widely studied. Saliency map generation [13] falls into three mainstream families: perturbation-based methods [15, 25], gradient-based methods [21], and CAM-based methods [12, 9]. Gradient-based methods [21, 18] propagate the gradient of the target class to the input layer via backpropagation to identify regions with a large influence on the prediction, but the explanations they provide are less reliable and sensitive to adversarial perturbations [14]. Perturbation-based methods [4, 15] monitor the change in the prediction probability of a specific class while occluding different image regions, and generate a saliency map describing each region's importance through specific perturbation rules and discrimination criteria. The CAM family starts from class activation maps (CAM) [26], which provide visual explanations by linearly combining the activation maps of convolutional neural networks; [1] removed the CAM-based methods' requirements on the network structure and improved localization accuracy.

In this paper, to alleviate the problem that previous saliency methods generate interpretation maps with rough localization that cannot express pixel importance at a fine granularity, we propose DM. It trains mask vectors of different sizes through an optimization-based approach. A single element of a small-sized mask vector corresponds to a large receptive field, so there is less adversarial effect during learning. Large-sized mask vectors have small receptive fields and can extract fine-grained information from images.
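To make the receptive-field relationship concrete, the following minimal PyTorch sketch (our own illustration, not code from the paper) upsamples mask vectors of two sizes to image resolution; the specific sizes chosen here are hypothetical:

```python
import torch
import torch.nn.functional as F

# A 6x6 benchmark mask vector: 36 learnable values for a 224x224 image.
small_mask = torch.rand(1, 1, 6, 6, requires_grad=True)

# Upsampled to image resolution, each of the 36 elements governs a patch
# of roughly 37x37 pixels (224 / 6), i.e., a large receptive field.
up = F.interpolate(small_mask, size=(224, 224), mode="bilinear",
                   align_corners=False)
print(up.shape)  # torch.Size([1, 1, 224, 224])

# A 56x56 auxiliary mask vector covers only 4x4 pixels per element
# (224 / 56): finer granularity, but more parameters to learn.
large_mask = torch.rand(1, 1, 56, 56, requires_grad=True)
```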
To combine the advantages of both, we let the small-sized mask vector guide the learning of the large-sized mask vector, reducing the mask vector's adversarial sensitivity while improving its ability to mine fine-grained information. This enables the DM's final fused mask to generate a fine-grained explanatory map. Additionally, we consider how to more completely capture the regions of interest for neural network classification. An image may contain multiple regions that contribute to the network's decision, but previous methods often find only one main decision region and ignore the others. To avoid this problem, we propose HDM, which exploits DM hierarchically. It masks the salient regions found by the DM on the image, re-inputs the masked image into the next DM module to force it to find the regions the network cares about in the remaining image, and then fuses the discovered regions according to their importance through a learning-based method. HDM not only inherits the fine search ability of DM, but also conducts a comprehensive search over the decision-making areas of the entire image, so the saliency maps it generates are both detailed and comprehensive. The key contributions of our work are as follows:

* We propose DM, a learning-assisted module using mask vectors of various sizes, which can analyze in detail the importance of image pixels to model classification.
* We propose HDM, a visual interpretation method that leverages DM hierarchically to generate fine-grained and comprehensive saliency maps.
* We conduct multiple experiments on natural and medical datasets and verify that our method achieves state-of-the-art recognition and localization capabilities.

Figure 1: Overall architecture of HDM. It is formed by stacking several DM blocks. The original image is input into a DM block to generate a mask and a masked image; the masked image is then fed into the next DM block to generate the next mask and masked image. Repeating this operation yields several masks, which are mixed into the final mixed mask through the learning-based timing-combination method. The mixed-mask image, generated by point-wise multiplication of the mixed mask and the original image, is trained against the original image for activation consistency to form the final mixed mask.

## 2 Related work

Existing saliency methods can be divided into three categories, which we briefly introduce in this section. The first is perturbation-based methods. [4] uses an optimization method to find the areas of the image that are helpful for the network's decision and covers the areas unrelated to the classification. [15] probes the importance of various image regions in the network's decision by using randomly masked versions of the input image. The second is gradient-based methods. Gradient [19] directly computes the derivative of a class score with respect to the input image using a first-order Taylor expansion, capturing local sensitivity by representing the change at each input location through gradients. [18] proposed DeepLIFT to decompose the neural network's output prediction. Further, [21] proposed Integrated Gradients, which uses small changes in the feature space between an interpolated image and the input image to measure the correlation between feature changes and model predictions.
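Since Integrated Gradients recurs as a baseline later, a brief sketch may help; this is a hedged illustration assuming a differentiable PyTorch classifier `model` and a zero baseline, not the reference implementation of [21]:

```python
import torch

def integrated_gradients(model, x, target, steps=50):
    """Approximate Integrated Gradients with a Riemann sum: average the
    gradients along the straight-line path from a zero baseline to x,
    then scale by (x - baseline)."""
    baseline = torch.zeros_like(x)
    total_grad = torch.zeros_like(x)
    for i in range(1, steps + 1):
        point = baseline + (i / steps) * (x - baseline)
        point.requires_grad_(True)
        score = model(point)[0, target]  # class score for `target`
        total_grad += torch.autograd.grad(score, point)[0]
    return (x - baseline) * total_grad / steps
```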
The third is CAM-based methods. Grad-CAM [17] uses the gradient information flowing into the last convolutional layer of a convolutional neural network to assign importance values to each neuron. To increase the localization accuracy of CAM, [1] proposed Grad-CAM++, which adds an additional weight to weigh the elements of the gradient map. FullGrad [20] explains model behavior more fully by aggregating full-gradient components. [24] proposed Score-CAM to address the problem of easily finding false-confidence samples. Ablation-CAM [16] was presented to explore the strength of each factor's contribution to the overall model. Eigen-CAM [12] makes CAM generation easier and more intuitive. XGrad-CAM [6] introduces sensitivity and conservation axioms for CAM methods. To generate a more comprehensive saliency map, Layer-CAM [9] considers the role of each spatial location in the feature map using element-level weights.

## 3 Methodology

The overall architecture of HDM and its generation method are described in Section 3.1. Section 3.2 then introduces DM, which studies at a fine granularity the importance of image regions to the network.

### 3.1 Hierarchical Masks Generation

We propose to use DM hierarchically, in temporal order, to find the image regions that neural networks focus on when making decisions. After the current DM finds a decision-making area, it masks this area in the image and passes the masked image to the next DM module, which analyzes whether other areas helpful for the neural network's decision can be found. Since the decision-making areas found by previous DM modules have been shielded, the next DM module must search the unshielded area. This hierarchical scheme, which forces each DM module to search different regions, makes it possible to comprehensively learn the image regions the network attends to when making decisions.

Figure 1 shows the overall architecture of HDM, which consists of many DM modules. DM precisely locates the areas of the input image that interest the neural network. HDM generates comprehensive and refined saliency maps by mixing the masks generated by individual DM modules through a learning-based timing combination.

Let the input image be \(X\in R^{H\times W\times C}\) and denote the DM module as \(Q(\cdot)\). After inputting image \(X_{i-1}\) to DM, DM computes the mask \(M_{i}\) (where \(M_{i}\in R^{H\times W\times 1}\)) corresponding to the most critical region for decision-making in \(X_{i-1}\). We have the following iteration, generating \(S\) decision masks, with \(X_{0}=X\) and \(i\in\{1,2,...,S\}\):

\[M_{i}=Q(X_{i-1}) \tag{1}\]

\[X_{i}=(1-N(\sum_{j=1}^{i}M_{j}))X_{0} \tag{2}\]

where \(N(x)=\frac{x-min(x)}{max(x)-min(x)}\) is a normalization function. The mixed mask \(M_{h}\) is generated by mixing the \(S\) masks \(M_{j}\) (\(j\in\{1,2,...,S\}\)) through the following timing combination:

\[M_{h}=\frac{\sum_{j=1}^{S}w_{j}M_{j}}{\sum_{j=1}^{S}w_{j}},\quad w_{j}=\sum_{k=j}^{S}v_{k}^{2} \tag{3}\]

Figure 2: The flow of a DM. Above the line, mask vectors of different sizes are trained by constraining the original image and the activation image to have consistent activations on nodes of the neural network. Below the line, the upsampled learned mask vectors are stacked to produce an overlay mask; the masked image is obtained by multiplying the overlay mask with the original image. The overlay mask is rendered as a heatmap with the JET colormap and added to the original image to obtain the heatmap image.
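The hierarchical iteration in Eqs. (1)-(3) can be sketched in a few lines; `dm_module` stands in for a trained DM \(Q(\cdot)\) and is an assumed callable, not part of the paper's released code:

```python
import torch

def normalize(x):
    # N(x) = (x - min(x)) / (max(x) - min(x)), as defined below Eq. (2).
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

def hierarchical_masks(x0, dm_module, num_stages):
    """Eqs. (1)-(2): each DM stage sees the image with all previously
    found regions masked out, forcing it to search elsewhere."""
    masks, x = [], x0
    for _ in range(num_stages):
        m = dm_module(x)                  # Eq. (1): M_i = Q(X_{i-1})
        masks.append(m)
        x = (1 - normalize(torch.stack(masks).sum(dim=0))) * x0  # Eq. (2)
    return masks

def mix_masks(masks, v):
    """Eq. (3): suffix sums of squared weights guarantee w_j >= w_{j+1}."""
    w = torch.flip(torch.cumsum(torch.flip(v ** 2, dims=[0]), dim=0), dims=[0])
    return sum(wj * m for wj, m in zip(w, masks)) / w.sum()
```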
Note that \(J(M,X)=||f_{p}(MX)-f_{p}(X)||_{2}^{2}\) is the squared consistency loss between \(MX\) and \(X\) when input to the neural network \(f\) at position \(p\); in this paper, position \(p\) is the node corresponding to the category predicted by the neural network. \(L_{R}(M)=\sum_{u=1}^{H}\sum_{v=1}^{W}\frac{|M_{uv}|}{HW}\) is the regularization value of mask \(M\).

\[L(M,X)=J(M,X)+\lambda L_{R}(M) \tag{4}\]

where \(\lambda\) is a regularization factor. We train the weights of the \(S\) masks by optimizing the loss in Equation (5), obtaining the mixed mask \(M_{h}\) according to Equation (3):

\[v_{1},v_{2},...,v_{S}\leftarrow\min_{v_{1},v_{2},...,v_{S}}L(M_{h},X_{0}) \tag{5}\]

Since HDM searches the remaining decision-relevant areas of the current image hierarchically in temporal order, \(M_{j}\) is no less important than \(M_{j+1}\). Therefore, Equation (3) enforces \(w_{j}\geq w_{j+1}\) for \(j\in\{1,2,...,S-1\}\).

### 3.2 Dynamic Masks Learning

DM learns the mask vectors by optimizing the activation consistency between the original image and the masked image, together with a regularization term on the mask vectors. The higher the mask value of a region, the more important that region is to the classification decision and the more attention the neural network pays to it (refer to the supplementary material for an explanation). We first set small-sized benchmark mask vectors to roughly learn the network's attention to each area of the image. The size of a mask vector is inversely proportional to its receptive field: each element of a small-sized mask vector corresponds to a large receptive field in the image, so the vector elements can reliably learn the importance of each region. Benchmark mask vectors \(\{d_{i}\}_{i=1}^{D}\), \(d_{i}\in R^{a_{i}\times b_{i}\times 1}\), are initialized to \(\tau\). For any \(i,j\in\{1,2,...,D\}\), if \(i\neq j\) then \(a_{i}\neq a_{j}\) or \(b_{i}\neq b_{j}\). The upsampling function \(g(\cdot,h,w)\) upsamples a vector to height \(h\) and width \(w\); for example, if \(x\in R^{h_{1}\times w_{1}\times 1}\), then \(g(x,h_{2},w_{2})\in R^{h_{2}\times w_{2}\times 1}\). As shown in Figure 2, we train \(\{d_{i}\}_{i=1}^{D}\) via the activation consistency between the masked image and the original image. Mathematically, we optimize the following loss function:

\[\begin{split} L(g(d_{i},H,W),X)&=J(g(d_{i},H,W),X)\\ &+\eta_{i}L_{R}(g(d_{i},H,W))\end{split} \tag{6}\]

where \(\eta_{i}\) is a regularization factor. To enable the mask to analyze the importance of image regions at a fine-grained level, we iteratively generate auxiliary mask vectors \(c_{j}^{k}(i)\) from the benchmark mask vectors. \(c_{j}^{k}(i)\) is defined as follows:

\[c_{j}^{k}(i)=g(d_{i},t_{j}^{k}a_{i},t_{j}^{k}b_{i}) \tag{7}\]

where \(\{t_{j}\}_{j=1}^{T}\) are positive integer hyperparameters, \(k\in\{0,1,...,K_{j}^{i}\}\), \(K_{j}^{i}=min\{\left\lfloor\frac{lnH-lna_{i}}{lnt_{j}}\right\rfloor,\left\lfloor\frac{lnW-lnb_{i}}{lnt_{j}}\right\rfloor\}\), and \(\lfloor\cdot\rfloor\) is the floor function. As shown in Figure 3, the benchmark mask vector serves as the initial guidance vector, and the auxiliary mask vector at each level guides the learning of the auxiliary mask vector at the next level, iteratively producing the auxiliary vector set \(\{c_{j}^{k}(i)|i\in\{1,...,D\},j\in\{1,...,T\},k\in\{0,...,K_{j}^{i}\}\}\).
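A minimal sketch of the activation-consistency objective in Eqs. (4) and (6); the frozen classifier `f` and the predicted-class index `target` (position \(p\)) are assumptions about the setup:

```python
import torch
import torch.nn.functional as F

def dm_loss(mask_vec, x, f, target, reg_weight):
    """L = J + eta * L_R (Eqs. (4)/(6)): squared gap between the class
    scores of the masked and original images, plus a mean-absolute
    penalty that keeps the upsampled mask sparse."""
    H, W = x.shape[-2:]
    m = F.interpolate(mask_vec, size=(H, W), mode="bilinear",
                      align_corners=False)      # g(d_i, H, W)
    j = (f(m * x)[0, target] - f(x)[0, target]) ** 2
    return j + reg_weight * m.abs().mean()      # L_R averages over H*W
```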
The loss function of \(c_{j}^{k}(i)\) is given in Equation (8):

\[\begin{split}& L(g(c_{j}^{k}(i)g(c_{j}^{k-1}(i),t_{j}^{k}a_{i},t_{j}^{k}b_{i}),H,W),X)\\ &=J(g(c_{j}^{k}(i)g(c_{j}^{k-1}(i),t_{j}^{k}a_{i},t_{j}^{k}b_{i}),H,W),X)\\ &+\eta_{i}^{(k,j)}L_{R}(g(c_{j}^{k}(i),H,W))\end{split} \tag{8}\]

As shown below the line in Figure 2, \(M_{c}\) is obtained by stacking the auxiliary masks, and the redundant information in \(M_{c}\) is removed to obtain the overlay mask \(M_{b}\). \(M_{b}\) is the final output of DM, and it describes at a fine granularity the importance of image regions in the neural network's decisions.

\[M_{c}=\sum_{i=1}^{D}\sum_{j=1}^{T}\sum_{k=0}^{K_{j}^{i}}c_{j}^{k}(i) \tag{9}\]

\[M_{b}=N((M_{c}-\gamma)\{M_{c}\geq\gamma\}) \tag{10}\]

where \(\gamma\) is a threshold, \(\{\cdot\}\) is a truth-valued function that equals 1 if its argument is true and 0 otherwise, and \(N(\cdot)\) is a normalization function.

Figure 3: An example of training an auxiliary mask vector. The original image size is \(224\times 224\). A feature block of the \(2\times 2\) and \(4\times 4\) mask vectors corresponds to a \(112\times 112\) and \(56\times 56\) receptive field of the original image, respectively. The \(c_{j}^{k+1}(i)\) to be trained is multiplied element-wise by the upsampled, already-trained \(c_{j}^{k}(i)\); the product is then upsampled and multiplied by the original image to generate the activation image. \(c_{j}^{k+1}(i)\) is trained by optimizing the activation consistency between the activation image and the original image.

## 4 Experiments

First, the datasets, baselines, and our experimental setup are presented in Sections 4.1 and 4.2 for reproducibility. Second, we present the visual interpretation results in Section 4.3, qualitatively comparing the performance of our method with previous methods. Section 4.4 visualizes the saliency map generated at each stage of HDM, showing the flow of HDM inference. Section 4.5 presents quantitative evaluations of the interpretation fidelity of the saliency maps. The quality of the saliency maps is then measured by localization ability in Section 4.6. Finally, ablation experiments are performed in Section 4.7.

### 4.1 Datasets and Baselines

Experiments were conducted on two public image recognition datasets: one natural (CUB-200-2011 [23]) and one medical (iChallenge-PM [5]). We compare against 13 state-of-the-art saliency map generation methods. Gradient-based methods: Gradient [19], DeepLIFT [18], and Integrated Gradients [21]. Perturbation-based methods: RISE [15] and Mask [4]. CAM-based methods: Grad-CAM [17], Grad-CAM++ [1], Score-CAM [24], XGrad-CAM [6], FullGrad [20], Eigen-CAM [12], Ablation-CAM [16], and Layer-CAM [9]. The saliency maps generated by these methods serve as baselines.

Figure 4: Interpretation results for the ResNet50 network using different techniques. The saliency maps generated by each method are normalized to the range [0, 1] and visualized using the JET colormap; from blue to red indicates a gradual increase in activation probability. The first three rows are examples where the prediction is correct, while the last three rows are wrong.

### 4.2 Experimental Details

In this work, ResNet50 [7] was chosen as the neural network whose saliency maps are interpreted. All methods use the same ResNet50 model with fixed parameters for visual evaluation.
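For reproducibility, the per-mask optimization of Section 3.2 can be condensed into the sketch below; the Adam optimizer and the loop structure are our assumptions, since the paper only specifies epochs and learning rates:

```python
import torch
import torch.nn.functional as F

def train_mask(mask_vec, x, f, target, steps=800, lr=1e-2, reg_weight=100.0):
    """Optimize one mask vector (a trainable tensor with
    requires_grad=True) against the consistency loss of Eq. (6)/(8)
    while the classifier f stays frozen."""
    opt = torch.optim.Adam([mask_vec], lr=lr)   # optimizer choice is ours
    H, W = x.shape[-2:]
    for _ in range(steps):
        opt.zero_grad()
        m = F.interpolate(mask_vec, size=(H, W), mode="bilinear",
                          align_corners=False)
        loss = (f(m * x)[0, target] - f(x)[0, target]) ** 2 \
               + reg_weight * m.abs().mean()
        loss.backward()
        opt.step()
    return mask_vec.detach()

def overlay_mask(aux_masks, size, gamma):
    """Eqs. (9)-(10): stack all upsampled auxiliary masks, threshold at
    gamma, and renormalize to [0, 1]."""
    mc = sum(F.interpolate(m, size=size, mode="bilinear",
                           align_corners=False) for m in aux_masks)
    mb = (mc - gamma).clamp(min=0)   # (M_c - gamma) * 1[M_c >= gamma]
    return (mb - mb.min()) / (mb.max() - mb.min() + 1e-8)
```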
The input image is resized to \(224\times 224\times 3\) and normalized using the mean vector (0.485, 0.456, 0.406) and standard-deviation vector (0.229, 0.224, 0.225). Our training set contains only images and their class labels. ResNet50 is pre-trained on ImageNet [2] before being trained on the natural and medical image datasets. Each image is trained for 800 epochs with a learning rate of \(1e-2\). We set \(\lambda=1e-4\) and \(\eta_{i}=\eta_{i}^{(k,j)}=100\). On CUB-200-2011, we use six benchmark mask vectors \(\{d_{i}\}_{i=1}^{6}\) with \(a_{i}=b_{i}=i+5\) and \(\{t_{j}\}_{j=1}^{3}=\{2,3,5\}\); \(\gamma\) is set to the pixel value at the top 25% of the saliency image's pixel values; HDM consists of 3 DM modules. On iChallenge-PM, we use four benchmark mask vectors \(\{d_{i}\}_{i=1}^{4}\) with \(a_{i}=b_{i}=i+5\) and \(\{t_{j}\}_{j=1}^{2}=\{2,3\}\); \(\gamma\) is set to the pixel value at the top 30%; HDM consists of 1 DM module. The learning-based mixed mask is trained for 400 epochs with a learning rate of \(1e-1\).

### 4.3 Qualitative Evaluations

Experiments are designed to qualitatively compare the saliency maps generated by different methods. As shown in Figure 4, saliency maps point out the regions of interest that networks rely on when making decisions, and we compare against 13 state-of-the-art saliency map generation methods.

Figure 5: Heatmaps generated by the whole HDM process. The first column shows the input images; the second, third, and fourth columns show the heatmap images generated after the first, second, and third pass through a DM module, respectively; the fifth column shows the heatmap image generated by mixing the second through fourth columns via the learning-based method, which is also the output of HDM.

Interpretation results for six different images are reported in Figure 4. In each row, the leftmost image is the original image, the second and third columns are the saliency maps generated by our proposed HDM and DM methods, respectively, and the subsequent columns show the results of the other methods. The color of a saliency map from red to blue indicates a gradual decrease in the network's attention to the region. As Figure 4 shows, the saliency maps generated by CAM-based methods usually point out only one area of the image that the network attends to when making decisions. This region almost always has a single center of highest activation that diffuses outward and weakens irregularly, so the generated saliency map has a single structure, lacks completeness, and is not easy for humans to understand. In contrast, the saliency map generated by HDM can discover multiple decision-making regions and integrate them completely; it finely covers the body regions of the bird used for decision-making, outlining the bird's silhouette. HDM can completely and accurately recover the areas the network attends to when making decisions, which is more comprehensive and easier to understand than the CAM-based methods. Saliency maps generated by gradient-based methods can roughly describe the contours of the objects the network's decision depends on, but their pixels are discrete and sharp, making it difficult to observe which areas are key to the decision.
By comparison, the saliency map generated by HDM better describes the object's outline while retaining smooth changes in activation, making it easier to observe the importance of each region. Saliency maps generated by perturbation-based methods can mark multiple decision-making regions, but they cover overly large areas and contain more noise. In contrast, HDM covers multiple decision regions while paying little attention to regions outside them (the bird's body), generating saliency maps with little noise. Overall, our method generates refined and complete saliency maps that are less noisy and easy to understand.

### 4.4 Process Visualization

As shown in Figure 5, we display all saliency maps generated by HDM during inference on images from the CUB-200-2011 dataset, where HDM consists of 3 DM modules. The first column in Figure 5 is the original input image. The second, third, and fourth columns show the saliency images of the currently most-attended regions found by the DM module at each of the three stages, after inputting the original or masked image. Figure 5 shows that each DM module accurately and finely finds the remaining regions of the current image that are helpful for the network's decision, and the regions of interest found at each stage are almost disjoint. This is in line with HDM's hierarchical search strategy. The mixed heatmap images in the last column result from mixing the heatmaps generated by the first three DM modules via a learning-based approach, which makes the final saliency maps comprehensive. As Figure 5 shows, the final mixed map generated by HDM is not only comprehensive but also fine-grained.

### 4.5 Faithfulness Evaluation via Image Recognition

This experiment evaluates the interpretation fidelity of the saliency maps generated by HDM on the object recognition task, using average drop and average increase [1] as evaluation indicators. The experiment takes the activation value at a specific percentile of all pixels in the saliency map as a threshold, mutes the pixels below the threshold, and retains the pixels above it to generate an explanation map (in the experiment, 70% and 80% of the pixels are muted). The average drop is defined as \(\frac{1}{N}\sum_{i=1}^{N}\frac{max(0,Y_{i}^{c}-O_{i}^{c})}{Y_{i}^{c}}\times 100\), and the average increase as \(\frac{1}{N}\sum_{i=1}^{N}Sign(Y_{i}^{c}<O_{i}^{c})\), where \(Y_{i}^{c}\) is the predicted score of image \(i\) for category \(c\), \(O_{i}^{c}\) is the predicted score for category \(c\) with the explanation map as input, and \(Sign(\cdot)\) is an indicator function that returns 1 if its input is true. In the experiment, 10 images are randomly selected from each category of CUB-200-2011, forming a total of 2000 images for testing. The results are reported in Table 1.
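In code, the two indicators can be sketched as follows (a hedged illustration; `scores_full` and `scores_masked` are assumed tensors holding \(Y_{i}^{c}\) and \(O_{i}^{c}\) for the \(N\) test images):

```python
import torch

def average_drop(scores_full, scores_masked):
    # (1/N) * sum_i max(0, Y_i - O_i) / Y_i * 100   (lower is better)
    drop = torch.clamp(scores_full - scores_masked, min=0) / scores_full
    return drop.mean().item() * 100

def average_increase(scores_full, scores_masked):
    # Fraction of images whose class score rises when only the
    # explanation map is kept as input (higher is better).
    return (scores_masked > scores_full).float().mean().item() * 100
```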
Table 1: Evaluated results on average drop (lower is better) and average increase (higher is better).

| Method | Avg. Drop (80%) | Avg. Drop (70%) | Avg. Increase (80%) | Avg. Increase (70%) |
| --- | --- | --- | --- | --- |
| Mask | 39.1% | 24.1% | 15.3% | 22.3% |
| RISE | 31.2% | 17.4% | 24.2% | 35.4% |
| Gradient | 97.8% | 97.4% | 0.4% | 0.8% |
| DeepLIFT | 97.7% | 97.5% | 0.3% | 0.6% |
| Integrated | 98.4% | 97.8% | 0.3% | 0.5% |
| Grad-CAM | 31.5% | 17.0% | 21.3% | 35.7% |
| Grad-CAM++ | 35.2% | 19.1% | 18.2% | 32.2% |
| Score-CAM | 31.2% | 18.2% | 19.7% | 35.3% |
| Ablation-CAM | 33.7% | 19.7% | 17.3% | 34.4% |
| XGrad-CAM | 33.9% | 18.7% | 18.2% | 36.6% |
| Eigen-CAM | 38.5% | 22.9% | 17.1% | 31.8% |
| Layer-CAM | 36.6% | 19.3% | 16.3% | 29.6% |
| FullGrad | 35.5% | 19.5% | 20.1% | 30.9% |
| DM (ours) | 32.6% | 15.7% | 26.4% | 36.9% |
| HDM (ours) | **30.1%** | **13.3%** | **26.7%** | **38.8%** |

When the threshold is set to occlude 70% and 80% of the pixels, HDM achieves average drops of 13.3% and 30.1% and average increases of 38.8% and 26.7%, respectively, outperforming the other saliency methods. This task shows that HDM can find the most distinguishable regions of the target object while eliminating, as far as possible, the regions irrelevant to distinguishing it.

To further evaluate the discriminative ability of HDM, we test on the insertion and deletion metrics proposed in [15]. The deletion and insertion scores are the areas under the probability curves traced by the predicted class probability as pixels are deleted from, or inserted into, the original image in descending order of saliency-map activation values. This experiment adopts the setting in [24]: the step size is 1%, and pixel values are set to 0 or 1 to remove or introduce pixels. We compare the methods in Table 2 on these metrics. We do not compare the other perturbation-based methods here, since they can produce adversarial effects [15] that interfere with these metrics.

Table 2: Evaluated results on deletion (lower is better) and insertion (higher is better) scores.

| Method | Deletion Score | Insertion Score |
| --- | --- | --- |
| Mask | 0.0516 | 0.7579 |
| Grad-CAM | 0.0537 | 0.7939 |
| Grad-CAM++ | 0.0578 | 0.7937 |
| Score-CAM | 0.0576 | 0.8076 |
| Ablation-CAM | 0.0643 | 0.7724 |
| XGrad-CAM | 0.0582 | 0.7983 |
| Eigen-CAM | 0.0763 | 0.7751 |
| Layer-CAM | 0.0601 | 0.7954 |
| FullGrad | 0.0537 | 0.7958 |
| DM (ours) | 0.0461 | 0.8171 |
| HDM (ours) | **0.0447** | **0.8189** |

Figure 6: Deletion and insertion curves of the above methods.

As shown in Figure 6, the deletion and insertion curves of each model are displayed. HDM achieves the best AUC and the steepest curves, showing that its saliency maps best identify the areas of the current image with the greatest influence on the network's decision. Table 2 reports the average results over the 2000 images described above; our method achieves state-of-the-art results on both metrics compared with the respective previous methods.

### 4.6 Localization Evaluation

In this section, localization ability is employed to measure the quality of the generated saliency maps.
Following [24], we calculate how much energy of the saliency map falls into the segmented foreground region of the target object. The segmented foreground image provided in the test set is binarized, with the foreground area assigned 1 and the background area assigned 0; the saliency map computed by each method is multiplied point-wise with the binarized image, and the energy falling in the target foreground is summed. The evaluation metric Proportion is defined as \(\frac{\sum L_{(i,j)\in bbox}^{c}}{\sum L_{(i,j)\in bbox}^{c}+\sum L_{(i,j)\notin bbox}^{c}}\). We test localization ability on the CUB-200-2011 and iChallenge-PM datasets, setting the foreground region of the image segmentation label as the bbox region. In the CUB-200-2011 test set, 10 images are randomly selected from each category, forming a total of 2000 images; in the iChallenge-PM test set, 200 images are randomly selected.

Table 3: Evaluation results of each method on Proportion for the CUB-200-2011 and iChallenge-PM datasets.

| Method | CUB-200-2011 | iChallenge-PM |
| --- | --- | --- |
| Mask | 33.31% | 27.31% |
| RISE | 29.47% | 14.33% |
| Gradient | 31.58% | 13.57% |
| DeepLIFT | 29.86% | 13.63% |
| Integrated | 31.50% | 13.59% |
| Grad-CAM | 50.34% | 20.84% |
| Grad-CAM++ | 51.37% | 17.36% |
| Score-CAM | 54.52% | 24.71% |
| Ablation-CAM | 48.33% | 18.14% |
| XGrad-CAM | 48.92% | 18.91% |
| Eigen-CAM | 57.41% | 21.64% |
| Layer-CAM | 56.24% | 20.30% |
| FullGrad | 47.61% | 17.94% |
| DM (ours) | 53.81% | 32.81% |
| HDM (ours) | **57.97%** | **32.81%** |

The saliency maps generated by HDM have 57.97% and 32.81% of their energy falling in the ground-truth foreground region of the target on CUB-200-2011 and iChallenge-PM, respectively. This verifies that HDM effectively reduces noisy regions of the saliency map that are irrelevant to the decision. Our method achieves state-of-the-art localization ability on both natural and medical images.

Figure 7: Saliency maps generated by HDM on bird and fundus images. The first and fourth rows are the original images; the second and fifth rows overlay the saliency map on the original image; the third and sixth rows are the ground truth.

As shown in Figure 7, we visualize the saliency maps generated by HDM on the bird and fundus datasets. In the bird images, the decision-making region found by HDM's saliency map wraps most of the bird's body and carefully grades the importance of each position on the body. This verifies that HDM can comprehensively locate multiple decision regions in natural images and divide the importance of each region at a fine granularity. In the fundus images, HDM also accurately and completely locates the pathological areas. These experiments show that HDM achieves good localization performance on both natural and medical images.

### 4.7 Ablation Study

In this section, we compare the saliency maps generated by the DM module alone with those of HDM, which stacks multiple DM modules hierarchically in temporal order.
Tables 1 and 2 show that the saliency maps generated by DM already outperform all previous methods on the recognition-ability indicators; Table 3 shows that DM's localization ability on natural images is weaker than some saliency methods, while on medical images it is the best. The tables also show that HDM's hierarchical stacking of DM modules further improves the saliency map on all of these indicators. This is because DM focuses on finding one area the neural network attends to when making decisions, ignoring the fact that multiple areas may influence the classification. HDM makes the saliency map more comprehensive by applying DM repeatedly to search for decision regions. Therefore, both the DM module and the hierarchical stacking of DM modules are useful and necessary.

## 5 Conclusion

In this paper, we propose HDM to generate refined and comprehensive visual explanatory maps. The DM module generates finely localized saliency maps using masks of various sizes that cooperate with each other. HDM applies the DM module hierarchically to masked images to find the multiple areas relevant to the network's classification decision, and combines them into a comprehensive visual interpretation map through a learning-based fusion method. Our method achieves state-of-the-art recognition and localization performance on natural and medical images, and quantitative evaluation verifies that it generates more fine-grained and comprehensive saliency maps.
2307.11981
Collaborative Graph Neural Networks for Attributed Network Embedding
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding. However, existing efforts mainly focus on exploiting network structures, while the exploitation of node attributes is rather limited: they only serve as node features at the initial layer. This simple strategy impedes the potential of node attributes in augmenting node connections, leading to a limited receptive field for inactive nodes with few or even no neighbors. Furthermore, the training objectives (i.e., reconstructing network structures) of most GNNs also do not include node attributes, although studies have shown that reconstructing node attributes is beneficial. Thus, it is encouraging to deeply involve node attributes in the key components of GNNs, including graph convolution operations and training objectives. However, this is a nontrivial task, since an appropriate way of integration is required to maintain the merits of GNNs. To bridge the gap, in this paper we propose COllaborative graph Neural Networks (CONN), a tailored GNN architecture for attributed network embedding. It improves model capacity by 1) selectively diffusing messages from neighboring nodes and involved attribute categories, and 2) jointly reconstructing node-to-node and node-to-attribute-category interactions via cross-correlation. Experiments on real-world networks demonstrate that CONN outperforms state-of-the-art embedding algorithms by a large margin.
Qiaoyu Tan, Xin Zhang, Xiao Huang, Hao Chen, Jundong Li, Xia Hu
2023-07-22T04:52:27Z
http://arxiv.org/abs/2307.11981v1
# Collaborative Graph Neural Networks for Attributed Network Embedding

###### Abstract

Graph neural networks (GNNs) have shown prominent performance on attributed network embedding. However, existing efforts mainly focus on exploiting network structures, while the exploitation of node attributes is rather limited: they only serve as node features at the initial layer. This simple strategy impedes the potential of node attributes in augmenting node connections, leading to a limited receptive field for inactive nodes with few or even no neighbors. Furthermore, the training objectives (i.e., reconstructing network structures) of most GNNs also do not include node attributes, although studies have shown that reconstructing node attributes is beneficial. Thus, it is encouraging to deeply involve node attributes in the key components of GNNs, including graph convolution operations and training objectives. However, this is a nontrivial task, since an appropriate way of integration is required to maintain the merits of GNNs. To bridge the gap, in this paper we propose COllaborative graph Neural Networks (CONN), a tailored GNN architecture for attributed network embedding. It improves model capacity by 1) selectively diffusing messages from neighboring nodes and involved attribute categories, and 2) jointly reconstructing node-to-node and node-to-attribute-category interactions via cross-correlation. Experiments on real-world networks demonstrate that CONN outperforms state-of-the-art embedding algorithms by a large margin.

_Index Terms_: Attributed Network Embedding, Graph Neural Networks, Collaborative Aggregation, Cross-Correlation

## 1 Introduction

Attributed networks [1, 2, 3, 4] are ubiquitous in a myriad of real-world information systems, such as academic networks and social media systems. Unlike plain networks, in which only node-to-node interactions are available, each node in an attributed network is associated with a rich set of attributes describing its distinctive characteristics. For example, in social networks, users connect with others as friends, share opinions, and post comments as attributes. In academic citation networks, articles are connected via citation links, and each article carries substantial text information, such as an abstract describing its topic. Several studies [5, 6] in social science have revealed that node attributes can reflect and affect community structures [7] in practice. Thus, it is encouraging and important to study attributed networks.

To this end, attributed network embedding [1, 8, 9], which leverages both network proximity and node attribute affinity to learn low-dimensional node representations, has attracted great attention in recent years, and many efforts have been devoted to it from both academia and industry [10, 11, 12, 13, 14, 15, 16]. Among them, the embedding paradigm based on graph neural networks (GNNs) [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] has achieved remarkable success over a variety of downstream graph analytical tasks, including node classification [31, 32, 33], graph classification [34, 35], link prediction [36, 37, 38, 39], node clustering [40, 41], and anomaly detection [42, 43, 44, 45]. The design recipe for GNN-based methods includes two major components: 1) a GNN encoder that takes node attributes and the node-to-node interaction network as input and outputs low-dimensional node representations; and 2) a training objective derived to reconstruct the input data (e.g., the network structure), so as to train the model unsupervised.
Thanks to the critical message-propagation mechanism in GNNs, the GNN encoder is naturally applicable to attributed networks. Therefore, existing GNN-based embedding efforts mainly focus on advancing model performance with more expressive message-passing schemas, such as adaptively aggregating messages from neighbors via an attention layer [46, 47, 48].

Despite their popularity and recent advances in refining GNN architectures to effectively model topological structures [49, 50, 51, 52, 53], little attention has been paid to node attributes. Prior GNN studies update the representation of each node by recursively aggregating the representations of itself and its neighbors [54]. In this learning process, node attributes are merely employed as the node representations at the initial layer [55]. They are blocked from message propagation if the network structure is incomplete or missing, which is quite common in real-world applications where graphs exhibit long-tail node degree distributions [56, 57]. Besides, node attributes are seldom used even when designing the training objectives of GNNs. For instance, the common practice for training a GNN-based attributed network embedding algorithm is to reconstruct the observed node interactions [15], by either employing a negative-sampling-based objective or directly recovering the whole input network structure [14]. In summary, node attributes are not well exploited in existing works.

Recently, the value of node attributes has been rethought in random-walk based embedding approaches [58, 59]. The core idea is to redesign the crucial component, the random walk, in an attribute-aware fashion. Specifically, ANRL [60] refines the conditional probability that estimates the propensity score between an anchor node and its context by utilizing node attributes. FeatWalk [59] conducts an attribute-aware joint random walk to increase the diversity of the generated walks. According to their empirical experiments, both show significant performance gains compared with their vanilla counterparts. Nevertheless, they are tailored for random-walk based approaches and cannot be applied to GNN models without nontrivial effort. Motivated by this, in this paper we explore whether node attributes can be effectively employed to advance the essential building blocks of GNN architectures (i.e., the message aggregation mechanism and the training objective).

However, integrating node attributes into GNN architectures is a nontrivial and challenging task, mainly for two reasons: (i) GNNs already show promising results by using node attributes as the initial node representations; it is difficult to further incorporate node attributes into the key components of GNNs while maintaining their existing benefits and prominent performance [54, 61]. (ii) Real-world node attributes, such as comments of users, abstracts of papers, and descriptions of products, are distinct from the network topological structure and not in line with the graph convolutional operation in GNNs. Specifically, the values of node attributes are often multi-categorical or continuous variables, while those in the network structure are binary; they are not compatible with each other, so a tailored operation is required to learn from them jointly. For example, employing autoencoders as the training objective of GNNs achieves sub-optimal performance [62, 14].
To address the aforementioned challenges, we propose a novel unsupervised representation learning model, dubbed **CO**llaborative Graph **N**eural **N**etwork (CONN). It develops a tailored GNN architecture for attributed networks, such that node attributes can be explicitly fused into both the message aggregation process and the training objective. Specifically, we investigate two important research questions. (i) How can node attributes explicitly guide the message propagation of a vanilla GNN, yielding a collaborative aggregation mechanism? (ii) In the training objective, how can the heterogeneous interactions be modeled effectively, so as to jointly reconstruct the network structure and node attributes? We summarize our major contributions as follows.

* We propose a tailored GNN architecture, CONN, to leverage node attributes in the two aforementioned key components of vanilla GNNs.
* By constructing a bipartite graph from the node attributes, we develop a collaborative aggregation mechanism for node embedding. It not only helps to enrich or rebuild node connections through attribute categories, but also provides a principled way to update node representations from both neighboring nodes and involved attribute categories.
* Based on the node and attribute-category representations, we design a novel cross-correlation layer to effectively model the complex node-to-node and node-to-attribute-category interactions. It highlights the similarity of two anchor nodes from their multi-granularity features and significantly boosts the reconstruction capability of vanilla GNNs.
* We evaluate CONN on node classification and link prediction tasks. Empirical results on benchmark datasets show that CONN performs consistently better than other state-of-the-art embedding methods. We also analyze the robustness and convergence speed of CONN in Section 5.8.

## 2 Related Work

Related works fall into three types, based on whether node attributes or extra knowledge are modeled for network embedding. In this section, we briefly review these fields; please refer to [63, 64, 65, 66, 67] for a comprehensive review.

### _Network embedding_

The first class of approaches can be traced back to traditional graph machine learning, which aims to learn node embeddings while preserving the local manifold structure, such as LPP [68] and Laplacian Eigenmaps [69]. Nevertheless, these methods suffer from scalability challenges on large networks, due to the expensive eigendecomposition of the adjacency matrix, whose time complexity is \(O(n^{3})\), where \(n\) is the number of nodes. Inspired by the breakthrough of distributed representation learning [70] in natural language processing, many scalable network embedding methods [58, 71, 72, 73, 74, 75, 76, 77] have been developed. For example, DeepWalk [58] and Node2vec [73] conduct truncated random walks on the network to generate node sequences, and then feed them into the Skip-Gram algorithm [70] for node embedding. LINE [71] optimizes first- and second-order neighborhood associations to learn representations. GraRep [72] extends LINE to capture high-order neighborhood relationships. SDNE [74] applies deep learning for node embedding, aiming to capture the non-linear graph structure and preserve the global and local structure of the graph. [78] attempts to incorporate the community structure of the graph for representation learning. Some other studies aim to deal with large-scale graphs [79, 80, 81].
Despite their simplicity, the aforementioned methods may be limited in practice, since they cannot exploit side information, such as user profiles, words in posts, and contexts of photos on social media.

### _Attributed Network Embedding_

Different from traditional network embedding, attributed network embedding [82, 83, 84, 85, 86, 87, 88], which aims to learn node representations by leveraging both the network structure and node attributes, has attracted substantial attention in recent years. For instance, ANRL [60] incorporates node attributes into the conditional probability function of Skip-Gram, which predicts the propensity scores between an anchor node and its context, to capture structure correlation. MUSAE [13] advances the Skip-Gram model by considering multi-scale neighborhood relationships based on node attributes. FeatWalk [59] conducts attribute-aware random walks to increase the diversity of the generated walks, so as to boost the Skip-Gram model. PANE [89] is another random-walk based method for scalable training. TADW [83] incorporates node attributes into DeepWalk under a matrix factorization framework. PTE [90] employs different orders of word co-occurrence relationships and node labels to generate predictive text representations. ProGAN [91] preserves the underlying proximities in the hidden space based on a generative adversarial network. Although the aforementioned methods can employ the network structure and node attributes for joint representation learning, they are limited in capturing structure information.

More recently, attributed network embedding based on graph neural networks [23] has attracted increasing attention, owing to its ability to capture structure information, incorporate node attributes, and model non-linear relationships. As a pioneering GNN work, GCN [23] updates node representations by recursively aggregating representations from adjacent neighbors, achieving significant improvement on the semi-supervised classification task. Following it, GAE [14] made the first effort to extend GCN to an autoencoder framework for unsupervised node representation learning, while GraphSage [15] improves the scalability of GCN by sampling and aggregating features from a node's local neighborhood. Their results show that GCN is naturally suitable for attributed networks, since it can directly utilize node attributes as initial node features in the first layer. Motivated by this, many follow-up studies [46, 50, 92] advance model capability by developing more expressive GNN architectures. Recently, some efforts have also been devoted to redefining the training objective of GCN to learn more effective node representations. CAN [62] and [93] advocate jointly reconstructing the network structure and node attributes under a variational autoencoder framework. DGI [94] and GIC [95] learn node representations by maximizing the mutual information between local node representations and a graph summary representation. GCA [96] and MVGRL [97] update node representations by maximizing the agreement between representations of different views generated by data augmentation. However, these methods explore node attributes only implicitly in the message propagation process of the GNN. In this paper, we propose a principled GNN variant for attributed network embedding, which allows node attributes to explicitly guide both the message propagation and the training objective.
### _Attributed Network Embedding with Extra Knowledge_

In addition to plain attributed networks, which consist of node features and a homogeneous graph structure, some studies utilize extra knowledge, such as knowledge graphs (KGs) [98, 99, 100, 101], to boost model performance. For example, PGE [102] studies how to incorporate additional edge features. KGCN [103] and KGAT [104] investigate how to leverage external knowledge graphs (e.g., item knowledge graphs) to improve recommendation quality. However, these methods require additional effort to generate high-quality information sources, such as edge features and knowledge graphs. In contrast, in this work we focus on standard attributed network embedding, modeling homogeneous graphs with pure node attributes, such as continuous and categorical features.

## 3 Problem Statement

We assume that an attributed network \(\mathcal{G}=(\mathcal{V},\mathbf{A},\mathbf{X})\) is given. It has \(n\) vertices collected in the set \(\mathcal{V}\). These \(n\) nodes are connected by an undirected network with adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\): if there is an edge between nodes \(v_{i}\) and \(v_{j}\), then \(\mathbf{A}_{ij}=1\); otherwise, \(\mathbf{A}_{ij}=0\). Besides the network \(\mathbf{A}\), each node \(v\) also has a descriptive feature vector \(\mathbf{x}_{v}\in\mathbb{R}^{m}\), known as node attributes, where \(m\) is the total number of attribute categories. We denote the neighbor set of node \(v\) as \(\mathcal{N}_{v}\), and the neighbor set of attribute category \(\delta_{j}\) as \(\mathcal{N}_{\delta_{j}}\), i.e., \(\mathcal{N}_{\delta_{j}}=\{u|\mathbf{X}_{uj}>0,\text{ for }u\in\mathcal{V}\}\). We employ a diagonal matrix \(\mathbf{D}=\text{diag}(d_{1},\cdots,d_{n})\) to denote the degree matrix, where \(d_{i}=\sum_{j}\mathbf{A}_{ij}\). The main symbols are listed in Table I.

Fig. 1: Instead of inferring the node-to-node links by using node attributes as side features, CONN targets jointly learning the network structure and node attributes in a unified latent space under the graph neural network framework.

To study GNNs in an unsupervised setting, we follow the literature [23, 105] and formally define the problem of attributed network embedding in Definition 1.

_Definition 1_: **Attributed Network Embedding**. Given an attributed network \(\mathcal{G}=(\mathcal{V},\mathbf{A},\mathbf{X})\), the goal is to learn a \(d\)-dimensional continuous vector \(\mathbf{h}_{v}\in\mathbb{R}^{d}\) for each node \(v\in\mathcal{V}\), such that the topological structures in \(\mathbf{A}\) and the side information characterized by \(\mathbf{X}\) are preserved in the embedding representations \(\mathbf{H}\). The performance of this learning task is evaluated by applying \(\mathbf{H}\) to various downstream tasks, such as node classification and link prediction.

To perform attributed network embedding, GNN models [15, 23] learn the embedding representation of each node \(v\) by aggregating the representations of itself and its neighbors \(\mathcal{N}_{v}\) from the previous layer. A typical neighborhood aggregation mechanism [23] is expressed as

\[\mathbf{h}_{v}^{(k)}=\frac{1}{1+d_{v}}\mathbf{h}_{v}^{(k-1)}+\sum_{u\in\mathcal{N}_{v}}\frac{\mathbf{A}_{vu}}{\sqrt{(1+d_{v})(1+d_{u})}}\mathbf{h}_{u}^{(k-1)}, \tag{1}\]

where \(\mathbf{h}_{v}^{(k)}\) denotes the hidden representation of node \(v\) at the \(k^{\text{th}}\) layer of the GNN, and \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\).
The final output is \(\mathbf{h}_{v}^{(K)}\), where \(K\) is the maximum number of layers considered. The top subfigure of Figure 1 illustrates this traditional neighborhood aggregation with a toy example: by stacking two layers, the representations of the two-hop neighbors \(5\) and \(6\) can be accessed by node \(v=1\). To train this model in an unsupervised manner, a widely-adopted [15] graph-based loss function is defined as

\[\mathcal{L}=-\sum\nolimits_{v\in\mathcal{V}}\sum\nolimits_{u\in\mathcal{N}_{v}}\log\frac{\exp(\text{sim}(\mathbf{h}_{v}^{(K)},\mathbf{h}_{u}^{(K)}))}{\sum_{u^{\prime}\in\mathcal{V}}\exp(\text{sim}(\mathbf{h}_{v}^{(K)},\mathbf{h}_{u^{\prime}}^{(K)}))}. \tag{2}\]

Here \(\text{sim}(\cdot,\cdot)\) is a similarity function, e.g., the inner product. The goal is to make the representations of connected nodes similar to each other, while enforcing the representations of unconnected nodes to be distinct. We observe that node attributes are employed only as the initial representations \(\mathbf{h}_{v}^{(0)}\); they have not been further exploited, especially not integrated into the core mechanisms of GNNs.

## 4 Collaborative Graph Convolution

Node attributes are informative, and significantly correlated with and complementary to the network [106, 107, 108, 60, 109]. Since they have not been fully exploited in GNNs, we explore deeply integrating node attributes into the core mechanisms of GNNs, and develop a novel framework named COllaborative graph Neural Network (CONN). Figure 1 depicts the two major components of CONN, i.e., collaborative neighborhood aggregation and the collaborative training objective.

First, we redefine the graph convolutions by considering the node attribute categories \(\mathcal{U}\) as another set of nodes. As illustrated in Figure 1, we augment the original network \(\mathbf{A}\) to a new one, \(\mathbf{P}\), with \(n+m=6+4\) nodes. It preserves all edges in \(\mathbf{A}\), and contains a link from node \(v\) to \(\delta_{j}\) if \(v\) has a non-zero value in its node attributes \(\mathbf{X}_{vj}\). Notice that \(\mathbf{X}_{vj}\) can be either positive or negative, where a negative value means the neighbor has a negative impact on the anchor node. Based on \(\mathbf{P}\), we perform neighborhood aggregation. For example, the first-order neighbors of node \(1\) are augmented from \(\{2,3,4\}\) to \(\{2,3,4,\delta_{1},\delta_{2}\}\), while its second-order neighbors are augmented from \(\{1,5,6\}\) to \(\{1,2,4,5,6,\delta_{1},\delta_{2},\delta_{3}\}\). We observe that our collaborative neighborhood aggregation captures not only node-to-node interactions, but also node-to-attribute-category interactions.

Second, to train our model, we design a collaborative loss. The goal is to collaboratively predict all links in the augmented network \(\mathbf{P}\), which incorporates \(\mathbf{A}\) and \(\mathbf{X}\). Additionally, we design a novel cross-correlation mechanism to model the complex interactions between any pair of nodes in \(\mathbf{P}\) (e.g., \(1\in\mathcal{V}\) and \(\delta_{1}\in\mathcal{U}\) in Figure 1). It employs not only the node representations of the last layer \(\mathbf{h}_{v}^{(K)}\), but also all the remaining ones \(\{\mathbf{h}_{v}^{(k)},\text{ for }k=0,1,\ldots,K-1\}\). We introduce the details in the following subsections.
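For reference before the details, the vanilla aggregation of Eq. (1) and the objective of Eq. (2), the two components CONN modifies, can be sketched as follows (a dense-matrix illustration of ours, not the paper's code):

```python
import torch
import torch.nn.functional as F

def vanilla_propagate(A, H0, K=2):
    """Eq. (1): self-loop-augmented, symmetrically normalized
    aggregation, applied K times."""
    A_hat = A + torch.eye(A.shape[0])
    d = A_hat.sum(dim=1)                  # 1 + d_v per node
    A_norm = A_hat / torch.sqrt(d.unsqueeze(1) * d.unsqueeze(0))
    H = H0
    for _ in range(K):
        H = A_norm @ H
    return H

def graph_loss(H, edges):
    """Eq. (2): softmax over all candidate nodes with inner-product
    similarity; connected pairs (v, u) are pulled together."""
    log_prob = F.log_softmax(H @ H.t(), dim=1)
    v, u = edges                          # edges: (2, E) index tensor
    return -log_prob[v, u].mean()
```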
### 4.1 _Collaborative Neighborhood Aggregation_

Given an attributed graph \(\mathcal{G}=(\mathcal{V},\mathbf{A},\mathbf{X})\), existing GCN architectures define the multi-hop neighbors of node \(v\in\mathcal{V}\) purely from the network structure \(\mathbf{A}\). The first-order neighbors of node \(v\) are \(\mathcal{N}_{v}=\{u|\mathbf{A}_{uv}=1\}\), the second-order neighbors are \(\mathcal{N}_{v}^{(2)}=\{u|\mathbf{A}_{uv}^{2}>0\}\), and so on, where \(\mathbf{A}^{k}\) is the \(k^{\text{th}}\) power of \(\mathbf{A}\). As a result, other than serving as the initial node representations, i.e., \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), node attributes are excluded from the graph convolutions, the core operation of GNNs. As discussed in the Introduction, this can be suboptimal for inactive nodes with few or no neighbors in practice, since they lack sufficient neighborhood information for GNNs to effectively learn their embedding representations [56].

TABLE I: Summary of notations in this paper.

| Notation | Description |
| --- | --- |
| \(\mathbf{A}\in\mathbb{R}^{n\times n}\) | adjacency matrix |
| \(\mathbf{X}\in\mathbb{R}^{n\times m}\) | matrix collecting all node attributes |
| \(\mathcal{V}\) | set collecting all \(n\) nodes |
| \(\mathcal{U}\) | set collecting all \(m\) attribute categories |
| \(\delta_{j}\in\mathcal{U}\) | the \(j^{\text{th}}\) node attribute category |
| \(\mathcal{N}_{v}\) | set collecting the adjacent neighbors of \(v\) |
| \(K\) | number of graph convolutional layers |
| \(\widetilde{\mathbf{P}}\in\mathbb{R}^{(n+m)\times(n+m)}\) | transition matrix of the augmented network |
| \(\mathbf{h}_{v}^{(k)}\in\mathbb{R}^{1\times d}\) | representation of node \(v\) at the \(k^{\text{th}}\) layer |
| \(\mathbf{h}_{\delta_{j}}^{(k)}\in\mathbb{R}^{1\times d}\) | representation of \(\delta_{j}\) at the \(k^{\text{th}}\) layer |

#### 4.1.1 Augmented network

To tackle the aforementioned issue, we propose to leverage the geometrical property of node attributes. We regard each node attribute category \(\delta_{j}\in\mathcal{U}\) as a new node and the node attributes \(\mathbf{X}\) as a weighted bipartite graph. Adding it to the original network \(\mathbf{A}\) yields an augmented network, denoted as \(\mathbf{P}\). Mathematically, its adjacency matrix is written as

\[\mathbf{P}=\left[\begin{array}{cc}\mathbf{A}&\mathbf{X}\\ \mathbf{X}^{\top}&\mathbf{0}\end{array}\right]\in\mathbb{R}^{(n+m)\times(n+m)}. \tag{3}\]

Eq. (3) is applicable to both categorical and continuous node attributes. For continuous features, however, it would generate a dense bipartite graph from \(\mathbf{X}\), which may significantly increase the computation cost of the convolution. To save computation, we empirically simplify the dense bipartite graph into a sparse one by preserving only the top-\(N\) values in each row of \(\mathbf{X}\). An appropriate \(N\) acts as a trade-off between efficiency and accuracy; we analyze its impact in Section 5.8. In summary, we directly use the feature matrix \(\mathbf{X}\) of categorical attributes to construct the augmented graph, while adopting the top-\(N\) values in each row of \(\mathbf{X}\) to generate a sparse graph for continuous features.
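Eq. (3) plus the top-N sparsification translates into a short sketch; dense tensors are used here for readability, whereas a practical implementation would presumably use sparse ones:

```python
import torch

def augmented_adjacency(A, X, top_n=None):
    """Eq. (3): stack the node-to-node graph A (n x n) and the
    node-to-attribute-category bipartite graph X (n x m) into one
    (n + m) x (n + m) adjacency matrix P."""
    n, m = X.shape
    if top_n is not None:
        # Keep only the top-N entries per row for continuous features.
        idx = torch.topk(X, k=top_n, dim=1).indices
        sparse_X = torch.zeros_like(X)
        sparse_X.scatter_(1, idx, X.gather(1, idx))
        X = sparse_X
    top = torch.cat([A, X], dim=1)
    bottom = torch.cat([X.t(), torch.zeros(m, m)], dim=1)
    return torch.cat([top, bottom], dim=0)
```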
#### 4.1.2 Augmented multi-hop neighbors

By using \(\mathbf{P}\), the first-order and second-order neighbors of node \(v\) can be expanded as:

\[\begin{split}\mathcal{N}_{v}&=\{u|\mathbf{A}_{vu}=1\}+\{\delta_{j}|\mathbf{X}_{vj}\neq 0\},\\ \mathcal{N}_{v}^{(2)}&=\{u|\mathbf{A}_{vu}^{2}>0\text{ or }(\mathbf{X}\mathbf{X}^{\top})_{vu}>0\}+\{\delta_{j}|(\mathbf{A}\mathbf{X})_{vj}>0\},\end{split} \tag{4}\]

where \((\mathbf{X}\mathbf{X}^{\top})\in\mathbb{R}^{n\times n}\) collects all node pairs that share at least one node attribute category, i.e., all \(v\rightarrow\delta_{j}\to u\) paths. \((\mathbf{A}\mathbf{X})\in\mathbb{R}^{n\times m}\) implies node-to-attribute-category interactions reflected by \(v\to u\rightarrow\delta_{j}\) paths. Compared with traditional GNNs, we explicitly model the node-to-attribute-category interactions within \(K\) hops. The original node-to-node interactions have been enriched by paths passing through \(\delta_{j}\in\mathcal{U}\). Similarly, the first- and second-order neighbors of \(\delta_{j}\in\mathcal{U}\) can be computed as,

\[\begin{split}\mathcal{N}_{\delta_{j}}&=\{u|\mathbf{X}_{uj}\neq 0\},\\ \mathcal{N}_{\delta_{j}}^{(2)}&=\{u|(\mathbf{A}\mathbf{X})_{uj}>0\}+\{\delta_{i}|(\mathbf{X}^{\top}\mathbf{X})_{ji}\geq 1\},\end{split} \tag{5}\]

where \((\mathbf{X}^{\top}\mathbf{X})\in\mathbb{R}^{m\times m}\) denotes the attribute category correlations estimated by their common nodes. It collects \(\delta_{j}\to u\rightarrow\delta_{i}\) paths. We enable nodes in \(\mathcal{V}\) to propagate messages to attribute categories, and correlations among attribute categories to affect the graph convolution. Based on this, we can not only enrich node interactions by using attribute categories as intermediaries to improve model performance (see Tables III and IV), but also enhance model robustness _w.r.t._ missing edges, as shown in Section 5.8.

Another thing we want to remark is that, different from previous efforts [110, 111] that define a node-to-node similarity network based on feature distance, we directly build a node-to-attribute-category bipartite graph on the feature matrix by using attribute values as edge weights. As analyzed before, a bipartite graph between nodes and attribute categories can not only enrich or rebuild node interactions using attributes as intermediaries, but also preserve the feature information as much as possible.
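The augmented neighbor sets in Eqs. (4)-(5) can be read off directly from sparse matrix products; a minimal, self-contained sketch with hypothetical names is given below.

```python
import scipy.sparse as sp

def first_order_neighbors(A, X, v):
    """Augmented first-order neighbors of node v, Eq. (4)."""
    return set(A[v].indices), set(X[v].indices)   # node part, attribute-category part

def second_order_neighbors(A, X, v):
    """Augmented second-order neighbors of node v, Eq. (4)."""
    AA = (A @ A).tocsr()        # v -> u -> w paths
    XXt = (X @ X.T).tocsr()     # v -> delta_j -> u paths (shared categories)
    AX = (A @ X).tocsr()        # v -> u -> delta_j paths
    return set(AA[v].indices) | set(XXt[v].indices), set(AX[v].indices)

A = sp.random(6, 6, density=0.3, format="csr")
X = sp.random(6, 4, density=0.5, format="csr")
print(first_order_neighbors(A, X, 0))
print(second_order_neighbors(A, X, 0))
```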
#### 4.1.3 Collaborative aggregation

We now illustrate how to learn node representations based on the augmented network \(\mathbf{P}\). Given the recent advances in heterogeneous graph embedding [112, 113, 114], an intuitive solution is to apply these well-established heterogeneous GNNs to learn embeddings from \(\mathbf{P}\) (it consists of two object types (node and attribute-category) and two relations (node-to-node and node-to-attribute-category)). However, our preliminary experiments show that these methods do not perform well on our _synthetic heterogeneous graph_ (see the discussion in Section 5.8.4), since they may over-emphasize the heterogeneity between nodes and their attributes, making the learning process substantially complex. Since standard GNN architectures [62, 50, 66, 92] can already achieve high performance on attributed networks by treating them as homogeneous graphs, we propose to follow this tradition and regard the augmented network \(\mathbf{P}\) as a homogeneous resource with a simple weight schema. We leave attributed network embedding from the heterogeneous GNN perspective as future work.

Specifically, our essential idea is to treat node vertices and attribute-category vertices as identical vertices, but provide a weight hyperparameter \(\alpha\) to control the information diffusion between the network \(\mathbf{A}\) and the node attributes \(\mathbf{X}\). This is because the relative importance of node-to-node interactions and node attributes is not explicitly available. In our setting, the binary values in \(\mathbf{A}\) might not be compatible with the feature values in \(\mathbf{X}\) (e.g., continuous features), so we define a refined transition probability matrix as follows.

\[\widetilde{\mathbf{P}}=\left[\begin{array}{cc}\alpha\widetilde{\mathbf{A}}&(1-\alpha)\widetilde{\mathbf{X}}\\ (1-\alpha)\widetilde{\mathbf{X}}^{\top}&\alpha\mathbf{I}\end{array}\right], \tag{6}\]

where \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{X}}\) denote the normalizations of \((\mathbf{A}+\mathbf{I})\) and \(\mathbf{X}\) after applying the \(\ell_{1}\) norm to normalize each row, respectively. \(\alpha\in[0,1]\) is a trade-off hyper-parameter to impose our inductive bias about the importance of the network structure and the node attributes. Specifically, when \(\alpha=1\), it yields a vanilla graph convolutional operation purely based on the network structure. As \(\alpha\) decreases, node representations become more dependent on node attributes, and node-to-attribute-category interactions and attribute category correlations are gradually incorporated into the graph convolution process.

Based on \(\widetilde{\mathbf{P}}\), standard graph convolutional layers can be directly applied to update node representations. We follow a simple GNN model [54], and update the corresponding embedding matrix with a simple sparse matrix multiplication. We use \(\mathbf{H}^{(k)}\in\mathbb{R}^{(n+m)\times d}\) to denote the intermediate representations of all \((n+m)\) nodes at the \(k^{\text{th}}\) layer. Mathematically, it can be written as,

\[\mathbf{H}^{(k)}\leftarrow\widetilde{\mathbf{P}}^{k}\mathbf{H}^{(0)}. \tag{7}\]

Our initial node representations \(\mathbf{H}^{(0)}\in\mathbb{R}^{(n+m)\times d}\) are not based on \(\mathbf{X}\). Instead, \(\mathbf{H}^{(0)}\) is a trainable embedding matrix that is randomly initialized following common protocols [58, 114]. To further illustrate the correlation between \(\mathbf{h}_{v}^{(k)}\) and \(\{\alpha,\mathbf{A},\mathbf{X}\}\), we rewrite the corresponding update rule as follows,

\[\begin{split}\mathbf{h}_{v}^{(k)}=&\ \alpha\widetilde{\mathbf{A}}_{vv}\mathbf{h}_{v}^{(k-1)}+\alpha\sum\nolimits_{u\in\mathcal{N}_{v}}\widetilde{\mathbf{A}}_{vu}\mathbf{h}_{u}^{(k-1)}\\ &+(1-\alpha)\sum\nolimits_{\delta_{j}\in\mathcal{N}_{v}}\widetilde{\mathbf{X}}_{vj}\mathbf{h}_{\delta_{j}}^{(k-1)}.\end{split} \tag{8}\]

Eq. (8) provides a principled solution to utilize and control node attributes. On the one hand, it can explicitly enrich or replenish node interactions by treating attribute categories as additional nodes. On the other hand, neighborhood information from nodes and attribute categories is selectively combined via the trade-off parameter \(\alpha\).
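A minimal dense-matrix sketch of Eqs. (6)-(7) follows: build the refined transition matrix \(\widetilde{\mathbf{P}}\) and propagate a randomly initialized, trainable embedding table. The \(\ell_{1}\) row normalization shown here is one plausible reading of the text, and all names are illustrative.

```python
import torch

def build_transition(A: torch.Tensor, X: torch.Tensor, alpha: float) -> torch.Tensor:
    """Refined transition matrix of Eq. (6), shape (n+m, n+m)."""
    n, m = X.shape
    A_tilde = A + torch.eye(n)                                          # add self-loops
    A_tilde = A_tilde / A_tilde.sum(1, keepdim=True).clamp(min=1e-12)   # l1 row norm
    X_tilde = X / X.abs().sum(1, keepdim=True).clamp(min=1e-12)         # l1 row norm
    top = torch.cat([alpha * A_tilde, (1 - alpha) * X_tilde], dim=1)
    bottom = torch.cat([(1 - alpha) * X_tilde.t(), alpha * torch.eye(m)], dim=1)
    return torch.cat([top, bottom], dim=0)

n, m, d, K, alpha = 6, 4, 16, 2, 0.2
A = (torch.rand(n, n) < 0.3).float()
X = torch.rand(n, m)
P_tilde = build_transition(A, X, alpha)
H = [torch.nn.Parameter(torch.randn(n + m, d))]     # trainable H^(0)
for k in range(K):
    H.append(P_tilde @ H[-1])                       # H^(k) = P~^k H^(0), Eq. (7)
```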
### _Collaborative Training Objective_

Given the updated representations \(\mathbf{H}^{(k)}\) for \(k=0,1,\ldots,K\), we need an unsupervised objective to train the GNN model. A widely-adopted approach [14, 15, 62] is to employ the node representations at the last layer \(\mathbf{H}^{(K)}\) to reconstruct all edges. As illustrated in Eq. (2), it estimates the probability of two nodes \(v\) and \(u\) being connected based on the similarity between their vectors \(\mathbf{h}_{v}^{(K)}\) and \(\mathbf{h}_{u}^{(K)}\). This approach has been demonstrated to be effective on plain networks, but node attributes are often available in practice, and Eq. (2) cannot directly incorporate them. Another intuitive solution [14] is to employ autoencoders to reconstruct both \(\mathbf{A}\) and \(\mathbf{X}\). It would achieve suboptimal performance because the topological structure \(\mathbf{A}\) is heterogeneous with the node attributes \(\mathbf{X}\).

#### 4.2.1 Cross correlation mechanism

To cope with the aforementioned issues, we propose a novel cross correlation mechanism to model and predict the complex node-to-node and node-to-attribute-category interactions. It has two major steps. First, we remove all weights in \(\mathbf{P}\) and convert it into a binary matrix. We aim to integrate the network and the node attributes into our training objective. To make them compatible with each other, we define a binary adjacency matrix as,

\[\mathbf{P}^{*}=\left[\begin{array}{cc}\mathbf{A}&\mathbf{X}^{*}\\ \mathbf{X}^{*\top}&\mathbf{0}\end{array}\right]\in\mathbb{R}^{(n+m)\times(n+m)}, \tag{9}\]

where \(\mathbf{X}^{*}_{vj}=1\) if \(\mathbf{X}_{vj}>0\). Our goal is to recover the node-to-node and node-to-attribute-category interactions in \(\mathbf{P}^{*}\).

Second, the node representations \(\mathbf{H}^{(k)}\) for \(k=0,1,\ldots,K\) at all layers are learned based on different orders of neighbors. One merit of our model is that we have integrated the node attributes \(\mathbf{X}\) into the collaborative neighborhood aggregation and no longer need to employ \(\mathbf{X}\) as the initial node representations \(\mathbf{H}^{(0)}\). Since we are free to define \(\mathbf{H}^{(0)}\), we can make the dimensions of all \(\{\mathbf{H}^{(k)}\}\) the same. In such a way, we can easily take full advantage of them, and model the complex interaction from node \(v\) to node \(u\), or from node \(v\) to node attribute category \(\delta_{j}\), as,

\[\begin{split}\mathbf{y}_{vu}=\text{MLP}(||_{k=0}^{K}||_{i=0}^{K}\mathbf{h}_{v}^{(k)}\odot\mathbf{h}_{u}^{(i)}),\\ \mathbf{y}_{v\delta_{j}}=\text{MLP}(||_{k=0}^{K}||_{i=0}^{K}\mathbf{h}_{v}^{(k)}\odot\mathbf{h}_{\delta_{j}}^{(i)})\end{split} \tag{10}\]

where \(||\) indicates the concatenation operation. \(\mathbf{h}_{v}^{(k)}\odot\mathbf{h}_{u}^{(i)}\in\mathbb{R}^{1\times d}\) denotes the correlation feature between the (up to) \((k+1)^{\text{th}}\)-order neighborhood of node \(v\) and the (up to) \((i+1)^{\text{th}}\)-order neighborhood of node \(u\). \(\odot\) represents element-wise multiplication. \(\mathbf{y}_{vu}\in\mathbb{R}\) and \(\mathbf{y}_{v\delta_{j}}\in\mathbb{R}\) are the predicted scores for the pairs \((v,u)\) and \((v,\delta_{j})\), respectively. MLP denotes a three-layer multilayer perceptron with a ReLU activation function.
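The following is a minimal PyTorch sketch of the cross correlation scorer in Eq. (10); the exact MLP sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossCorrelation(nn.Module):
    """Concatenate element-wise products across all (K+1)^2 layer pairs, then score."""
    def __init__(self, d: int, K: int):
        super().__init__()
        in_dim = (K + 1) ** 2 * d                    # (K+1)^2 correlation features
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, d), nn.ReLU(),
            nn.Linear(d, d), nn.ReLU(),
            nn.Linear(d, 1),
        )

    def forward(self, hv: list, hu: list) -> torch.Tensor:
        # hv, hu: lists of K+1 representations, each of shape (batch, d)
        feats = [v * u for v in hv for u in hu]      # element-wise products
        return self.mlp(torch.cat(feats, dim=-1)).squeeze(-1)

K, d = 2, 16
score_fn = CrossCorrelation(d, K)
hv = [torch.randn(5, d) for _ in range(K + 1)]
hu = [torch.randn(5, d) for _ in range(K + 1)]
y_vu = score_fn(hv, hu)                              # predicted scores, shape (5,)
```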
It is worth noting that the element-wise operation between node representations has been explored in KGAT [104] and NGCF [115] for recommendation. Our method differs in two ways. First, KGAT and NGCF utilize this technique to facilitate message propagation in each GNN layer (for node-level embeddings), whereas CONN applies it to generate edge representations of two end nodes from different granularities. Second, the proposed cross-correlation mechanism goes beyond a single element-wise operation: its novelty lies in obtaining an informative edge representation of the end nodes by integrating their cross-correlations from different combinations of the GNN layers. Therefore, the proposed cross-correlation layer is different from KGAT and NGCF because it aims to improve the quality of the edge representation, while the referenced methods work on boosting the representation of each node per GNN layer; hence they are orthogonal to our work and can be incorporated as the base GNN backbone.

#### 4.2.2 Analysis

The subfigure at the bottom of Figure 1 illustrates the key idea of the proposed cross correlation mechanism. Given any two entities (e.g., node \(v=1\) and \(\delta_{1}\) in Figure 1), we aim to capture the second-order correlations across all \(K+1\) embedding representations of the two entities. So, in total, Eq. (10) concatenates \((K+1)^{2}\) correlation features (e.g., 9 in Figure 1), which is a small number. It should be noted that the element-wise multiplication [116] highlights the shared patterns within the two input node representations, e.g., \(\mathbf{h}_{v}^{(k)}\) and \(\mathbf{h}_{u}^{(i)}\), while eliminating some inconsistent and noisy information.

#### 4.2.3 Optimization and final representation

In our collaborative training objective, the goal is to reconstruct the node-to-node interactions in \(\mathbf{A}\) by using \(\mathbf{y}_{vu}\), and the node-to-attribute-category interactions in \(\mathbf{X}^{*}\) by using \(\mathbf{y}_{v\delta_{j}}\). The corresponding objective function is defined as follows.

\[\begin{split}\mathcal{L}=&-\sum\nolimits_{v\in\mathcal{V}}\sum\nolimits_{z\in\mathcal{N}_{v}}\log\frac{\exp(\mathbf{y}_{vz})}{\sum\nolimits_{z^{\prime}\in\mathcal{V}\cup\mathcal{U}}\exp(\mathbf{y}_{vz^{\prime}})}\\ &-\sum\nolimits_{\delta_{j}\in\mathcal{U}}\sum\nolimits_{u\in\mathcal{N}_{\delta_{j}}}\log\frac{\exp(\mathbf{y}_{\delta_{j}u})}{\sum\nolimits_{u^{\prime}\in\mathcal{V}\cup\mathcal{U}}\exp(\mathbf{y}_{\delta_{j}u^{\prime}})}.\end{split} \tag{11}\]

Eq. (11) is usually intractable in practice, because the sum in the denominator is computationally prohibitive. Therefore, we employ the negative sampling [15, 71] strategy to accelerate the optimization. After we have trained CONN using the collaborative objective in Eq. (11), we need to define the final embedding representations of the nodes to perform the downstream tasks. For the link prediction task, we directly predict the probability of a link based on \(\mathbf{y}_{vu}\) for a node pair \((v,u)\). For node classification, we adopt the output of the last layer \(\mathbf{H}^{(K)}\) as the node embeddings, which off-the-shelf classification algorithms can directly use for classification.

### _Comparison with Prior Work_

To the best of our knowledge, few efforts have been devoted to leveraging node attributes to redefine the core components of embedding methods for graphs. We roughly divide them into two categories and analyze the differences below.

**Random-walk based approach.** Random-walk based methods focus on conducting truncated random walks to generate node sequences, and then applying the Skip-gram algorithm to learn node representations based on the sequences. This approach is not directly applicable to node attributes. ANRL [60] addresses this issue by modifying the loss function of Skip-gram to depend on node attributes. FeatWalk [59] suggests conducting attribute-aware random walks to inject node attributes into the random walk generation process.
Compared with these methods, our model belongs to the graph neural network approach, which takes advantage of graph convolutional networks to explicitly model the local graph structure.

**Graph neural network based approach.** This line of methods focuses on exploiting graph convolutional networks [23] to model the local subgraph of an anchor node for node representation. It is natural to cope with attributed graphs by using node attributes as the initial node features in the first layer. However, such an approach is rather limited in exploiting node attributes, as they are excluded from the two crucial components of GCNs, i.e., neighborhood message propagation and the training objective. CAN [62] attempts to alleviate this problem by jointly reconstructing node attributes and network structure under a variational autoencoder. In contrast, our model focuses on exploiting node attributes to impact the two building blocks jointly.

Besides, we also want to remark that the utilization of node attributes in our work is different from the skip connection trick [117]. Skip connection aims to skip some higher-order neighbors in deeper layers by looking back at the initial features, whereas our focus is to redefine the neighborhood set of nodes at different orders by regarding attribute categories as "neighbors". Namely, our model introduces additional node-to-attribute-category relations in the message aggregation process of each layer. In summary, skip connection is complementary to our model and can be added to our GNN backbone to avoid over-smoothing.

**Heterogeneous network embedding.** Thanks to our proposal of constructing the augmented network between nodes and attribute categories in Section 4.1.1, we can also regard the resultant embedding task as a special heterogeneous network embedding (HNE) problem [112, 113, 118, 119], where we have two object types (node and attribute category) and two relation types (node-to-node and node-to-attribute-category). Under this principle, state-of-the-art heterogeneous models might be applied. However, we found that such an approach may make the learning task unnecessarily complex, since HNE will over-emphasize the heterogeneity of attributed networks. This may be suboptimal because attributed networks are not truly "heterogeneous" graphs. Moreover, it is nontrivial to train the model well, because no features are available for the second object type (attribute category). Also, it is hard to control the importance of the two relations or node types in a mixed manner, such as via mixed random walks [112]. In contrast, by regarding the augmented network as a homogeneous graph in our setting, the problem itself is substantially simplified and arbitrary standard GNN architectures can be used in a plug-and-play fashion, equipping our proposal with broader applicability and practicability.

To summarize, we propose a new alternative approach for GNN-based attributed network embedding by regarding attribute categories as additional nodes and converting the attributed network into an augmented graph. The augmented graph offers flexible information diffusion between nodes and attribute categories without information loss. To effectively learn node representations from the new graph, we develop two tailored designs: the collaborative aggregation and the cross-correlation mechanism, where the former helps to explicitly control the information propagation between nodes and node attributes in the convolution, while the latter improves the model's reconstruction capacity using multi-granularity features.
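To make the negative-sampling approximation of Eq. (11) concrete, the sketch below contrasts the score of each observed pair against a few randomly drawn entities; `score` stands for any pair scorer over the union of nodes and attribute categories, e.g., the cross correlation module sketched earlier, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def neg_sampling_loss(score, pos_pairs, num_entities, num_neg=5):
    """Approximate Eq. (11): each observed pair vs. random negatives."""
    v, z = pos_pairs[:, 0], pos_pairs[:, 1]
    pos = score(v, z).unsqueeze(1)                            # (B, 1)
    neg_z = torch.randint(0, num_entities, (v.numel(), num_neg))
    neg = score(v.unsqueeze(1).expand_as(neg_z), neg_z)       # (B, num_neg)
    logits = torch.cat([pos, neg], dim=1)                     # positive sits in column 0
    labels = torch.zeros(v.numel(), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Toy scorer: inner product over an embedding table of all n+m entities.
emb = torch.nn.Embedding(10, 16)
score = lambda a, b: (emb(a) * emb(b)).sum(-1)
pos_pairs = torch.tensor([[0, 3], [1, 7], [2, 9]])
loss = neg_sampling_loss(score, pos_pairs, num_entities=10)
loss.backward()
```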
Although our proposal seems general and is applicable to both homogeneous GNNs and heterogeneous GNNs, we empirically found that it is nontrivial to learn node representations based on heterogeneous GNNs (see the discussion in Section 5.8.4). Therefore, we focus on the simple case and leave attributed network embedding based on heterogeneous GNNs as future work.

## 5 Experiments

We analyze the effectiveness of CONN on multiple real-world datasets of various scales and types. Specifically, our evaluation centers around three questions.

* **Q1:** Compared with state-of-the-art embedding methods, can CONN achieve better performance in terms of node classification and link prediction tasks?
* **Q2:** CONN has three crucial components, i.e., the graph mixing convolutional layer, the graph correlation layer, and the collaborative optimization; how much does each component contribute?
* **Q3:** What are the impacts of the hyperparameters, i.e., the trade-off parameter \(\alpha\) and the embedding dimension \(d\), on CONN?

### _Datasets_

We conduct experiments on six publicly available attributed networks of various scales and types. Their statistical information is summarized in Table II.

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline & \(|\mathcal{V}|\) & \# Edge & \(|\mathcal{U}|\) & \# Label \\
\hline Pubmed & \(19,717\) & \(44,338\) & \(500\) & \(3\) \\
ACM & \(48,579\) & \(119,974\) & \(10,000\) & \(9\) \\
BlogCatalog & \(5,196\) & \(171,743\) & \(8,189\) & \(6\) \\
ogbn-arxiv & \(169,343\) & \(1,166,243\) & \(128\) & \(40\) \\
Reddit & \(232,965\) & \(1,666,919\) & \(602\) & \(41\) \\
ogbl-collab & \(235,868\) & \(1,285,465\) & \(128\) & \(-\) \\
\hline
\end{tabular}
\end{table} TABLE II: Statistics of the datasets.

**Pubmed** [120]. It is the biggest benchmark citation network used in [23]. Nodes correspond to documents and edges correspond to citations. Each node has a bag-of-words feature vector derived from the paper abstract. Labels are defined as the academic topics.

**ACM** [121]. It is a large-scale citation network consisting of 48,579 papers published in ACM. Words in the paper abstracts are adopted as node attributes based on the bag-of-words model. Citation links are treated as edges. Each paper is published under a specific area, which serves as the label for classification.

**BlogCatalog** [1]. It is a social network collected from a blog community. Nodes are web users and edges indicate the user interactions. Node attributes denote the keywords of their blogs. Each user could register his/her blogs under six different predefined classes, which are considered as class labels for node classification.

**Reddit** [15]. It is another social network dataset collected from the online discussion forum Reddit. Nodes correspond to Reddit posts and edges represent the co-comment relationships. The posts are preprocessed into 602-dimensional feature vectors via GloVe CommonCrawl word embeddings [122]. Hence, node attributes refer to the 602 latent dimensions. We use the community, or 'subreddit', that a post belongs to as the target label.

**ogbl-collab** [123]. It is a challenging author collaboration network from KDD Cup 2021. Each node is an author and edges indicate the collaboration between authors. All nodes come with 128-dimensional features, obtained by averaging the word embeddings of papers that are published by the authors. It is widely used for the link prediction task.

**ogbn-arxiv** [123]. It is a large-scale paper citation network of arXiv papers from KDD Cup 2021.
Each node is an arXiv paper and an edge indicates that one paper cites another. Each paper is represented by a 128-dimensional feature vector obtained by averaging the embeddings of the words in its title and abstract. The target is to predict 40 subject areas.

### _Baseline Methods_

To validate the effectiveness of CONN, we include four categories of unsupervised baselines as follows. First, to study why we need a tailored framework to incorporate node attributes into GCN architectures, we compare with the vanilla GCN method GAE [14]. Second, to investigate how effective CONN is compared with other tailored solutions, we include two recent works, i.e., CAN [62] and FeatWalk [59]. Third, to have a comprehensive evaluation against state-of-the-art unsupervised models, we include three popular self-supervised learning based GNN methods: DGI [94], GIC [95], and GCA [96]. Note that other non-GCN based embedding methods, i.e., DeepWalk [58], LINE [71], and ANRL [60], are not included, since they are outperformed by CAN and FeatWalk in their experiments [59, 62]. Besides, other classical GNN architectures, i.e., GAT [46], SGC [54], and APPNP [61], are not included for comparison, since they are initially designed for supervised learning while we focus on unsupervised representation learning.

* **GAE** [14]. It learns node embeddings by reconstructing the network structure under the autoencoder approach. Specifically, it employs a graph convolutional network to encode a subgraph into the latent space.
* **ARGE** [124]. It is an adversarially regularized GAE. We do not consider the variational version since the two variants perform quite similarly in most cases.
* **DGI** [94]. It learns node embeddings by maximizing the mutual information between the local patch representation and the global graph representation.
* **GIC** [95]. It extends DGI by leveraging cluster-level node representations for unsupervised representation learning.
* **GCA** [96]. It learns node representations by minimizing the contrastive loss between the original graph and its augmented forms.
* **CAN** [62]. It learns node embeddings by reconstructing both the network structure and the attribute matrix under the variational autoencoder framework.
* **FeatWalk** [59]. It advances vanilla random-walk based methods by introducing an attribute-enhanced random walk strategy, which helps to generate diversified random walks for representation learning.
* **DSGC** [125]. It defines a \(k\)-NN graph based on node features and then uses it as an attribute-aware graph filter for network embedding.

Besides, we also introduce three variants to validate the effectiveness of the core components in CONN.

* **CONN-gcn**. It replaces the graph mixing convolutional layer with a vanilla graph convolutional layer, to verify the effectiveness of modeling mixed neighbors, i.e., nodes and attributes.
* **CONN-inner**. It excludes the graph correlation layer and utilizes the inner product to estimate the similarity between two nodes based on their last-layer representations, similar to [14, 15].
We use it to verify the usefulness of the proposed correlation layer.

* **CONN-ncoll**. It only considers the node-to-node interactions in the objective function. This variant is used to certify the contribution of jointly optimizing the node-to-node and node-to-attribute interactions.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Pubmed} & \multicolumn{2}{c}{ACM} & \multicolumn{2}{c}{BlogCatalog} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{ogbn-arxiv} \\
\cline{2-11} & F1-micro & F1-macro & F1-micro & F1-macro & F1-micro & F1-macro & F1-micro & F1-macro & F1-micro & F1-macro \\
\hline GAE & \(0.825\) & \(0.819\) & \(0.710\) & \(0.611\) & \(0.636\) & \(0.632\) & \(0.624\) & \(0.467\) & \(0.614\) & \(0.401\) \\
ARGE & \(0.837\) & \(0.833\) & \(0.736\) & \(0.674\) & \(0.606\) & \(0.601\) & \(0.645\) & \(0.601\) & \(0.633\) & \(0.423\) \\
DSGC & \(0.844\) & \(0.841\) & \(0.759\) & \(0.705\) & \(0.644\) & \(0.639\) & \(0.693\) & \(0.611\) & \(0.643\) & \(0.426\) \\
DGI & \(0.857\) & \(0.856\) & \(0.768\) & \(0.709\) & \(0.753\) & \(0.750\) & \(0.605\) & \(0.418\) & \(0.646\) & \(0.415\) \\
GIC & \(0.859\) & \(0.857\) & \(0.754\) & \(0.698\) & \(0.773\) & \(0.780\) & \(0.622\) & \(0.433\) & \(0.628\) & \(0.427\) \\
GCA & \(\mathbf{0.864}\) & \(\mathbf{0.860}\) & \(0.757\) & \(0.702\) & \(0.815\) & \(0.836\) & \(0.752\) & \(0.639\) & \(0.655\) & \(0.431\) \\
FeatWalk & \(0.843\) & \(0.843\) & \(0.760\) & \(0.703\) & \(0.935\) & \(0.934\) & \(0.663\) & \(0.503\) & \(0.637\) & \(0.424\) \\
CAN & \(0.841\) & \(0.835\) & \(0.721\) & \(0.657\) & \(0.652\) & \(0.648\) & \(0.820\) & \(0.726\) & \(0.650\) & \(0.428\) \\
\hline CONN & \(\mathbf{0.866}\) & \(\mathbf{0.862}\) & \(\mathbf{0.775}\) & \(\mathbf{0.723}\) & \(\mathbf{0.945}\) & \(\mathbf{0.944}\) & \(\mathbf{0.913}\) & \(\mathbf{0.879}\) & \(\mathbf{0.674}\) & \(\mathbf{0.456}\) \\
\hline \hline
\end{tabular}
\end{table} TABLE III: Node classification performance.

### _Experimental Settings_

We follow the common protocol [14, 62] to evaluate the performance of CONN. The effectiveness of the learned latent representations is evaluated on two popular downstream tasks, i.e., link prediction and node classification. For the link prediction task, we randomly split the edges in the network into 85%, 10%, and 5% to form the training, testing, and validation sets, similar to [14]. The link prediction task aims to estimate whether a missing edge in the network should be connected or not, based on the embedding representations. The performance is measured by two standard metrics, i.e., the area under the ROC curve (AUC) and the average precision (AP) score. The node classification task aims to classify a new instance into one or multiple categories, based on the obtained node representations and a trained classifier. Specifically, we apply 5-fold cross-validation on all datasets to construct the training and test sets. To perform classification, we build an SVM classifier based on the scikit-learn package and train the classifier on the nodes in the training group and the corresponding labels. Then we apply the learned classifier to predict the labels of the instances in the test groups. The results averaged over the five folds are reported. We use the officially released codes of the baselines for experiments and use the validation set to tune their parameters. For our method, we train CONN for 100 epochs using the Adam optimizer with learning rate 0.01 and early stopping with a patience of 20 epochs. Unless otherwise specified, \(d\) is set to 128, \(K=2\), and \(\alpha\) equals 0.2 and 0.8 for the node classification and link prediction tasks, respectively. For continuous datasets (Reddit, ogbn-arxiv, and ogbl-collab), we use the top-\(50\) values in each row of \(\mathbf{X}\) to construct the bipartite graph by default. The source code of CONN is available at [https://github.com/Qiaoyut/CONN](https://github.com/Qiaoyut/CONN).
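The classification protocol above can be reproduced with a few lines of scikit-learn; the arrays below are random stand-ins for the learned embeddings \(\mathbf{H}^{(K)}\) and node labels.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.svm import LinearSVC

emb = np.random.randn(1000, 128)          # stand-in for learned node embeddings
labels = np.random.randint(0, 6, 1000)    # stand-in for node class labels

scores = cross_validate(LinearSVC(), emb, labels, cv=5,
                        scoring=["f1_micro", "f1_macro"])
print(scores["test_f1_micro"].mean(), scores["test_f1_macro"].mean())
```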
### _Node Classification_

We start by evaluating the performance of CONN on the node classification task (**Q1**). Table III summarizes the results on five datasets in terms of the micro-average and macro-average scores. From Table III, we observe that CONN performs consistently better than the other baselines across the two evaluation metrics on datasets with categorical attributes (Pubmed, ACM, and BlogCatalog) and continuous feature embeddings (Reddit and ogbn-arxiv). This demonstrates the effectiveness of CONN. To be specific, compared with the vanilla GCN variants, CONN improves over GAE by 48.6% and 9.8% on the BlogCatalog and ogbn-arxiv datasets in terms of the micro-average score, respectively, and improves over ARGE by 46.7% and 4.8% on the BlogCatalog and ogbn-arxiv datasets under the macro-average score, respectively. This improvement validates the necessity of designing a tailored GCN architecture for attributed networks. CAN is also designed to model node attributes, but it loses to CONN significantly in all five scenarios. The major difference between them is that CAN aims to jointly reconstruct node attributes and the network under the autoencoder framework, while CONN revises the GCN architecture by explicitly leveraging node attributes to guide message propagation. This comparison certifies that a tailored GCN solution for attributed networks is more promising and effective. FeatWalk incorporates node attributes into random-walk based models, but it is outperformed by CONN in all cases. Taking the Reddit dataset as an example, CONN improves over FeatWalk by more than 37.7%. This is reasonable since our model can leverage structure information for node embedding while random-walk based methods fail to do so. Although DSGC also constructs an attribute-aware graph for network embedding, it loses to CONN in almost all cases. This is because DSGC requires constructing a \(k\)-NN graph based on node features, which may discard a lot of important attribute information. Another promising observation is that CONN performs significantly better than the state-of-the-art self-supervised competitors (DGI, GIC, and GCA) in general. This result indicates the effectiveness of reconstructing the original network structure for unsupervised representation learning. Besides, the performance gap between CONN and FeatWalk increases on the continuous-value datasets (Reddit and ogbn-arxiv). This is mainly because FeatWalk reduces to a random choice among all attribute categories as the sampling graph is fully connected, which damages the quality of the random walks.

### _Link Prediction_

We now evaluate the performance of CONN on the link prediction task (**Q1**). Since DGI, GIC, GCA, and FeatWalk are not originally tested on this task, we delete the test edges from the adjacency matrix and use the resulting adjacency matrix to train them. Then, we estimate the similarity scores for test edges based on the inner product of the corresponding node representations, similar to [62]. Table IV reports the results on three medium-sized (Pubmed, ACM, and BlogCatalog) and two large-scale (Reddit and ogbl-collab) datasets in terms of the AUC and AP scores.
From the table, we can see that CONN performs significantly better than the other baselines. Specifically, it improves over GAE, ARGE, DSGC, DGI, GIC, GCA, FeatWalk, and CAN by 11.4%, 21.5%, 16.5%, 25.2%, 22.7%, 11.1%, 48.8%, and 9.7% on BlogCatalog in terms of the AUC value, respectively. CAN loses to FeatWalk on the node classification task in most cases, but outperforms FeatWalk in predicting missing edges. This indicates the importance of capturing structure information for link prediction. Both CAN and CONN aim to leverage node attributes for node embedding, but CONN outperforms CAN by a large margin in all cases. The main difference is that CAN focuses on using node attributes to enrich the objective function, with the hope of enhancing the optimization of the GCN encoder. In contrast, our model directly merges useful node attributes into the GCN building blocks, such that a more powerful GCN encoder can be explicitly achieved. Although DGI, GIC, and GCA are trained with advanced contrastive loss functions, our model performs substantially better than them by a wide margin. These results indicate the insufficiency of existing GCN-based architectures in exploiting useful node attributes. Based on these observations, we believe that our proposed framework is more suitable for the link prediction task.

### _Ablation Study_

We now investigate the second question (**Q2**), i.e., how much do CONN's three major components, i.e., the graph mixing convolutional layer, the graph correlation layer, and the collaborative optimization, contribute? The three variants, i.e., CONN-gcn, CONN-inner, and CONN-ncoll, introduced earlier, are used for this ablation study. Table V records the node classification performance on five datasets in terms of the micro- and macro-average scores. Based on Table V, we have three major observations. First, without node attributes to guide message propagation in the graph convolution process, the performance of CONN-gcn decreases. This validates the effectiveness of explicitly incorporating node attributes into the GCN architecture. Second, without the graph correlation layer, CONN-inner loses to CONN by a large margin. For instance, CONN achieves a 23.5% improvement over CONN-inner on Pubmed in terms of the micro-average score. The major difference between CONN-inner and CONN is that the former adopts a simple inner product to estimate edge similarity, while CONN devises a deep correlation layer, which is capable of capturing the complex correlations between two nodes. This comparison verifies the effectiveness of the proposed graph correlation layer. Third, CONN performs slightly better than CONN-ncoll across the five datasets.
It indicates that jointly optimizing the node-to-node and node-to-attribute interactions is beneficial. Given that CONN outperforms all three variants, it verifies that we propose a principled framework that reinforces the reciprocal effects among the three components.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline & \multicolumn{2}{c}{Pubmed} & \multicolumn{2}{c}{ACM} & \multicolumn{2}{c}{BlogCatalog} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{ogbl-collab} \\
\cline{2-11} & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP \\
\hline GAE & 0.920 & 0.911 & 0.957 & 0.956 & 0.824 & 0.822 & 0.578 & 0.565 & 0.821 & 0.737 \\
ARGE & 0.968 & 0.971 & 0.964 & 0.969 & 0.755 & 0.723 & 0.603 & 0.597 & 0.847 & 0.814 \\
DSGC & 0.964 & 0.970 & 0.961 & 0.965 & 0.788 & 0.774 & 0.593 & 0.586 & 0.842 & 0.808 \\
DGI & 0.942 & 0.927 & 0.582 & 0.644 & 0.733 & 0.737 & 0.594 & 0.586 & 0.818 & 0.728 \\
GIC & 0.937 & 0.935 & 0.674 & 0.775 & 0.748 & 0.745 & 0.693 & 0.677 & 0.892 & 0.804 \\
GCA & 0.955 & 0.956 & 0.756 & 0.820 & 0.826 & 0.808 & 0.717 & 0.741 & 0.886 & 0.833 \\
FeatWalk & 0.940 & 0.941 & 0.963 & 0.963 & 0.617 & 0.617 & 0.852 & 0.870 & 0.828 & 0.797 \\
CAN & 0.980 & 0.977 & 0.896 & 0.899 & 0.837 & 0.837 & 0.909 & 0.903 & 0.901 & 0.886 \\
CONN & **0.994** & **0.993** & **0.986** & **0.987** & **0.918** & **0.906** & **0.986** & **0.985** & **0.930** & **0.912** \\
\hline \hline
\end{tabular}
\end{table} TABLE IV: Link prediction results.

\begin{table}
\begin{tabular}{l l c c c c}
\hline \hline & & CONN-gcn & CONN-inner & CONN-ncoll & CONN \\
\hline \multirow{5}{*}{F1-micro} & Pubmed & \(0.795\) & \(0.701\) & \(0.851\) & **0.866** \\
& ACM & \(0.704\) & \(0.643\) & \(0.756\) & **0.775** \\
& BlogCatalog & \(0.745\) & \(0.794\) & \(0.900\) & **0.945** \\
& Reddit & \(0.891\) & \(0.571\) & \(0.891\) & **0.913** \\
& ogbn-arxiv & \(0.614\) & \(0.553\) & \(0.635\) & **0.674** \\
\hline \multirow{5}{*}{F1-macro} & Pubmed & \(0.795\) & \(0.708\) & \(0.846\) & **0.862** \\
& ACM & \(0.633\) & \(0.551\) & \(0.694\) & **0.723** \\
& BlogCatalog & \(0.740\) & \(0.787\) & \(0.898\) & **0.944** \\
& Reddit & \(0.849\) & \(0.464\) & \(0.844\) & **0.879** \\
& ogbn-arxiv & \(0.408\) & \(0.388\) & \(0.415\) & **0.456** \\
\hline \hline
\end{tabular}
\end{table} TABLE V: Ablation study of CONN on the node classification task.

Fig. 2: Hyper-parameter analysis of CONN.

### _Parameter Sensitivity Analysis_

We now study the impact of the trade-off parameter \(\alpha\), the number of layers \(K\), and the embedding dimension \(d\) on CONN (**Q3**). \(\alpha\) controls the importance of the node-to-node and node-to-attribute interactions for message passing. We plot the performance of CONN when \(\alpha\) varies from 0 to 1 with step size 0.1 in Figure 2-a. From the results, we observe that the performance of CONN increases as \(\alpha\) increases from 0.0 to 0.2 on the five datasets. CONN obtains the best results when \(\alpha=0.2\) in general. Notice that, when \(\alpha=0\), CONN only utilizes the node attribute interactions for graph convolution, and the network structure is excluded. When \(\alpha=1\), node-to-attribute interactions are not considered, and CONN reduces to the vanilla GCN, except that both node-attribute and node interactions are still jointly optimized. \(K\) is the number of layers of the GCN backbone. A large \(K\) means that high-order neighbors are included. Since FeatWalk has no such parameter, we omit it from this comparison.
We vary \(K\) from 2 to 8, and the performance on BlogCatalog is shown in Figure 2-b. From the results, we observe that our model CONN consistently outperforms all baselines across different numbers of layers. Another interesting observation is that, when \(K\) varies from 2 to 8, the performance of CAN, GAE, GIC, GCA, and DGI decreases while CONN remains stable. This validates the superiority of our model in capturing high-order dependencies. It is worth noting that, although CONN achieves relatively stable results compared with standard GNN models in Figure 2-b when \(K\) is small (e.g., up to 8), we do observe an obvious performance drop when \(K\) is larger (e.g., 15). Therefore, by modeling attribute-aware relationships, our method can, to some extent, alleviate the over-smoothing problem when the GNN model is relatively shallow. Still, tailored efforts are required to thoroughly tackle the over-smoothing issue, as done in [52]. In CONN, the dimension \(d\) determines the output latent space for the downstream applications, i.e., node classification and link prediction. We search it over \(\{16,32,64,128,256\}\), and the performance of all methods is depicted in Figure 2-c. We observe that different methods have different optimal \(d\), and our model CONN achieves satisfactory performance when \(d=128\). Similar observations are made on the other datasets. To make the comparison fair, we tune the best embedding dimension for all methods and report their best results.

### _Further Analysis_

With the performance comparison against SOTA methods completed, we delve into our proposal for more insights.

#### 5.8.1 **Constructing a sparse bipartite graph from continuous features is a good trade-off.**

The first question we would like to answer is: what is the impact of the top-\(N\) values for graphs with continuous features? Figure 3 shows the results of our model _w.r.t._ different \(N\) values on two graphs (Reddit and ogbl-collab) with embedding vectors (i.e., continuous values) as node attributes. From the figures, we can see that the performance of CONN increases as \(N\) increases, and it achieves good results when \(N\) equals the dimension of the node attributes, i.e., 128 and 602 for ogbl-collab and Reddit, respectively. This observation sheds light on the following insights. (i) The top-\(N\) strategy is a good way to handle continuous feature vectors in our model, since \(N=50\) can already achieve satisfactory results in both cases. (ii) Furthermore, it indicates that our model can be used on high-dimensional data, since we can first adopt classical dimensionality reduction techniques to reduce the dimension. To check the scalability of our model on high-dimensional data, we first adopt DeepWalk to obtain 128-d embedding vectors for the nodes in ACM and BlogCatalog, and then use the resulting node features as input to train our model. We observe comparable results on the two downstream tasks. Specifically, the AUC results for ACM and BlogCatalog on link prediction are 0.987 and 0.916, while the micro-average results on node classification are 0.769 and 0.941.

#### 5.8.2 **Modeling node attributes as a bipartite graph enhances the robustness of GNNs for representation learning.**

Next, we explore whether modeling node attributes as an augmented graph can improve the model's robustness to noise, i.e., missing links. We analyze the robustness of our model and the vanilla GNN competitor (GAE) towards edge perturbation, i.e., randomly masking edges with ratios ranging from 0.0 to 0.9 with step size 0.1. We show the results in terms of link prediction in Figure 4.
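The edge-perturbation protocol amounts to dropping a random fraction of undirected edges before training; a minimal sketch with illustrative names is shown below.

```python
import numpy as np
import scipy.sparse as sp

def mask_edges(A: sp.csr_matrix, ratio: float, seed: int = 0) -> sp.csr_matrix:
    """Randomly drop `ratio` of the edges of a symmetric adjacency matrix."""
    rng = np.random.default_rng(seed)
    A = sp.triu(A, k=1).tocoo()                  # work on the upper triangle
    keep = rng.random(A.nnz) >= ratio            # keep each edge with prob 1 - ratio
    A_masked = sp.coo_matrix((A.data[keep], (A.row[keep], A.col[keep])),
                             shape=A.shape)
    return (A_masked + A_masked.T).tocsr()       # restore symmetry

A = sp.random(100, 100, density=0.05, format="csr")
A = ((A + A.T) > 0).astype(float).tocsr()        # symmetrize the toy graph
A_10 = mask_edges(A, ratio=0.1)                  # 10% of edges removed
```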
Similar sensitivity tendencies are observed on the other datasets. The results in the two figures show that our model performs relatively stably when the masking ratio is less than 50% across the two datasets, while the performance of GAE drops in general as the ratio increases. We believe this stability is attributed to the proposed augmented bipartite graph, because it can replenish missing connections between nodes by using attribute categories as intermediaries.

#### 5.8.3 **Jointly reconstructing node attributes and node interactions does not impact the convergence speed.**

Moreover, we would like to examine the empirical convergence of our model. We show the training curves of CONN under different numbers of GNN layers on two representative datasets (BlogCatalog and Reddit for categorical and continuous attributes, respectively) in Figure 5.

Fig. 3: CONN performance _w.r.t._ different top-\(N\) values.

We can observe that the training loss decreases very fast in the first 10 epochs, and our model tends to converge within 50 epochs in general.

#### 5.8.4 **Regarding the augmented network as a heterogeneous graph is suboptimal.**

Finally, we investigate the generalization of our proposal in terms of heterogeneous GNNs. In particular, we want to explore whether well-established heterogeneous GNN efforts can be directly applied to attributed network embedding by using our proposed augmented network. To this end, we select two popular unsupervised heterogeneous GNN embedding methods as backbones: HetGNN [112] and MAGNN [126]. For a fair comparison, we initialize learnable embeddings for nodes and attribute categories similar to our model. Table VI reports the results on both the node classification and link prediction tasks. We observe a clear performance gap between our CONN and the two heterogeneous GNN methods (HetGNN and MAGNN) in all evaluation scenarios. Jointly considering the results in Tables III and IV, the performance of the two heterogeneous variants ranks in the middle among all baselines. A possible explanation is that regarding the augmented network as a heterogeneous graph may over-emphasize the heterogeneity between node interactions and node attributes. It is hard to control the importance between them via either random walks [112] or meta-paths [126]. These comparisons support our motivation to treat the augmented network as a homogeneous graph. We believe nontrivial efforts are needed to unleash the power of heterogeneous GNNs on attributed network embedding.

## 6 Conclusion

In this paper, we study the problem of unsupervised node representation learning on attributed graphs under graph neural networks (GNNs). Existing GNN efforts mainly focus on exploiting topological structures, while only using node attributes as the initial node representations in the first layer. Here, we argue that such a GNN architecture is suboptimal for modeling real-world attributed graphs, since node attributes are totally excluded from the key factors of GNNs, i.e., the message aggregation mechanism and the training objectives. To tackle this problem, we propose a novel collaborative graph neural network termed CONN. It allows node attributes to determine the message-passing process for neighborhood aggregation and enables the node-to-node and node-to-attribute-category interactions to be jointly recovered.
Empirical results on node classification and link prediction tasks over social and citation graphs demonstrate the superiority of CONN against state-of-the-art embedding methods. Our future work is to explore its applicability to dynamic graphs and to further improve the robustness of our model by integrating adversarial training. Moreover, we are interested in exploring similar ideas in recommendation scenarios [127], such as sequential recommendation [128, 129] and session recommendation [130, 131].

## 7 Acknowledgement

We thank the anonymous reviewers for their feedback. The work is, in part, supported by NSF (IIS-2224843 and IIS-1849085).
2306.09217
Map Reconstruction of radio observations with Conditional Invertible Neural Networks
In radio astronomy, the challenge of reconstructing a sky map from time ordered data (TOD) is known as an inverse problem. Standard map-making techniques and gridding algorithms are commonly employed to address this problem, each offering its own benefits such as producing minimum-variance maps. However, these approaches also carry limitations such as computational inefficiency and numerical instability in map-making and the inability to remove beam effects in grid-based methods. To overcome these challenges, this study proposes a novel solution through the use of the conditional invertible neural network (cINN) for efficient sky map reconstruction. With the aid of forward modeling, where the simulated TODs are generated from a given sky model with a specific observation, the trained neural network can produce accurate reconstructed sky maps. Using the five-hundred-meter aperture spherical radio telescope (FAST) as an example, cINN demonstrates remarkable performance in map reconstruction from simulated TODs, achieving a mean squared error of $2.29\pm 2.14 \times 10^{-4}~\rm K^2$, a structural similarity index of $0.968\pm0.002$, and a peak signal-to-noise ratio of $26.13\pm5.22$ at the $1\sigma$ level. Furthermore, by sampling in the latent space of cINN, the reconstruction errors for each pixel can be accurately quantified.
Haolin Zhang, Shifan Zuo, Le Zhang
2023-06-15T15:52:56Z
http://arxiv.org/abs/2306.09217v1
# Map Reconstruction of radio observations with Conditional Invertible Neural Networks

###### Abstract

In radio astronomy, the challenge of reconstructing a sky map from time ordered data (TOD) is known as an inverse problem. Standard map-making techniques and gridding algorithms are commonly employed to address this problem, each offering its own benefits such as producing minimum-variance maps. However, these approaches also carry limitations such as computational inefficiency and numerical instability in map-making and the inability to remove beam effects in grid-based methods. To overcome these challenges, this study proposes a novel solution through the use of the conditional invertible neural network (cINN) for efficient sky map reconstruction. With the aid of forward modeling, where the simulated TODs are generated from a given sky model with a specific observation, the trained neural network can produce accurate reconstructed sky maps. Using the five-hundred-meter aperture spherical radio telescope (FAST) as an example, cINN demonstrates remarkable performance in map reconstruction from simulated TODs, achieving a mean squared error of \(2.29\pm 2.14\times 10^{-4}\)\(\mathrm{K}^{2}\), a structural similarity index of \(0.968\pm 0.002\), and a peak signal-to-noise ratio of \(26.13\pm 5.22\) at the 1\(\sigma\) level. Furthermore, by sampling in the latent space of cINN, the reconstruction errors for each pixel can be accurately quantified.

methods: data analysis - methods: numerical - techniques: imaging spec

## 1 Introduction

Map-making is a critical step in radio astronomy. Before any scientific analysis, it is important to first produce pixelized maps of the observed radio sky from time-ordered data (TOD), with as much accuracy as possible. Mathematically, the reconstruction of the sky map from TOD is an ill-posed inverse problem because of observational effects such as scan strategies, noise, the complex geometry of the field, and data excision due to RFI flagging, etc. There are several map-making methods, with the most common being maximum likelihood (Tegmark 1997), which provides the optimal, linear solution. Usually, for solving the linear systems in map-making, one uses direct or iterative methods to obtain the solution. Direct methods are based on brute-force matrix inversion, which requires constructing and inverting the full dense matrix, and is computationally impractical with current computational power when the number of pixels in the sky map exceeds a few million. If the system of equations is singular, the matrix cannot be inverted, making the situation even worse. In contrast, iterative optimization methods, such as the commonly used method of preconditioned conjugate gradients, only require a small memory footprint. However, the number of iterations required to converge to the solution can become extremely large, and thus iterative methods can suffer from poor convergence rates. Meanwhile, for ill-posed problems, the derived solution may depend on the choice of the stopping criterion of the iterations. Additionally, fast gridding methods, such as Cygrid (Winkel et al. 2016) and HCGrid (Wang et al. 2021), which utilize multiple CPU cores or CPU-GPU hybrid platforms, provide an alternative way for map-making. Although these methods try to make the most of the hardware, they cannot provide an uncertainty estimate for the reconstructed map.
Over recent years, machine learning algorithms, especially those based on deep neural networks, have been widely used in cosmological and astronomical studies and have achieved great success in overcoming many tasks that were previously difficult to accomplish with traditional methods, such as Lochner et al. (2016); Ravanbakhsh et al. (2017); Schmelzle et al. (2017); Mehta et al. (2019); La Plante & Ntampaka (2018); Caldeira et al. (2019); Modi et al. (2018); He et al. (2019); Dreissigacker et al. (2019); Pfeffer et al. (2019); Troster et al. (2019); Zhang et al. (2019); Makinen et al. (2021); Mao et al. (2020); Ni et al. (2021); Wu et al. (2021); Villaescusa-Navarro et al. (2021); Zhao et al. (2022a,b); Jeffrey et al. (2022); Shallue & Eisenstein (2022); Wu et al. (2023). Various neural network methods have been proposed to analyze inverse problems (Ksoll et al. 2020; Haldemann et al. 2022; Bister et al. 2022; Kang et al. 2022), and these new data-driven methods demonstrate impressive results. In this study, we will use the multiscale conditional invertible neural network (cINN) (Ardizzone et al. 2019) to solve the ill-posed inverse problem of map-making. Using a FAST-like observation (Nan et al. 2011; Li et al. 2013; Li & Pan 2016; Li et al. 2018), we validate the effectiveness of the cINN in map-making and demonstrate that such a network provides an alternative way to reconstruct the sky map from TODs with high fidelity.

The invertible neural network (INN) was first proposed by Ardizzone et al. (2018), and was soon improved into the conditional invertible neural network (cINN) (Ardizzone et al. 2019). In order to maintain the unique characteristics of INNs, the architecture prohibits the use of some standard components of neural networks, such as batch normalization and pooling layers. Avoiding some fundamental limitations of the INN, the cINN combines the INN model with an unconstrained feed-forward network, efficiently preprocessing the conditioning image into the most informative features. Also, the cINN allows for the joint optimization of all its parameters using a stable training procedure based on maximum likelihood. This is a new class of neural networks suitable for solving inverse problems. The cINN originally focuses on learning the well-posed forward process (e.g., mapping the true radio sky to TODs), and uses additional latent output variables to describe the information lost in the forward process. Due to the invertibility of the cINN, the corresponding inverse process is implicitly learned for free by the model. In the specific map-making problem, given a specific observation and the distribution of the latent variables (usually assumed to be Gaussian), the inverse pass of the cINN provides a full posterior distribution over the parameter space.

This study presents a new solution for efficiently reconstructing sky maps by using a conditional invertible neural network (cINN). The simulated TODs are generated from a given sky model through forward modeling, which involves drift-scan observations using the FAST configuration, including 19 beams and a frequency range of 1100-1120 MHz. The trained neural network can then accurately produce reconstructed sky maps, showing good performance in reconstructing maps from simulated TODs. Moreover, the reconstruction errors for each pixel can be precisely quantified by sampling in the latent space of the cINN.

In Sect. 2, we briefly introduce the map-making equations, describe existing methods of map reconstruction, and give a detailed description of the cINN.
In Sect. 3, we give a description of the simulation and our training data. In Sect. 4, we present our results for the cINN and demonstrate its good performance in map reconstruction. Finally, we list our conclusions in Sect. 5.

## 2 Methods

### Map-making for single-dish radio telescopes

Map-making is a crucial step in radio observations, bridging the gap between the collected time-ordered data (TOD) and scientific analysis. For a single-dish radio telescope with a single beam, the map-making input is a series of calibrated TODs, represented by a single time-domain vector \(y\) of size \(N_{t}\) containing all antenna measurements. Each measurement at time \(t\), \(y_{t}\), is a sum of the sky signal in pixel \(p\), \(x_{p}\), and measurement noise, \(n_{t}\), with the beam convolution already applied to the sky signal. The pointing matrix, a sparse and tall (\(N_{t}\times N_{p}\)) matrix, encodes how the TOD at each time \(t\) responds to each pixel \(p\) in the sky map. The TOD is modeled as:

\[y_{t}=\sum_{p}A_{tp}x_{p}+n_{t}, \tag{1}\]

or in the matrix-vector form as,

\[y=Ax+n, \tag{2}\]

where \(x\) represents the sky map to be reconstructed. The structure of the pointing matrix \(A\) reflects the adopted scan strategy. For observations that involve multiple beams and frequencies, the aforementioned basic model can be expanded as

\[y_{t}^{i}(\nu)=\sum_{p}A_{tp}^{i}x_{p}(\nu)+n_{t}^{i}(\nu)\,, \tag{3}\]

where \(\nu\) represents the frequency being observed and the superscript \(i\) represents the index of the beam being used. In the same form as Eq. 2, we can also write the matrix form of the above equation, except that here the matrix and vectors are redefined as \(A=[A^{1},A^{2},\cdots],y=[y^{1},y^{2},\cdots],n=[n^{1},n^{2},\cdots]\). Solving Eq. 2 is equivalent to solving a system of linear equations with a large number of parameters, which is a typical linear inverse problem.

Tegmark (1997) has proposed a variety of map-making methods, each with its own desired properties. The most common one is the optimal, linear solution, \(\hat{x}=\left(A^{T}WA\right)^{-1}A^{T}Wy\), which is an unbiased estimator for a positive-definite weighting matrix \(W\). In particular, assuming Gaussian-distributed noise with zero mean and covariance \(N\) in the time domain, and choosing the weighting as \(W=N^{-1}\), the estimator then becomes the standard generalized least-squares solution for the map with minimum variance,

\[\hat{x}=H^{-1}b\,,\quad\mathrm{with}\;\;H\equiv A^{T}N^{-1}A\,,\mathrm{and}\;\;b\equiv A^{T}N^{-1}y\,, \tag{4}\]

where the noise covariance matrix of the map is \(\mathcal{N}=\left(A^{T}N^{-1}A\right)^{-1}\). Since \(\left(A^{T}N^{-1}A\right)\) is generally a dense matrix, a direct brute-force inversion typically costs \(\mathcal{O}\!\left(N_{p}^{3}\right)\) flops, which is computationally intractable if \(N_{p}\sim 10^{6}\) and makes the map-making problem particularly challenging. For the noise, since \(N\) is sparse in the frequency domain, we need to perform each matrix multiplication in a sparse basis, transforming between the frequency and time domains by using the fast Fourier transform. Furthermore, in practice, the exact matrix inverse may not exist if the matrix is ill-conditioned or rank-deficient, leading to numerically unstable solutions. Therefore, one has to use the Moore-Penrose pseudoinverse or some regularization-based methods (Cheng & Hofmann, 2011) to approximate the inverse of non-invertible matrices.
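As a toy illustration of the estimator in Eq. (4), the following NumPy sketch solves a tiny map-making problem with white noise; a real pipeline would use sparse pointing matrices and Fourier-domain noise weighting, and the setup here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_p, sigma = 200, 20, 0.1
A = np.zeros((n_t, n_p))
A[np.arange(n_t), rng.integers(0, n_p, n_t)] = 1.0   # one pixel hit per sample
x_true = rng.normal(size=n_p)                        # true sky map
y = A @ x_true + sigma * rng.normal(size=n_t)        # TOD, Eq. (2)

Ninv = np.eye(n_t) / sigma**2                        # N^{-1} for white noise
H = A.T @ Ninv @ A                                   # H = A^T N^{-1} A
b = A.T @ Ninv @ y                                   # b = A^T N^{-1} y
x_hat = np.linalg.lstsq(H, b, rcond=None)[0]         # pseudoinverse handles rank deficiency
```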
More practically, iterative methods have offered an efficient alternative for solving the linear system of map-making, which involve the class of Krylov methods, such as the preconditioned conjugate gradient algorithm. Explicit inversion of the linear system matrix is avoided by iteratively obtaining successively improved solutions. The computational complexity of such methods is at most \(\mathcal{O}\!\left(N_{p}^{2}\right)\). However, a large condition number of the system matrix (the ratio of the largest to the smallest eigenvalue of a matrix) can significantly decrease the convergence rate of iterative solvers, leading to unacceptable time requirements for solutions with the required accuracy. Thus one has to carefully choose a preconditioner matrix for the linear system so that the condition number of the preconditioned system becomes much smaller.

In practice, the matrix is usually positive semi-definite, because of the incomplete coverage of pixels in the observed sky area. This incompleteness generally originates from the choice of scanning strategy and the RFI subtraction in data preprocessing. Therefore, there is a null space such that \(Hx=0\), leading to a degeneracy in the estimated sky map \(\hat{x}\) (e.g., Cantalupo et al. (2010); Puglisi et al. (2018) and references therein). When applying iterative methods to such a semi-definite linear system, the iterates initially converge towards the optimal solution but then start to deviate from it; handling this degeneracy properly is crucial to successfully solving such an ill-posed map-making problem. Therefore, in order to avoid the aforementioned non-trivial problem, we will propose below a novel deep learning-based approach.

### Application of neural network to map-making

The inverse problem of map-making can be studied under a Bayesian framework. For given data \(y\), the inverse problem of map-making is essentially to derive the posterior distribution, \(p(x|y)\), for the true sky map \(x\). In the context of mathematics, a forward mapping from any physical parameters \(x\) to the associated observed variables \(y\), \(f(x)\to y\), is subject to a potential loss of information, which causes degeneracies since \(y\) no longer captures all the variance of \(x\) entirely. To preserve all information about \(x\), the dedicated cINN encodes all variances of \(x\) into the latent variables \(z\) (unobservable) by learning a mapping from \(x\) to \(z\), which is conditioned on \(y\). Due to the invertible architecture of this network, the cINN can provide a solution for the inverse mapping \(f^{-1}(z,y)\to x\) after learning this forward mapping, which is the key to how cINNs solve the inverse problem. Thus, such an inverse mapping provides an estimate of the posterior distribution \(p(x|y)\) by sampling the latent variables \(z\). In principle, the distribution of \(z\) can be chosen arbitrarily, but for simplicity, we further assume that \(z\) follows a Gaussian distribution, enforced during the training process. Fig. 1 sketches the concept of the cINN methodology. In our case, this means the reconstructed sky map can be automatically retrieved by sampling the Gaussian-distributed \(z\) in the latent space via the inverted network (\(f^{-1}\)),

\[\hat{x}=f^{-1}(z,y)\,,\quad\mathrm{with}\;\;z\sim p(z)=\mathcal{N}(z;0,I_{n})\,, \tag{5}\]

where \(I_{n}\) is the \(n\times n\) unit matrix, choosing \(n=\dim(z)=\dim(x)\).

Figure 1: Schematic overview of the conditioned invertible neural network (cINN) for solving the inverse problem of map-making.
In the training process, the cINN learns how to transform the pairs \([x,y]\) into latent variables \(z\) by optimizing the forward mapping \(f(x,y)=z\), where the sky maps \(x\) and the observational data \(y\) (defined as the condition in the cINN) are provided by simulations with the known forward modelling defined in Eq. 2, and serve as inputs to the network. The distribution of the latent variables, \(p(z)\), is enforced to be Gaussian during the training for simplicity, although \(p(z)\) can in principle be chosen arbitrarily. Due to the invertibility of the cINN, the trained network thus provides a solution for the inverse process \(f^{-1}\) for free. When making a prediction for a new observation \(y\), the cINN transforms \(p(z)\) into the posterior distribution \(p(x|y)\) via the backward mapping \(f^{-1}(z,y)=x\). This means the sky map is reconstructed by sampling the latent variables \(z\) drawn from \(p(z)\).

### Neural Network Setup

We describe our new approach for map-making from TODs in this section. Our method employs a neural network architecture based on the conditional invertible neural network (cINN) introduced by Ardizzone et al. (2018). The INNs can be constructed easily using the Framework for Easily Invertible Architectures (FrEIA), a PyTorch-based library of INN building blocks (Ardizzone et al. 2018-2022), without requiring any prior knowledge of normalizing flows. To provide context for the cINN, we first give a brief introduction to the invertible neural network (INN), upon which the cINN is based.

#### 2.3.1 INN architecture

The INNs, discussed in Ardizzone et al. (2018), are a type of generative model belonging to the normalizing flow family. This family of models is named normalizing flow because it commonly maps input data from the original distribution to a more standard distribution, usually the normal distribution; depending on the loss function used, the output distribution can vary. Normalizing flow models encompass a large group of models, but INNs specifically employ affine coupling layers such as RealNVP (Dinh et al. 2016) and GLOW (Kingma & Dhariwal 2018a). Compared with other flow models, INNs have three main advantages: (1) INNs are bijective; (2) the forward and backward mappings in INNs are efficient to compute; and (3) the Jacobian of the forward mapping in INNs is easy to calculate. The architecture of the INN is based on a series of reversible blocks, following the design proposed by Dinh et al. (2016). The input vector, \(u\), is divided into two halves, \(u_{1}\) and \(u_{2}\), and these blocks subsequently execute two complementary affine transformations:

\[\begin{split} v_{1}&=u_{1}\odot\exp\left(s_{2}\left(u_{2}\right)\right)+t_{2}\left(u_{2}\right)\\ v_{2}&=u_{2}\odot\exp\left(s_{1}\left(v_{1}\right)\right)+t_{1}\left(v_{1}\right)\,.\end{split} \tag{6}\]

Here, \(\odot\) denotes element-wise multiplication, and the arbitrarily complex mappings \(s_{i}\) and \(t_{i}\) of \(u_{2}\) and \(v_{1}\), respectively, can be represented by any neural networks. These mappings are not required to possess inverse functions, as they are only ever evaluated in the forward direction.
The inversion of these affine transformations is easily accomplished as follows:

\[\begin{split} u_{2}&=\left(v_{2}-t_{1}\left(v_{1}\right)\right)\odot\exp\left(-s_{1}\left(v_{1}\right)\right)\\ u_{1}&=\left(v_{1}-t_{2}\left(u_{2}\right)\right)\odot\exp\left(-s_{2}\left(u_{2}\right)\right)\,.\end{split} \tag{7}\]

Ardizzone et al. (2019) introduced the conditional invertible neural network (cINN) as an extension of their original INN method, in which the affine coupling block architecture is modified to include additional conditioning inputs \(c\). As the mappings \(s_{i}\) and \(t_{i}\) are only evaluated in the forward direction, even when inverting the network, the conditioning inputs can be concatenated with the regular inputs of the sub-networks without compromising the invertibility of the INNs, e.g., replacing \(s_{2}(u_{2})\) with \(s_{2}(u_{2},c)\) in Eqs. 6 and 7.

#### 2.3.2 cINN architecture

After the pre-processing stage, the inputs, which are image-like data, are passed through a convolutional network in order to extract features and reduce the computational demands on the INN. The cINN architecture employed for map reconstruction, depicted in Fig. 3, is similar to that described by Ardizzone et al. (2019). The details of the specific conditioning network (corresponding to the five polygonal components) are provided in Fig. 4. In line with Ardizzone et al. (2019), the conditional coupling blocks used in this study are taken from GLOW (Kingma & Dhariwal 2018). Each conditional coupling block features a permutation layer that rearranges the channels and facilitates the mixing of information after the affine transformation layer has been applied. Normally, the permutation order remains fixed after initialization; GLOW, however, introduces an invertible \(1\times 1\) convolution layer as a learnable permutation layer. In an ideal scenario, the map resolution would remain constant throughout the coupling layers. However, as high-resolution maps can be demanding in terms of graphics memory, different resolution stages are employed: a downsampling layer decreases the spatial size of the map data while increasing the number of channels. The downsampling technique utilized is the Haar downsampling, similar to that employed in Ardizzone et al. (2019), which is derived from wavelet transforms and is also invertible. The type of downsampling has been found to have a significant impact on training and loss, as noted by Ardizzone et al. (2019). A splitting layer is utilized to reduce the dimensionality of the map while increasing its features: half of the output from each split is concatenated to the latent variable \(z\), while the remaining half undergoes further processing through the next coupling block. The choice of the distribution for \(z\) can vary, with various distributions being permissible, such as the radial distribution reported in Denker et al. (2021). Nevertheless, for convenience, the normal distribution is employed as the default choice for \(z\) in this study. In the training process, a batch size of 32 is used.

Figure 2: The INN part of the cINN is formed by stacking multiple coupling blocks, each of which possesses an invertible forward (left) and backward (right) transform through a single conditional affine coupling block (CC). The configuration utilizes a single subnetwork to compute the outputs \(s_{i}()\) and \(t_{i}()\) for each \(i\). The left panel illustrates how the data flow through the block in the forward direction (from \(x\) to \(z\)), while the right one displays the inverted case following the affine transformations in Eqs. 6 & 7.
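To make Eqs. 6 and 7 concrete, the sketch below implements a single conditional affine coupling block in plain numpy; the sub-networks \(s_{i}\), \(t_{i}\) are stood in by fixed random linear maps of the concatenated (half, condition) input (an assumption made for brevity; in a real cINN they are small trainable networks), and the inverse is verified to recover the input exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_c = 8, 4                      # input dim (split into two halves), condition dim
h = d // 2

# Stand-in sub-networks s_i, t_i: fixed random linear maps of (half, condition)
W = {k: 0.1 * rng.normal(size=(h, h + d_c)) for k in ("s1", "t1", "s2", "t2")}
def subnet(k, half, c):
    return W[k] @ np.concatenate([half, c])

def coupling_forward(u, c):        # Eq. 6, with the condition c concatenated
    u1, u2 = u[:h], u[h:]
    v1 = u1 * np.exp(subnet("s2", u2, c)) + subnet("t2", u2, c)
    v2 = u2 * np.exp(subnet("s1", v1, c)) + subnet("t1", v1, c)
    # The Jacobian is triangular, so log|det J| is the sum of the scale outputs
    log_det = subnet("s2", u2, c).sum() + subnet("s1", v1, c).sum()
    return np.concatenate([v1, v2]), log_det

def coupling_inverse(v, c):        # Eq. 7
    v1, v2 = v[:h], v[h:]
    u2 = (v2 - subnet("t1", v1, c)) * np.exp(-subnet("s1", v1, c))
    u1 = (v1 - subnet("t2", u2, c)) * np.exp(-subnet("s2", u2, c))
    return np.concatenate([u1, u2])

u, c = rng.normal(size=d), rng.normal(size=d_c)
v, log_det = coupling_forward(u, c)
print(np.allclose(coupling_inverse(v, c), u))   # True: the block is invertible
```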
Figure 4: Details of the conditioning network for the five polygonal components illustrated in Fig. 3, using Convolutional, Fully-Connected, LeakyReLU and Flatten layers. For all convolutional layers, a kernel size of 3 and padding of 1 are used. The stride size is set to 1 for the convolutional layers that preserve the input size, while a stride size of 2 is used for the convolutional layers that change the size of the input.

Figure 3: Schematic overview of the entire cINN architecture employed for the map reconstruction. It resembles the multi-scale cINN presented in Ardizzone et al. (2019). There are 16 CC or FC blocks at each level of our network. The conditional coupling block comprises an affine transform layer (shown in Fig. 2) and other invertible layers. The subnet, designated as \(s\), \(t\), employed in the affine transform layer is a convolutional network suited for map data. Meanwhile, the conditional coupling block with a fully-connected layer (FC) utilizes a multilayer perceptron (MLP) as its subnet, which is suitable for processing vector data. The polygon located in the lower left corner represents the conditional network, which can take the form of any convolutional neural network and serves as the conditional input for the cINN at various levels. The orange and blue lines represent invertible and non-invertible components, respectively.

### Maximum Likelihood Loss of cINNs

For the training, an appropriate loss function is required, and we refer to Ardizzone et al. (2019) for further details on this issue. The goal is to train a network that establishes a mapping between the distribution in the latent space and the true posterior space of the physical parameters. By specifying a probability distribution \(p(z)\), the cINN model \(f\) assigns a probability to any input \(x\), depending on the network parameters \(\theta\) and the condition \(c\), by means of the probability conservation condition,

\[p(x|c,\theta)=p(z)\left|\det\left(\frac{\partial z}{\partial x}\right)\right|\,, \tag{8}\]

where \(z=f(x|c,\theta)\), and the Jacobian determinant \(\partial z/\partial x\) in practice is evaluated at some training sample \(x_{i}\), as \(J_{i}\equiv\det\left(\partial z/\left.\partial x\right|_{x_{i}}\right)\). Due to the specific structure of the network, the Jacobian is a triangular matrix, which greatly simplifies the calculation of the determinant and ensures its value is non-zero (see details in Ardizzone et al. (2019)). Using Bayes' theorem, \(p(\theta|x,c)\propto p(x|c,\theta)p(\theta)\), the optimal network parameters are thus derived by minimizing the loss, averaged over \(m\) training samples:

\[\mathcal{L}=\frac{1}{m}\sum_{i=1}^{m}\left[-\log\left(p(x_{i}|c_{i},\theta)\right)\right]-\log\left(p(\theta)\right). \tag{9}\]

Inserting Eq. 8 and adopting the standard normal distribution for the variables \(z\) for simplicity, i.e., \(p(z)\propto\exp(-\|z\|_{2}^{2}/2)\), as well as a flat prior on \(\theta\), we obtain the maximum likelihood loss as

\[\mathcal{L}=\frac{1}{m}\sum_{i=1}^{m}\left[\frac{\left\|f\left(x_{i}|c_{i},\theta\right)\right\|_{2}^{2}}{2}-\log\left|J_{i}\right|\right]\,. \tag{10}\]

We train the cINN models by minimizing this loss, yielding an estimate of the maximum likelihood network parameters \(\theta_{*}\).
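Given the latent outputs \(z_{i}=f(x_{i}|c_{i},\theta)\) and the per-sample log-Jacobians \(\log|J_{i}|\) accumulated by the coupling blocks (as in the sketch above), the loss of Eq. 10 reduces to a few lines; a minimal sketch with random stand-in values:

```python
import numpy as np

def max_likelihood_loss(z, log_det_J):
    """Eq. 10: batch mean of ||z_i||^2 / 2 - log|J_i|.

    z          : (m, n) array of latent outputs z_i = f(x_i | c_i, theta)
    log_det_J  : (m,) array of log-Jacobian determinants log|J_i|
    """
    return np.mean(0.5 * np.sum(z**2, axis=1) - log_det_J)

# Toy usage with random stand-in values for z and log|J|
rng = np.random.default_rng(2)
z, log_det_J = rng.normal(size=(32, 16)), rng.normal(size=32)
print(max_likelihood_loss(z, log_det_J))
```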
Using this estimate and the inverted network \(f^{-1}\), we can then obtain the posterior distribution \(p(x|c,\theta_{*})\) for a given \(c\), by sampling \(z\) from the prescribed normal distribution \(p(z)\).

## 3 Experiments

In this section, the performance of map reconstruction is assessed by utilizing simulated observations. The cINN model is trained until convergence of the maximum likelihood loss is achieved for each training set.

### Simulated data sets

Here, we present the experimental setup implemented to assess the performance of the cINN-based map-making method. The evaluation is conducted using simulated data sets modeled after a FAST-like experimental configuration. In order to generate TODs using the forward modelling described in Eq. 2, we simulated a drift-scan survey using the FAST array consisting of a 19-beam receiver, spanning a period of 25 days, from May 4th to May 28th. The survey covers a sky area of over 300 square degrees within a declination (DEC) range of \(23^{\circ}\) to \(28^{\circ}\) and a right ascension (RA) range of \(120^{\circ}\) to \(180^{\circ}\). The sky coverage is presented in Fig. 5. The simulated TODs have a frequency resolution of \(\Delta\nu=1\) MHz, in the frequency range of 1100-1120 MHz. With an integration time of 1 s per beam and a total observation time of 14400 s per day, the total number of time samples per beam amounts to \(14400\times 25=3.6\times 10^{5}\) over the whole survey.

When evaluating the end-to-end performance of an experiment, it is important to simulate correlated noise components, but in this study we focus only on the performance of the cINN, which depends on the mapping matrix \(A\) constructed by the scanning strategy, the beam response, and the noise. We thus assume that the TODs are well-calibrated, meaning that the low-frequency \(1/f\) noise in the time streams has been completely filtered out, and any other non-ideal instrumental effects such as RFIs and standing waves are not considered in our simulations. As a result, the noise in the TODs is comprised of only white noise. The white noise level in the time streams is proportional to the system temperature \(T_{\text{sys}}\), and the standard deviation of the noise can be calculated as follows for a given bandwidth \(\Delta\nu\) and integration time \(\tau\),

\[\sigma_{N}=\frac{T_{\text{sys}}}{\sqrt{\Delta\nu\tau}}\,. \tag{11}\]

To train our cINN model, we generate various time-ordered data (TOD) at different noise levels by altering the value of \(T_{\text{sys}}\) randomly from 0 to 25 K in 1200 realizations, with reference to the FAST-like survey. By using the HEALPix pixelization scheme with \(N_{\text{side}}=512\) for the simulation, the resulting noise levels typically yield noise standard deviations ranging from 0 to 9 mK per pixel, with an angular resolution of 6.87 arcmin. Based on the FAST configuration, the TOD simulations are performed using Equatorial coordinates for the maps convolved with a Gaussian beam, whose full width at half maximum (FWHM) is slightly frequency dependent, ranging from 4.506 to 4.584 arcmin in the frequency interval of interest. Moreover, the simulated true sky map \(x\) consists of several Galactic diffuse components, such as the synchrotron and free-free emissions, and bright point sources, which are produced from the GSM model (de Oliveira-Costa et al. 2008).

Figure 5: Sky coverage of a drift-scan survey in Equatorial coordinates, using the FAST array consisting of a 19-beam receiver over a 25-day period from May 4th to May 28th. Within the declination (DEC) range of \(23^{\circ}\) to \(28^{\circ}\) and a right ascension (RA) range of \(120^{\circ}\) to \(180^{\circ}\), the survey covers more than 300 square degrees. A zoom-in view of the observed sky is also shown.
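The white-noise level of Eq. 11 translates directly into the simulated time streams; a minimal sketch with the survey numbers quoted above (1 MHz channels, 1 s integration, \(T_{\rm sys}\) drawn from 0-25 K; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
delta_nu = 1e6       # channel width [Hz]
tau = 1.0            # integration time per sample [s]

def white_noise_sigma(T_sys):
    """Eq. 11: radiometer equation for the time-stream noise std [K]."""
    return T_sys / np.sqrt(delta_nu * tau)

# One noise realization for one beam-day of TOD, with a random T_sys
T_sys = rng.uniform(0.0, 25.0)
sigma = white_noise_sigma(T_sys)               # tens of mK per 1-s sample at most
tod_noise = rng.normal(0.0, sigma, size=14400)
print(f"T_sys = {T_sys:.1f} K  ->  sigma_N = {1e3 * sigma:.2f} mK")
```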
Additionally, to produce sufficient data samples for training the cINN model, we also employ data augmentation in the pre-processing through straightforward techniques, such as randomly rotating sky patches and utilizing different noise realizations. Upon convergence of the maximum likelihood loss during training, we assess the performance of the trained cINN model on the test data.

### Data pre-processing

In preparation for training the cINN for map reconstruction, the input to the network is a 2D map of the observed sky region (\(x\)), which is interpolated from the sky map using the HEALPix pixelization scheme. The condition (\(c\)) for the cINN is usually represented by the observed TODs (\(y\)), but due to the TODs' varying length and large data size, preprocessing is needed before they can be fed into the network. A more convenient alternative is to use a related quantity with the same length as the sky map. For testing purposes, a gridding map (\(c=y_{\text{grid}}\)) is used, obtained by assigning TODs to their closest grid points using the histogram2d function in NumPy, which is a coarse, but simple and efficient, gridding method. For simplicity, the TODs are gridded onto 2D flat-sky maps, each having an area of \(4.3\times 4.3\) square degrees and a resolution of \(128\times 128\) pixels; the resulting reconstructed maps possess the same resolution. We randomly select 240 observations at different positions and system temperatures as our data set, each consisting of 20 frequency channels and 5 different realizations. Thus, there are in total \(240\times 20\times 5=24000\) pairs of samples, each consisting of a true sky map and the resulting gridding map from the TODs, i.e., \(\left(x^{i}=x^{i}_{\text{true}},\,c^{i}=y^{i}_{\text{grid}}\right)\) for the \(i\)-th sample. For the purpose of training the cINN, 20,000 samples are utilized, with 2,000 samples reserved for validation and an additional 2,000 for testing. The training of the cINN model is performed on a GPU server.

## 4 Results and Discussion

### Evaluation metrics

In order to determine the performance of the cINN model in map reconstruction, it is necessary to compare the reconstructed map \(x_{\text{rec}}\) with the actual map \(x_{\text{true}}\) using suitable metrics, several of which we introduce below. One metric that is commonly used is the mean square error (MSE), defined by

\[\mathrm{MSE}\left(x_{\text{true}},x_{\text{rec}}\right)=\frac{1}{N}\sum_{k=1}^{N}\left(x^{k}_{\text{true}}-x^{k}_{\text{rec}}\right)^{2}\,, \tag{12}\]

which is calculated by averaging over all pixels of the maps and provides a direct measurement of the mean of the squares of the reconstruction error. Moreover, the Peak Signal-to-Noise Ratio (PSNR) is adopted as a means of evaluating the reconstruction quality (Hore & Ziou 2010). This metric, which is a log-scaled MSE, can be expressed as

\[\mathrm{PSNR}(x_{\text{true}},x_{\text{rec}})=10\log_{10}\left(\frac{L^{2}}{\text{MSE}}\right)\,, \tag{13}\]

where \(L\) is a scalar chosen to reflect the dynamic range of the ground-truth map.
In this study, \(L\) is defined as the difference between the maximum and minimum values in the true map, \(L=|x_{\max}-x_{\min}|\). Essentially, a higher PSNR value is indicative of improved reconstruction accuracy. Additionally, the structural similarity index measure (SSIM) (Wang et al. 2004) is used to evaluate the overall structural similarity between the true and reconstructed maps. It aligns with human visual perception of similarity, and its values lie in the range \([0,1]\), with higher values indicating better performance. The value of SSIM is calculated through

\[\mathrm{SSIM}(x_{\mathrm{true}},x_{\mathrm{rec}})=\frac{(2\mu_{i}\mu_{j}+C_{1})(2\Sigma_{ij}+C_{2})}{(\mu_{i}^{2}+\mu_{j}^{2}+C_{1})(\sigma_{i}^{2}+\sigma_{j}^{2}+C_{2})}\,, \tag{14}\]

where \(\mu\) and \(\sigma^{2}\) represent the mean and variance of a map, respectively, with \(i\) and \(j\) denoting the true and reconstructed map, and \(\Sigma_{ij}\) refers to the covariance between the two maps. The positive constants \(C_{1}=(k_{1}L)^{2}\) and \(C_{2}=(k_{2}L)^{2}\) are included to prevent a null denominator and to stabilize the calculations, with values of \(k_{1}=0.01\) and \(k_{2}=0.03\).

### Results of map reconstruction

The main advantage of the cINN framework lies in its ability to efficiently estimate the full posterior of the reconstruction on a pixel-by-pixel basis, effectively capturing the underlying probabilistic relationships between the observed data and the reconstructed map. The large number of reconstructed maps helps to account for the inherent uncertainty and variability in the data, yielding a more robust and accurate representation of the posterior distribution. Based on our tests, generating 200 maps by sampling \(z\) takes less than 1 second on a typical graphics card. This is a remarkable speed, considering that it involves the estimation of a total of \(16,384\) posteriors for each map, given the map resolution of \(128\times 128\).

In Fig. 6, the changes in the loss function over the training steps are presented. It is well known that a too low learning rate may hinder the search for a solution, while, conversely, a too high learning rate can prevent finding a solution at all. Consequently, a learning rate of 0.001 is selected for map reconstruction in this study. We have used the initialization technique mentioned in Ardizzone et al. (2019), where Xavier initialization (Glorot & Bengio 2010) is used and the parameter values in the last convolutional layer of the sub-networks \(s\) and \(t\) are set to zero. However, there is still a certain probability that the training is unstable: in our experiments, we encountered situations where the loss quickly diverged at the beginning of training. We therefore tried different random seeds and kept a seed for which the loss did not diverge as the initialization. With this choice, the loss follows the downward trend shown in Fig. 6. As observed, the training loss (orange curve) and validation loss (blue curve) are both minimized when using this learning rate. However, after step 15,000 the validation loss began to increase even though the training loss continued to decrease, so that further training would only raise the validation loss. Therefore, we stop training at this step, where the validation loss reaches its minimum.

Figure 6: Loss function of map reconstruction as a function of training steps. We stop training at step 15000, where the validation loss reaches its minimum.
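For reference, the three metrics of Eqs. 12-14 can be implemented compactly; the sketch below assumes the global, single-window form of SSIM as written in Eq. 14 (image libraries often use a windowed variant):

```python
import numpy as np

def mse(x_true, x_rec):                          # Eq. 12
    return np.mean((x_true - x_rec) ** 2)

def psnr(x_true, x_rec):                         # Eq. 13, L = dynamic range
    L = np.abs(x_true.max() - x_true.min())
    return 10.0 * np.log10(L**2 / mse(x_true, x_rec))

def ssim(x_true, x_rec, k1=0.01, k2=0.03):       # Eq. 14, single-window form
    L = np.abs(x_true.max() - x_true.min())
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_i, mu_j = x_true.mean(), x_rec.mean()
    var_i, var_j = x_true.var(), x_rec.var()
    cov_ij = np.mean((x_true - mu_i) * (x_rec - mu_j))
    num = (2 * mu_i * mu_j + C1) * (2 * cov_ij + C2)
    den = (mu_i**2 + mu_j**2 + C1) * (var_i + var_j + C2)
    return num / den

rng = np.random.default_rng(4)
x_true = rng.normal(1.0, 0.1, (128, 128))        # toy "true" map [K]
x_rec = x_true + rng.normal(0.0, 0.01, (128, 128))
print(mse(x_true, x_rec), psnr(x_true, x_rec), ssim(x_true, x_rec))
```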
To further investigate the effects of underfitting and overfitting on the map reconstruction, we have chosen two additional checkpoints, one below and one above the currently selected one, at steps 4,000 and 20,000, respectively. Our findings show that for the underfitting case, the MSE is about \((3.9\pm 19.3)\times 10^{-4}\)\(\mathrm{K}^{2}\), the SSIM is \(0.92\pm 0.004\), and the PSNR is \(22.5\pm 2.4\) for the test samples. For the overfitting case, the MSE is \((4.2\pm 4.4)\times 10^{-4}\)\(\mathrm{K}^{2}\), the SSIM is \(0.96\pm 0.004\), and the PSNR is \(24.4\pm 6.2\). Both overfitting and underfitting result in significantly higher MSE values when compared with the MSE value listed in Tab. 1. In particular, underfitting also dramatically increases the statistical uncertainty in the MSE values. Furthermore, neither case brings any improvement in the SSIM and PSNR values. Consequently, the MSE metric appears to be the most sensitive to the quality of the reconstruction, and the optimal results are achieved by the currently selected checkpoint. We note that checkpoints located near the bottom of the loss curve require careful selection and evaluation, based on the trends observed in the training and validation loss.

As shown in Fig. 7, each row of panels from left to right represents the original, predicted, standard-deviation, and residual maps, respectively. Here, for a given observation (\(y_{\mathrm{grid}}\)), the trained cINN model obtains the posterior distribution \(p(x|y)\) for each pixel by generating 200 reconstructed maps through drawing latent variables \(z\) from the prescribed normal distribution. Thus, the predicted map and the standard-deviation map are estimated from the mean and the associated \(1\sigma\) error of the posteriors, indicating the average and the uncertainty of the reconstruction. Moreover, the residual maps demonstrate the level of deviation between the mean estimate and the truth in the reconstruction process. As seen, the cINN reconstruction appears to be of good quality, as evidenced by the standard deviation and residuals, which are typically around 0.01 K, i.e., at the level of about 1% of the true map.

The values of the three metrics, MSE, SSIM and PSNR, for the 20 frequency bins are presented in Fig. 8, where the mean and \(2\sigma\) uncertainty are estimated from the entire set of test samples. We observe SSIM values consistently close to 0.968 across all frequencies, with little variation of approximately 0.001 (\(2\sigma\) uncertainty). This suggests that 1) the structural similarity remains relatively stable and does not significantly change as the frequency varies; and 2) the reconstructed maps closely resemble the true ones in terms of overall structure. The PSNR values, in the range of 17 to 35 dB, indicate that the reconstructed maps exhibit a relatively low reconstruction error. In comparison with the typical temperature of 1 K for the true maps, the MSE values range from approximately \(1\times 10^{-4}\) to \(7\times 10^{-4}\)\(\mathrm{K^{2}}\) across all frequencies, which also demonstrates a high quality of map reconstruction. The results of these metrics, averaged over all frequencies and test samples, are reported in Tab. 1. Specifically, the mean MSE value of \(2.29\times 10^{-4}\)\(\mathrm{K^{2}}\) indicates that the reconstructed maps have a low reconstruction error.
Figure 8: Reconstruction accuracy for different frequency bins, measured in three different ways. The mean (solid black) and associated \(2\sigma\) uncertainty (gray shaded) are estimated from the entire set of test samples. SSIM (left) measures the overall structural difference between the two maps, better capturing the human perceptual understanding of the difference between two maps. PSNR (middle) and MSE (right) evaluate the reconstruction quality using the signal-to-noise ratio and the absolute difference in image pixels, respectively. Note that for SSIM and PSNR, larger values correspond to a better image reconstruction, while for MSE, the opposite is true.

Figure 7: Comparison of the predicted map from the trained cINN model with the original ground-truth map, in units of K, where 3 examples of randomly selected observation samples \(y_{\mathrm{grid}}\) (from top to bottom) are used as conditioning \(c\) inputs for the cINN. To demonstrate the reconstruction quality, from left to right, we show the true and predicted maps, the standard-deviation map, which represents the uncertainty of the predicted map, and the residual map, which shows the deviation between the true map and the mean of the predicted maps.

The mean SSIM value of 0.968 indicates a high level of structural similarity. Finally, the mean PSNR value of 26.13 dB indicates that the reconstructed maps do not deviate significantly from the true maps in terms of image details. Overall, these metrics suggest that the reconstructed maps agree very well with the true maps.

### Posterior distributions for the map reconstruction

In order to determine the accuracy of the predicted posterior distributions for the map reconstruction at each pixel, we calculate the median calibration error \(e_{\rm cal}^{\rm med}\) given in Ksoll et al. (2020) and Ardizzone et al. (2019). The correct shape of the posterior distribution is reflected by the calibration error, which makes it a significant evaluation metric for the network. For a given confidence interval \(q\), the calibration error is computed over a set of \(N\) observations as

\[e_{\rm cal}=q_{\rm in}-q\,. \tag{15}\]

Here \(q_{\rm in}=N_{\rm in}/N\) is the fraction of observations that fall within the \(q\)-confidence interval of the corresponding posterior distribution predicted by the cINN. A negative value of \(e_{\rm cal}\) indicates that the model is overconfident, meaning that it predicts posterior distributions that are too narrow. Conversely, a positive value of \(e_{\rm cal}\) suggests that the model is under-confident, implying that it predicts posterior distributions that are too broad. We compute \(e_{\rm cal}^{\rm med}\) as the median of the absolute values of \(e_{\rm cal}\) across the confidence range of 0 to 1, in steps of 0.01. In addition, the other quantity used for evaluation is the median uncertainty interval at a 68% confidence level, \(u_{68}\), corresponding to the \(\pm 1\sigma\) width of the posterior distribution for the given confidence interval, where we determine the median value over the entire test set. Using the metrics of the calibration error and the median uncertainty interval at 68% confidence, the results are presented in Tab. 2. One finds that the median calibration error falls within the range of about 0-6%, indicating that the model has a relatively high accuracy. In terms of \(u_{68}\), our cINN model yields a typical error value of 0.03. This value is comparable to \(\sqrt{\rm MSE}\) (\(\sim 0.01\) K), implying a relatively considerable degree of uncertainty in the parameters being estimated. Despite this broadness, the typical error value is still remarkably low and acceptable. Given the large number of parameters involved, namely \(128\times 128\) for each frequency map, achieving this level of performance is particularly remarkable. Thus, we conclude that the performance of our cINN model is sufficient and meets the requirements of our intended application.
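A sketch of how the calibration error of Eq. 15 can be evaluated from posterior samples, assuming central (equal-tailed) confidence intervals; the toy check at the end uses a perfectly calibrated Gaussian posterior, for which \(e_{\rm cal}^{\rm med}\) should be close to zero:

```python
import numpy as np

def median_calibration_error(post_samples, truths, qs=np.arange(0.01, 1.0, 0.01)):
    """Eq. 15: e_cal(q) = q_in - q, then the median of |e_cal| over a grid of q.

    post_samples : (N_obs, N_draws) posterior draws per observation
    truths       : (N_obs,) true values
    """
    e_cal = []
    for q in qs:
        lo = np.percentile(post_samples, 50 * (1 - q), axis=1)
        hi = np.percentile(post_samples, 50 * (1 + q), axis=1)
        q_in = np.mean((truths >= lo) & (truths <= hi))  # fraction inside interval
        e_cal.append(q_in - q)
    return np.median(np.abs(e_cal))

rng = np.random.default_rng(5)
mu = rng.normal(0.0, 1.0, 500)                        # per-observation posterior mean
truths = mu + rng.normal(0.0, 1.0, 500)               # truths drawn from the posterior
post = mu[:, None] + rng.normal(0.0, 1.0, (500, 200))
print(median_calibration_error(post, truths))         # close to 0 when calibrated
```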
Fig. 9 displays the reconstruction results for randomly selected rows of the reconstructed maps. The mean values (black dotted) and 95% confidence intervals (gray shaded) for individual pixels are obtained by applying the trained cINN, which transforms \(p(z)\) into the posterior distribution \(p(x|y)\) through the backward mapping, with the latent variables \(z\) drawn from the prescribed normal distribution. As seen, the predicted mean values are all within the 95% confidence level when compared with the true values (red solid) across all 128 pixels. Furthermore, the \(1\sigma\) width of these predictions is approximately 0.01 K, indicating a low variance in the reconstructed maps. The deviations between the reconstructed maps and the true values are also typically 0.02 K or less, which suggests a good agreement between the inputs and the reconstructed maps.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
 & MSE (\(\times 10^{-4}\) K\({}^{2}\)) & SSIM & PSNR \\
\hline
Performance & \(2.29\pm 2.14\) & \(0.968\pm 0.002\) & \(26.13\pm 5.22\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Map reconstruction performance for the cINN models we have trained. The average values of MSE, SSIM and PSNR and the associated \(1\sigma\) statistical errors are shown, calculated across all frequencies and the entire set of test samples.

### Performance against noise level

To thoroughly evaluate the performance of the cINN, an investigation is further conducted to determine its ability to produce high-quality reconstructions in the presence of increasing levels of noise in the input TODs. To do so, for a new test set we simulate new TODs with the maximum \(T_{\rm sys}\) increased from 25 K to 160 K, so that the noise levels now span the range 0-160 K. It is important to note that we did not expand the training samples, relying instead solely on the pre-existing network that was trained with a maximum temperature of \(T_{\rm sys}=25\) K. The quality of the reconstructions is evaluated using the PSNR, MSE, and SSIM metrics, as illustrated in Fig. 10. As the noise level increases, the values of SSIM show a slight decrease from 0.89 to 0.85 for \(T_{\rm sys}\) ranging from 0-160 K, whereas the MSE and PSNR remain roughly unchanged with increasing noise levels. These observations suggest that the performance of the cINN is not significantly affected by the white noise level and that it has learned the statistical characteristics of the noise during training, demonstrating a strong generalization capability. Our findings support that our cINN model is robust against noise.

### Time consumption

We conducted our model training on an NVIDIA Tesla P40 GPU. We have found that the training time of our cINN model depends primarily on the number of iterations rather than on the number of epochs. When training our network for \(10^{4}\) iterations with a batch size of 32, the training process took approximately 1.2 hours, and the GPU memory usage was 2595 MB.
For each training run, we executed about \(10^{4}\) iterations.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
pixel index & \(e_{\rm cal}^{\rm med}\) & \(u_{68}\) \\
\hline
\((32,32)\) & 0.057 & 0.029 \\
\((64,32)\) & 0.041 & 0.030 \\
\((96,32)\) & 0.016 & 0.032 \\
\((32,64)\) & 0.046 & 0.029 \\
\((64,64)\) & 0.043 & 0.030 \\
\((96,64)\) & 0.012 & 0.033 \\
\((32,96)\) & 0.054 & 0.029 \\
\((64,96)\) & 0.025 & 0.031 \\
\((96,96)\) & 0.001 & 0.033 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance of our trained cINN model on the reconstruction at 9 randomly selected pixels. The results are presented in terms of the median calibration error \(e_{\rm cal}^{\rm med}\) and the median uncertainty at the 68% confidence level, \(u_{68}\) (i.e. the width of a 68% confidence interval).

## 5 Conclusion

In radio observations, the map-making problem, i.e., how to reconstruct a plausible sky map from TODs and estimate the signal uncertainty on each pixel, has always been intractable and non-trivial. Unlike the traditional map-making methods, in this study we have proposed a novel approach based on the conditional invertible neural network (cINN). One of the main advantages of our method is that it avoids directly solving the ill-posed inverse problem. Moreover, once the network is trained, the reconstruction of the sky map can be performed very fast. The use of forward modeling allows for the effortless mapping of the true sky map to TODs, which can incorporate all observational effects, systematics, and data processing. These simulated true sky maps and their associated TODs are then used as a training set and fed into a neural network to train a cINN. Our cINN model transforms true maps into a latent space and learns the inverse mapping, both of which are conditioned on observations. This joint modeling of the distribution of all pixels provides a comprehensive understanding of the relationship between the true maps and the observations. The trained cINN can then not only reconstruct the true sky map based on a given TOD, but also provide a pixel-by-pixel estimate of the uncertainty. In order to show the performance of the network, we have performed a simulation of drift-scan observations based on the FAST configuration, which includes 19 beams and covers a frequency range of 1100-1120 MHz. The goal of this study is to initially validate our approach, so for simplicity, we only include white noise and a Gaussian beam response in the simulated TODs, while ignoring other non-ideal effects such as \(1/f\) noise and RFIs. Our method is validated by the test results, which demonstrate high reconstruction accuracy and good agreement between the reconstructed sky maps and the true maps.

Figure 10: Performance of the cINN reconstruction against white noise, using the SSIM, PSNR, and MSE metrics (from left to right). The calculations are performed on a randomly selected gridding map of TODs, centered at the RA and Dec coordinates of \(160^{\circ}\) and \(26^{\circ}\), respectively, at a frequency of 1100 MHz. The mean values (black solid) and the 95% C.L. (gray shaded) are estimated by the backward mapping of the cINN from 200 \(z\) realizations.

Figure 9: Comparison of the reconstructed maps for randomly selected rows with the true ones. Mean values (black dotted) and 95% confidence intervals (gray shaded) for individual pixels are obtained using a trained cINN based on 200 realizations of the latent variables \(z\). The predicted mean values are within the 95% C.L. of the true values (red solid) for all 128 pixels of each map.
The test dataset achieves an MSE of \((2.29\pm 2.14)\times 10^{-4}\)\(\mathrm{K^{2}}\), an SSIM of \(0.968\pm 0.002\), and a PSNR of \(26.13\pm 5.22\) dB at the \(1\sigma\) level. Furthermore, we observe a slight decrease in the SSIM values, from 0.89 to 0.85, as the noise level for \(T_{\mathrm{sys}}\) increases from 0 to 160 K. However, the MSE and PSNR values remain relatively stable with increasing noise levels. We have also evaluated how underfitting and overfitting affect the map reconstruction by comparing checkpoint results above and below our chosen optimal point. Our findings indicate that both cases result in higher MSE values compared with the current point, with underfitting additionally leading to a large uncertainty; therefore, our current result is optimal. In addition, the SSIM and PSNR values do not show any significant deviations from the optimal one, and thus MSE appears to be the most sensitive metric for the map reconstruction. As future work, we aim to validate the cINN approach by incorporating non-ideal observational effects that more accurately reflect real-world scenarios. Furthermore, this framework has the potential to be applied to radio interferometric observations, where imaging can be particularly challenging due to sparse \(uv\) coverage.

###### Acknowledgements.

This work is supported by the National Key R&D Program of China (2018YFA0404502, 2018YFA0404504, 2018YFA0404601, 2020YFC2201600), the Ministry of Science and Technology of China (2020SKA0110402, 2020SKA0110401, 2020SKA0110100), the National Science Foundation of China (11890691, 11621303, 11653003, 12205388, 11633004, 11821303), the China Manned Space Project with No. CMS-CSST-2021 (A02, A03, B01), the Major Key Project of PCL, the 111 project No. B20019, the CAS Interdisciplinary Innovation Team (JCTD-2019-05), the MOST inter-government cooperation program China-South Africa Cooperation Flagship project (grant No. 2018YFE0120800), and the Chinese Academy of Sciences (CAS) Frontier Science Key Project (grant No.
2306.08109
Provable Accelerated Convergence of Nesterov's Momentum for Deep ReLU Neural Networks
Current state-of-the-art analyses on the convergence of gradient descent for training neural networks focus on characterizing properties of the loss landscape, such as the Polyak-Lojasiewicz (PL) condition and the restricted strong convexity. While gradient descent converges linearly under such conditions, it remains an open question whether Nesterov's momentum enjoys accelerated convergence under similar settings and assumptions. In this work, we consider a new class of objective functions, where only a subset of the parameters satisfies strong convexity, and show that Nesterov's momentum achieves acceleration in theory for this objective class. We provide two realizations of the problem class, one of which is deep ReLU networks, which, to the best of our knowledge, makes this work the first to prove an accelerated convergence rate for non-trivial neural network architectures.
Fangshuo Liao, Anastasios Kyrillidis
2023-06-13T19:55:46Z
http://arxiv.org/abs/2306.08109v2
Accelerated Convergence of Nesterov's Momentum for Deep Neural Networks under Partial Strong Convexity

###### Abstract

Current state-of-the-art analyses on the convergence of gradient descent for training neural networks focus on characterizing properties of the loss landscape, such as the Polyak-Lojasiewicz (PL) condition and the restricted strong convexity. While gradient descent converges linearly under such conditions, it remains an open question whether Nesterov's momentum enjoys accelerated convergence under similar settings and assumptions. In this work, we consider a new class of objective functions, where only a subset of the parameters satisfies strong convexity, and show that Nesterov's momentum achieves acceleration in theory for this objective class. We provide two realizations of the problem class, one of which is deep ReLU networks, which, to the best of our knowledge, makes this work the first to prove an accelerated convergence rate for non-trivial neural network architectures.

## 1 Introduction

**Theory of gradient descent in neural networks.** Training neural networks with gradient-based methods has shown surprising empirical success (Lecun et al., 1998; LeCun et al., 2015; Zhang et al., 2017; Goodfellow et al., 2016); yet, it is not clear why such a simple algorithm works (Zhang et al., 2018; Li et al., 2020). Overall, it has been a mystery why such routines, designed originally for convex problems, can consistently find a good minimum for complicated non-convex objectives, such as that of neural network training (Yun et al., 2019; Auer et al., 1995; Safran and Shamir, 2018). Focusing on the case of neural networks, a major advance in understanding is the analysis through the so-called Neural Tangent Kernel (NTK) (Jacot et al., 2020). The use of the NTK shows that, when the width of the neural network approaches infinity, the training process can be treated as a kernel machine. Inspired by the NTK analysis, a large body of work has focused on showing the convergence of gradient descent for various neural network architectures under finite over-parameterization requirements (Du et al., 2019, 2019; Allen-Zhu et al., 2019; Zou and Gu, 2019; Zhang et al., 2019; Awasthi et al., 2021; Ling et al., 2023; Allen-Zhu et al., 2019; Song and Yang, 2020; Su and Yang, 2019). In the meantime, the analysis was also extended to other training algorithms and settings that differ from gradient descent, such as stochastic gradient descent (Oymak and Soltanolkotabi, 2019; Ji and Telgarsky, 2020; Xu and Zhu, 2021; Zou et al., 2018), drop-out (Liao and Kyrillidis, 2022; Mianjy and Arora, 2020), federated training (Huang et al., 2021), and adversarial training (Li et al., 2022). A recent line of work follows a different route and characterizes the loss landscape of neural networks using the local Polyak-Lojasiewicz (PL) condition (Song et al., 2021; Liu et al., 2020; Nguyen, 2021; Ling et al., 2023). Building upon the well-established theory of how gradient descent converges under the PL condition (Karimi et al., 2020), this line of work decouples the neural network structure from the dynamics of the loss function along the optimization trajectory. This way, these works were able to give a more fine-grained analysis of the relationship between regularity conditions, such as the PL condition, and the neural network structure.
Such analysis not only resulted in further relaxed over-parameterization requirements (Song et al., 2021; Nguyen, 2021; Liu et al., 2020), but was also shown to extend easily to deep architectures (Ling et al., 2023), suggesting that it is more suitable in practice.

**Theory of accelerated gradient descent in neural networks.** In contrast to the fast-growing research devoted to vanilla gradient descent, there is limited work on the behavior of momentum methods in training neural networks. In particular, the acceleration of both the Heavy Ball method and Nesterov's momentum is shown only for shallow ReLU neural networks (Wang et al., 2021; Liu et al., 2022; Zhang et al., 2022) and deep linear neural networks (Wang et al., 2021; Liu et al., 2022; Wang et al., 2022). For these architectures, (Wang et al., 2022) shows that training reduces to a case very similar to optimizing a quadratic objective. Thus, it is not clear whether these analyses can be extended to neural networks with more layers, or with a more complicated structure. In addition, showing an accelerated convergence rate under only the PL condition has been a long-standing difficulty. For the Heavy Ball method, Danilova et al. (Danilova et al., 2018) established a linear convergence rate under the PL condition, but no acceleration is shown without assuming strong convexity. Wang et al. (Wang et al., 2022) proved an accelerated convergence rate; yet, the authors assume the \(\lambda^{\star}\)-average-out condition, which cannot always be easily justified for neural networks. To the best of our knowledge, a proof of convergence for Nesterov's momentum under the PL condition in non-convex settings is currently missing. In the continuous limit, acceleration is proved only in a limited scenario (Apidopoulos et al., 2022), and the analysis does not easily extend to the discrete case (Wang et al., 2022; Shi et al., 2022). Finally, Yue et al. (Yue et al., 2022) show that gradient descent already achieves an optimal convergence rate for functions satisfying smoothness and the PL condition. This suggests that, to prove the acceleration of momentum methods on a wider class of neural networks, we need to leverage properties beyond the PL condition. Based on prior work (Liu et al., 2020), which shows that over-parameterized systems are essentially non-convex in any neighborhood of the global minimum, we aim at developing a relaxation of (strong) convexity that is suitable for such systems and enables momentum methods to achieve acceleration. In particular, we consider the minimization of a new class of objectives:

\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f(\mathbf{x},\mathbf{u}), \tag{1}\]

where \(f\) satisfies strong convexity with respect to \(\mathbf{x}\), among other assumptions (more in the text below). Intuitively, our construction assumes that the parameter space can be partitioned into two sets, and only one of the two sets enjoys rich properties, such as strong convexity. In this paper, we focus on Nesterov's momentum. Our contribution can be summarized as:

* Focusing on the general problem class in (1), we prove that Nesterov's momentum enjoys an accelerated linear convergence where the convergence rate is \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\), compared to \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) in the case of gradient descent. Here, \(\kappa\) is a variant of the condition number of \(f\). Our result holds even when \(f\) is non-convex and non-smooth.
* We provide two realizations of our problem class. In Section 5.1, we first consider fitting an additive model under the MSE loss. We prove the acceleration of Nesterov's momentum as long as the non-convex component of the additive model is small enough to control the upper bounds in the Lipschitz-type assumptions.
* Next, we turn to the training of a deep ReLU neural network in Section 5.2. We show that when the width of the neural network trained with \(n\) samples is \(\Omega\left(n^{4}\right)\), under proper initialization, Nesterov's momentum converges to zero training loss with rate \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\). To the best of our knowledge, this is the first result that establishes accelerated convergence for deep ReLU networks.

## 2 Related Works

**Convergence in neural network training.** Recent studies on the convergence guarantees of gradient-based methods in neural network training mainly focus on two techniques: the Neural Tangent Kernel (Jacot et al., 2020) and the mean-field analysis (Mei et al., 2019). The NTK-based analysis builds upon the idea that, when the width approaches infinity, training neural networks behaves like training a kernel machine. Various techniques have been developed to control the error that occurs when the width becomes finite. In particular, (Du et al., 2019) tracks the change of activation patterns in ReLU-based neural networks, and often requires a large over-parameterization. Later works improve the over-parameterization requirement by leveraging matrix concentration inequalities (Song and Yang, 2020) for a more fine-grained analysis of the change of Jacobians (Oymak and Soltanolkotabi, 2019), and by building their analysis upon the separability assumption of the data in the reproducing Hilbert space of the neural network (Ji and Telgarsky, 2020). A notable line of work focuses on establishing that the PL condition is satisfied by neural networks, where the coefficient of the PL condition is based on the eigenvalue of the NTK matrix. (Nguyen, 2021) shows that the PL condition is satisfied by deep ReLU neural networks, by considering the dominance of the gradient with respect to the weights in the last layer. (Liu et al., 2020) proves the PL condition by upper bounding the Hessian for deep neural networks with smooth activation functions. (Song et al., 2021) further reduces the over-parameterization, while maintaining the PL condition, via the expansion of the activation function with the Hermite polynomials. Lastly, (Banerjee et al., 2023) establishes the restricted strong convexity of neural networks within a sequence of ball-shaped regions, centered around the weights per iteration; yet, the coefficient of the strong convexity is not explicitly characterized in theory.

**Convergence of Nesterov's Momentum.** The original proof of Nesterov's momentum (Nesterov, 2018) builds upon the idea of estimating sequences for both convex smooth objectives and strongly convex smooth objectives. Later work in (Bansal and Gupta, 2019) provides an alternative proof within the same setting by constructing a Lyapunov function. In the non-convex setting, a large body of work focuses on variants of Nesterov's momentum that lead to guaranteed convergence, by employing techniques such as negative curvature exploitation (Carmon et al., 2017), cubic regularization (Carmon and Duchi, 2020), and restarting schemes (Li and Lin, 2022). For neural networks, (Liu et al., 2022, 2022) are the only works that study the convergence of Nesterov's momentum.
However, with the over-parameterization requirements considered, the objective is similar to a quadratic function. Deviating from Nesterov's momentum, (Wang et al., 2021) studies the convergence of the Heavy-ball method under similar over-parameterization requirements. A recent work (Wu et al., 2023) proves the convergence of the Heavy-ball method in the mean-field limit; such a limit is not the focus of our study in this paper. Lastly, (Jelassi and Li, 2022) shows that momentum-based methods improve the generalization ability of neural networks; however, there is no explicit convergence guarantee for the training loss.

## 3 Preliminary

**Notations.** We use bold lower-case letters (e.g. \(\mathbf{a}\)) to denote vectors, bold upper-case letters (e.g. \(\mathbf{A}\)) to denote matrices, and standard lower-case letters (e.g. \(a\)) to denote scalars. For a vector \(\mathbf{a}\), we use \(a_{i}\) to denote its \(i\)-th entry. For a matrix \(\mathbf{A}\), we use \(a_{ij}\) to denote its \((i,j)\)-th entry. \(\left\|\mathbf{a}\right\|_{2}\) denotes the \(\ell_{2}\)-norm of the vector \(\mathbf{a}\), and \(\left\|\mathbf{A}\right\|_{F}\) denotes the Frobenius norm of \(\mathbf{A}\). For two vectors \(\mathbf{a}_{1},\mathbf{a}_{2}\), we use \((\mathbf{a}_{1},\mathbf{a}_{2})\) to denote the concatenation of \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). For a matrix \(\mathbf{A}\) with columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\), we use \(\mathrm{vec}(\mathbf{A})=(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) to denote the vectorized form of \(\mathbf{A}\).

### Problem Setup and Assumptions

Optimization literature often focuses on the unconstrained minimization of a function \(\hat{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}\). In this paper, we reformulate this problem using the following definition.

**Definition 1**.: _(Partitioned Equivalence) A function \(f:\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}\) is called a partitioned equivalence of \(\hat{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), if \(i)\)\(d_{1}+d_{2}=d\), and \(ii)\) there exists a permutation function \(\pi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) over the parameters of \(\hat{f}\), such that \(\hat{f}(\mathbf{w})=f(\mathbf{x},\mathbf{u})\) if and only if \(\pi(\mathbf{w})=(\mathbf{x},\mathbf{u})\). We say that \((\mathbf{x},\mathbf{u})\) is a partition of \(\mathbf{w}\)._

Despite the difference in the representation of their parameters, \(\hat{f}\) and \(f\) share the same properties, and any algorithm for \(\hat{f}\) produces the same result for \(f\). Therefore, we turn our focus from the minimization problem of \(\hat{f}\) to the minimization problem in Equation (1). We further assume that \(f\) is a composition of a loss function \(g:\mathbb{R}^{\hat{d}}\rightarrow\mathbb{R}\) and a possibly non-smooth and non-convex model function \(h:\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}^{\hat{d}}\), for some dimension \(\hat{d}\in\mathbb{Z}_{+}\); i.e., \(f(\mathbf{x},\mathbf{u})=g(h(\mathbf{x},\mathbf{u}))\). We obey the following notation with respect to the gradients of \(f\):

\[\nabla_{1}f(\mathbf{x},\mathbf{u})=\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{x}};\quad\nabla_{2}f(\mathbf{x},\mathbf{u})=\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{u}};\;\nabla f(\mathbf{x},\mathbf{u})=\left(\nabla_{1}f(\mathbf{x},\mathbf{u}),\nabla_{2}f(\mathbf{x},\mathbf{u})\right). \tag{2}\]
We consider Nesterov's momentum with constant step size \(\eta\) and momentum parameter \(\beta\):

\[(\mathbf{x}_{k+1},\mathbf{u}_{k+1})=(\mathbf{y}_{k},\mathbf{v}_{k})-\eta\nabla f(\mathbf{y}_{k},\mathbf{v}_{k}) \tag{3}\]
\[(\mathbf{y}_{k+1},\mathbf{v}_{k+1})=(\mathbf{x}_{k+1},\mathbf{u}_{k+1})+\beta\left((\mathbf{x}_{k+1},\mathbf{u}_{k+1})-(\mathbf{x}_{k},\mathbf{u}_{k})\right)\]

with \(\mathbf{y}_{0}=\mathbf{x}_{0}\) and \(\mathbf{v}_{0}=\mathbf{u}_{0}\). Let \(\mathcal{B}^{(1)}_{R_{\mathbf{x}}}\subseteq\mathbb{R}^{d_{1}}\) and \(\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\subseteq\mathbb{R}^{d_{2}}\) denote the balls centered at \(\mathbf{x}_{0}\) and \(\mathbf{u}_{0}\), respectively:

\[\mathcal{B}^{(1)}_{R_{\mathbf{x}}}=\{\mathbf{x}\in\mathbb{R}^{d_{1}}:\left\|\mathbf{x}-\mathbf{x}_{0}\right\|_{2}\leq R_{\mathbf{x}}\};\quad\mathcal{B}^{(2)}_{R_{\mathbf{u}}}=\{\mathbf{u}\in\mathbb{R}^{d_{2}}:\left\|\mathbf{u}-\mathbf{u}_{0}\right\|_{2}\leq R_{\mathbf{u}}\}.\]

With the definitions of \(\mathcal{B}^{(1)}_{R_{\mathbf{x}}}\) and \(\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\) in place, we make the following assumptions:

**Assumption 1**.: \(f\) _is \(\mu\)-strongly convex with \(\mu>0\) with respect to the first part of its parameters:_

\[f(\mathbf{y},\mathbf{u})\geq f(\mathbf{x},\mathbf{u})+\left\langle\nabla_{1}f(\mathbf{x},\mathbf{u}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{\mu}{2}\left\|\mathbf{y}-\mathbf{x}\right\|_{2}^{2},\quad\forall\mathbf{x},\mathbf{y}\in\mathbb{R}^{d_{1}};\;\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}.\]

**Assumption 2**.: \(f\) _is \(L_{1}\)-smooth with respect to the first part of its parameters:_

\[f(\mathbf{y},\mathbf{u})\leq f(\mathbf{x},\mathbf{u})+\left\langle\nabla_{1}f(\mathbf{x},\mathbf{u}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{L_{1}}{2}\left\|\mathbf{y}-\mathbf{x}\right\|_{2}^{2},\quad\forall\mathbf{x},\mathbf{y}\in\mathbb{R}^{d_{1}};\;\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}.\]

Based on Assumptions 1 and 2, we define the condition number of \(f\).

**Definition 2**.: _(Condition Number) The condition number \(\kappa\) of \(f\) is given by \(\kappa=\nicefrac{{L_{1}}}{{\mu}}\)._

**Assumption 3**.: \(g\) _satisfies \(\min_{\mathbf{s}\in\mathbb{R}^{\hat{d}}}g(\mathbf{s})=\min_{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f(\mathbf{x},\mathbf{u})\), and is \(L_{2}\)-smooth:_

\[g(\mathbf{s}_{1})\leq g(\mathbf{s}_{2})+\langle\nabla g(\mathbf{s}_{1}),\mathbf{s}_{2}-\mathbf{s}_{1}\rangle+\frac{L_{2}}{2}\left\|\mathbf{s}_{2}-\mathbf{s}_{1}\right\|_{2}^{2},\quad\forall\mathbf{s}_{1},\mathbf{s}_{2}\in\mathbb{R}^{\hat{d}}.\]

Assumptions 1 and 2 are relaxed versions of strong convexity and smoothness, respectively, as made in classical optimization literature: instead of assuming that the objective is smooth and strongly convex over the whole set of parameters, we only assume such properties to hold with respect to a subset of the parameters, while the rest of the parameters remain in a ball near the initialization. Assumption 3 is standard in prior literature (Liu et al., 2020; Song et al., 2021) on the convergence of neural network training, and holds for a large number of loss functions such as the MSE loss and the logistic loss.
**Assumption 4**.: \(h\) _satisfies \(G_{1}\)-Lipschitzness with respect to the second part of its parameters:_

\[\left\|h(\mathbf{x},\mathbf{u})-h(\mathbf{x},\mathbf{v})\right\|_{2}\leq G_{1}\left\|\mathbf{u}-\mathbf{v}\right\|_{2},\quad\forall\mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u},\mathbf{v}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]

**Assumption 5**.: _The gradient of \(f\) with respect to the first part of its parameters, \(\nabla_{1}f(\mathbf{x},\mathbf{u})\), satisfies \(G_{2}\)-Lipschitzness with respect to the second part of its parameters:_

\[\left\|\nabla_{1}f(\mathbf{x},\mathbf{u})-\nabla_{1}f(\mathbf{x},\mathbf{v})\right\|_{2}\leq G_{2}\left\|\mathbf{u}-\mathbf{v}\right\|_{2},\quad\forall\mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u},\mathbf{v}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]

**Assumption 6**.: _All the minimum values of \(f\), when restricted to the optimization over \(\mathbf{x}\), equal the global minimum value:_

\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f(\mathbf{x},\mathbf{u})=f^{\star}:=\min_{\mathbf{x}\in\mathbb{R}^{d_{1}},\mathbf{u}\in\mathbb{R}^{d_{2}}}f(\mathbf{x},\mathbf{u}),\quad\forall\mathbf{u}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\]

Notice that we do not assume \(f\) to be either convex or smooth with respect to \(\mathbf{u}\). As such, there is no reason to believe that the updates in (3) on \(\mathbf{u}\) make positive progress towards finding the global minimum; we instead treat the change in the second part of the parameters as an error induced by the updates. Assumptions 4 and 5 ensure that the effect of a change of \(\mathbf{u}\) on the model output \(h(\mathbf{x},\mathbf{u})\) and on the gradient with respect to \(\mathbf{x}\) can be controlled. Moreover, without Assumption 6, it would be possible for the change of \(\mathbf{u}\) to cause the optimization trajectory to get stuck at some point of \(\mathbf{u}\), such that the global minimum value cannot be achieved even when \(\mathbf{x}\) is fully optimized. As a sanity check, we can show that Assumptions 1-6 are satisfied by a smooth and strongly convex function:

**Theorem 1**.: _Let \(\tilde{f}\) be \(\tilde{\mu}\)-strongly convex and \(\tilde{L}\)-smooth. Then \(\tilde{f}\) satisfies Assumptions 1-6 with:_

\[R_{\mathbf{x}}=R_{\mathbf{u}}=\infty;\;\mu=\tilde{\mu};\;L_{1}=L_{2}=\tilde{L};\;G_{1}=G_{2}=0.\]

Theorem 1 shows that the combination of Assumptions 1-6 is no stronger than the assumption that the objective is smooth and strongly convex. Therefore, the minimization of the class of functions satisfying Assumptions 1-6 does not lead to a better lower complexity bound than that of the class of smooth and strongly convex functions. That is, the best convergence rate we can achieve is \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\).

**Remark 1**.: _Notice that Nesterov's momentum in (3) is agnostic to the partition of the parameters, i.e., it executes exactly the same operation on \(\mathbf{x}\) and \(\mathbf{u}\). Therefore, while our result requires the existence of a parameter partition \((\mathbf{x},\mathbf{u})\) that satisfies Assumptions 1-6, it does not need to explicitly identify such a partition. Moreover, in Sections 5.1 and 5.2, we will show that Assumptions 1-6 can be satisfied by two concrete models, where the training of deep ReLU networks is one of them._

## 4 Accelerated Convergence under Partial Strong Convexity

### Warmup: Convergence of Gradient Descent

We first focus on the convergence of gradient descent.
To start, we notice that Assumptions 1 and 6 imply the PL condition: **Lemma 1**.: _Suppose that Assumptions 1 and 6 hold. Then, for all \(\mathbf{x}\in\mathbb{R}^{d_{1}}\) and \(\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), we have:_ \[\left\|\nabla f(\mathbf{x},\mathbf{u})\right\|_{2}^{2}\geq\left\|\nabla_{1}f(\mathbf{x},\mathbf{u})\right\|_{2}^{2}\geq 2\mu\left(f(\mathbf{x},\mathbf{u})-f^{\star}\right).\] Recall that, due to the minimal assumptions made on the dependence of \(f(\mathbf{x},\mathbf{u})\) on \(\mathbf{u}\), we treat the change of \(\mathbf{u}\) during the iterates as an error. Thus, we need the following lemma, which bounds how much \(f\) is affected by the change of \(\mathbf{u}\). **Lemma 2**.: _Suppose that Assumptions 3 and 4 hold. For any \(\hat{\mathcal{Q}}>0\) and \(\mathbf{x}\in\mathcal{B}^{(1)}_{R_{\mathbf{x}}},\mathbf{u},\mathbf{v}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), we have:_ \[f(\mathbf{x},\mathbf{u})-f(\mathbf{x},\mathbf{v})\leq\hat{\mathcal{Q}}^{-1}L_{2}\left(f(\mathbf{x},\mathbf{v})-f^{\star}\right)+\frac{G_{1}^{2}}{2}\left(L_{2}+\hat{\mathcal{Q}}\right)\left\|\mathbf{u}-\mathbf{v}\right\|_{2}^{2}.\] With the help of Lemmas 1 and 2, we can show the linear convergence of gradient descent: **Theorem 2**.: _Suppose that Assumptions 1-4 and 6 hold with \(G_{1}^{4}\leq\frac{\mu^{2}}{8L_{2}^{2}}\) and_ \[R_{\mathbf{x}}\geq 16\eta\kappa\sqrt{L_{1}}\left(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}\right)^{\frac{1}{2}};\ R_{\mathbf{u}}\geq 16\eta\kappa G_{1}\sqrt{L_{2}}\left(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}\right)^{\frac{1}{2}}.\] _Then, there exists a constant \(c>0\) such that gradient descent on \(f\), given by:_ \[(\mathbf{x}_{k+1},\mathbf{u}_{k+1})=(\mathbf{x}_{k},\mathbf{u}_{k})-\eta\nabla f(\mathbf{x}_{k},\mathbf{u}_{k}),\] _with \(\eta=\nicefrac{{c}}{{L_{1}}}\), converges linearly according to:_ \[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq\left(1-\frac{c}{4\kappa}\right)^{k}\left(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}\right).\] That is, Theorem 2 shows that gradient descent applied to \(f\) converges linearly with a rate of \(1-\Theta(\nicefrac{{1}}{{\kappa}})\) within our setting. The proofs of Lemmas 1 and 2 and of Theorem 2 are deferred to Appendix B.

### 4.2 Technical Difficulties

We now study the convergence property of Nesterov's momentum in Equation (3) under only Assumptions 1-6. Our analysis faces two major difficulties. **Difficulty 1**.: _Most previous analyses of Nesterov's momentum use the global minimum as a reference point to construct the Lyapunov function; see Bansal and Gupta (2019), d'Aspremont et al. (2021). In the original proof of Nesterov, the construction of the estimating sequence also assumes the uniqueness of the global minimum (Nesterov, 2018). However, our objective function is non-convex and allows the existence of multiple global minima, which prevents us from directly applying the Lyapunov function or estimating sequence as in previous works._ While the non-convexity of \(f\) introduces the possibility of multiple global minima, Assumption 1 implies that, with a fixed \(\mathbf{u}\), there exists a unique \(\mathbf{x}^{\star}(\mathbf{u})\) that minimizes \(f(\mathbf{x},\mathbf{u})\). Moreover, Assumption 6 implies that, for all \(\mathbf{u}\in\mathcal{B}^{(2)}_{R_{\mathbf{u}}}\), the local minimum \((\mathbf{x}^{\star}(\mathbf{u}),\mathbf{u})\) is also a global minimum.
Thus, we can resolve the difficulty of accounting for multiple global minima by tracking how \(\mathbf{x}^{\star}(\mathbf{u})\) changes with \(\mathbf{u}\). The following lemma characterizes this property. **Lemma 3**.: _Let \(\mathbf{x}^{\star}(\mathbf{u})=\arg\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f(\mathbf{x},\mathbf{u})\). Suppose Assumptions 1 and 5 hold. Then, we have:_ \[\|\mathbf{x}^{\star}(\mathbf{u}_{1})-\mathbf{x}^{\star}(\mathbf{u}_{2})\|_{2}\leq\frac{G_{2}}{\mu}\left\|\mathbf{u}_{1}-\mathbf{u}_{2}\right\|_{2},\quad\forall\mathbf{u}_{1},\mathbf{u}_{2}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\] Lemma 3 indicates that, if we view \(\mathbf{x}^{\star}(\mathbf{u})\) as a function of \(\mathbf{u}\), then this function is \(\frac{G_{2}}{\mu}\)-Lipschitz. For a fixed \(\mathbf{u}\), given the nice properties with respect to \(\mathbf{x}\), the iterates of Nesterov's momentum guide \(\mathbf{x}\) to the minimum based on the current \(\mathbf{u}\). Lemma 3 guarantees that the progress toward the minimum induced by \(\mathbf{u}_{1}\) does not deviate much from the progress toward the minimum induced by \(\mathbf{u}_{2}\). **Difficulty 2**.: _Lemmas 2 and 3 show the importance of bounding the change of \(\mathbf{u}\) during the iterates, namely \(\left\|\mathbf{u}_{k+1}-\mathbf{u}_{k}\right\|_{2}\). In previous works that focus on the convergence of neural network training, it is shown that this change of \(\mathbf{u}\) is closely related to the norm of the gradient. Under the assumption of smoothness, the norm of the gradient is bounded by a factor times the optimality gap at the current point, which is shown to enjoy linear convergence by induction (Du et al., 2019b; Nguyen, 2021). However, in the case of Nesterov's momentum, we cannot directly utilize this relationship, since the gradient is evaluated at the intermediate step \((\mathbf{y}_{k},\mathbf{v}_{k})\), and, while we know that the optimality gap at \((\mathbf{x}_{k},\mathbf{u}_{k})\) converges linearly, we have very little knowledge about the optimality gap at \((\mathbf{y}_{k},\mathbf{v}_{k})\)._ To tackle this difficulty, our analysis starts with a careful bound on \(\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{2}^{2}\), utilizing the convexity with respect to \(\mathbf{x}\) to characterize the inner product \(\langle\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k}),\mathbf{x}_{k}-\mathbf{x}_{k-1}\rangle\). After that, we derive a bound on \(\|\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) using a combination of \(\left\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\right\|_{2}^{2}\) and \(\left\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\right\|_{2}^{2}\). Lastly, we relate \(\|\nabla_{2}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) to \(\|\nabla_{1}f(\mathbf{y}_{k},\mathbf{v}_{k})\|_{2}^{2}\) using the following gradient dominance property. **Lemma 4**.: _Suppose that Assumptions 1, 3, 4, and 6 hold. Then, we have:_ \[\|\nabla_{2}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\leq\frac{G_{1}^{2}L_{2}}{\mu}\left\|\nabla_{1}f(\mathbf{x},\mathbf{u})\right\|_{2}^{2},\quad\forall\mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)};\;\mathbf{u}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}.\] Lemma 4 establishes an upper bound on \(\|\nabla_{2}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\) in terms of \(\|\nabla_{1}f(\mathbf{x},\mathbf{u})\|_{2}^{2}\). Direct application of this result contributes to the bound on \(\left\|\mathbf{u}_{k+1}-\mathbf{u}_{k}\right\|_{2}\).
Intuitively, this lemma implies that the effect of a gradient update on \(\mathbf{u}\) is less significant than its effect on \(\mathbf{x}\).

### 4.3 Acceleration of Nesterov's Momentum

After resolving the two technical difficulties stated above, we are ready to derive our main result. We first state the main theorem and provide a sketch of its proof; the detailed proof is deferred to Appendix D. **Theorem 3**.: _Let Assumptions 1-6 hold. Consider Nesterov's momentum given by (3) with initialization \(\{\mathbf{x}_{0},\mathbf{u}_{0}\}=\{\mathbf{y}_{0},\mathbf{v}_{0}\}\). There exist absolute constants \(c,C_{1},C_{2}>0\), such that, if \(\mu,L_{1},L_{2},G_{1},G_{2}\) and \(R_{\mathbf{x}},R_{\mathbf{u}}\) satisfy:_ \[G_{1}^{4} \leq\frac{C_{1}\mu^{2}}{L_{2}(L_{2}+1)^{2}}\left(\frac{1-\beta}{1+\beta}\right)^{3};\quad G_{1}^{2}G_{2}^{2}\leq\frac{C_{2}\mu^{3}}{L_{2}(L_{2}+1)\sqrt{\kappa}}\left(\frac{1-\beta}{1+\beta}\right)^{2}; \tag{4}\] \[R_{\mathbf{x}} \geq\frac{36}{c}\sqrt{\kappa}\left(\frac{\eta(L_{2}+1)}{1-\beta}\right)^{\frac{1}{2}}(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}};\] \[R_{\mathbf{u}} \geq\frac{36}{c}\sqrt{\kappa}\left(\frac{\eta G_{1}^{2}L_{2}(L_{2}+1)(1+\beta)^{3}}{\mu\beta(1-\beta)^{3}}\right)^{\frac{1}{2}}(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}},\] _and, if we choose \(\eta=\nicefrac{{c}}{{L_{1}}}\), \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), then \(\mathbf{x}_{k},\mathbf{y}_{k}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)}\) and \(\mathbf{u}_{k},\mathbf{v}_{k}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}\) for all \(k\in\mathbb{N}\), and Nesterov's recursion converges according to:_ \[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq 2\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star}). \tag{5}\] Theorem 3 shows that, under Assumptions 1-6 with sufficiently small \(G_{1}\) and \(G_{2}\) as in (4), Nesterov's iteration enjoys accelerated convergence, as in (5). Moreover, the iterates of Nesterov's momentum \(\{(\mathbf{x}_{k},\mathbf{y}_{k})\}_{k=1}^{\infty}\) and \(\{(\mathbf{u}_{k},\mathbf{v}_{k})\}_{k=1}^{\infty}\) stay in balls around the initialization with the radii in (4). To better interpret our result, we first focus on (4). By our choice of \(\beta\), we have that \(1-\beta=\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\) and \(1+\beta=\Theta\left(1\right)\). Therefore, the requirement on \(G_{1},G_{2}\) in (4) can be simplified to \(G_{1}^{4}\leq O\left(\nicefrac{{\mu^{7/2}}}{{L_{1}^{3/2}L_{2}^{3}}}\right)\) and \(G_{1}^{2}G_{2}^{2}\leq O\left(\nicefrac{{\mu^{9/2}}}{{L_{1}^{3/2}L_{2}^{3}}}\right)\). This simplified condition implies that we need smaller \(G_{1}\) and \(G_{2}\) if \(\mu\) is small and if \(L_{1}\) and \(L_{2}\) are large. For the requirement on \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) in (4), we can simplify with \(\eta=O\left(\nicefrac{{1}}{{L_{1}}}\right)\) and \(\beta=\Theta\left(1\right)\). In this way, \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) reduce to \(\Omega\left(\nicefrac{{L_{1}^{3/4}L_{2}^{3/2}}}{{\mu^{3/4}}}\right)\cdot(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}}\) and \(\Omega\left(\nicefrac{{G_{1}L_{1}^{3/4}L_{2}}}{{\mu^{7/4}}}\right)\cdot(f(\mathbf{x}_{0},\mathbf{u}_{0})-f^{\star})^{\frac{1}{2}}\), respectively. Both quantities grow with larger \(L_{1}\) and \(L_{2}\) and smaller \(\mu\). Notably, \(R_{\mathbf{u}}\) also scales with \(G_{1}\).
Focusing on the convergence property in (5), we can conclude that Nesterov's momentum achieves an accelerated convergence rate of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\), compared with the \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) rate in Theorem 2. We sketch the proof of Theorem 3 below. Sketch of Proof.: Setting \(\mathbf{y}_{-1}=\mathbf{y}_{0}\) and \(\mathbf{v}_{-1}=\mathbf{v}_{0}\), our proof utilizes the following Lyapunov function: \[\phi_{k}=f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}+\mathcal{Q}_{1}\left\|\mathbf{z}_{k}-\mathbf{x}_{k-1}^{\star}\right\|_{2}^{2}+\frac{\eta}{8}\left\|\nabla_{1}f(\mathbf{y}_{k-1},\mathbf{v}_{k-1})\right\|_{2}^{2};\quad\mathbf{x}_{k}^{\star}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{d_{1}}}f(\mathbf{x},\mathbf{v}_{k}),\] with properly chosen \(\mathcal{Q}_{1}\) and \(\mathbf{z}_{k}\). The proof recursively establishes the following three properties: \[\left(1-\frac{c}{2\sqrt{\kappa}}\right)^{-1}\phi_{k+1}-\phi_{k}\leq\frac{c}{4\sqrt{\kappa}}\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0}; \tag{6}\] \[\left\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\right\|_{2}^{2}\leq\mathcal{Q}_{2}\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0};\quad\left\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\right\|_{2}^{2}\leq G_{1}^{2}\mathcal{Q}_{3}\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0}. \tag{7}\] By our choice of \(\phi_{k}\), Equation (6) implies that: \[f(\mathbf{x}_{k},\mathbf{u}_{k})-f^{\star}\leq\phi_{k}\leq\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}\phi_{0}, \tag{8}\] which further implies the convergence in Equation (5). The following lemma shows that, if Equation (8) holds for all \(k\leq\hat{k}\), we can guarantee Equation (7) with \(k=\hat{k}+1\). **Lemma 5**.: _Let the assumptions of Theorem 3 hold. If Equation (8) holds for all \(k\leq\hat{k}\), then Equation (7) holds for \(k=\hat{k}+1\) with \(\mathcal{Q}_{2}=\frac{6\eta L_{2}(L_{2}+1)}{1-\beta}\) and \(\mathcal{Q}_{3}=\frac{6\eta L_{2}(L_{2}+1)(1+\beta)^{3}}{\mu\beta(1-\beta)^{3}}\)._ With Lemma 5, we can first guarantee that, up to iteration \(\hat{k}+1\), \(\mathbf{x}_{k}\) and \(\mathbf{u}_{k}\) stay in \(\mathcal{B}_{R_{\mathbf{x}}}^{(1)}\) and \(\mathcal{B}_{R_{\mathbf{u}}}^{(2)}\). Thus, we can utilize Assumptions 1-6 in showing Equation (6) for iteration \(\hat{k}+1\). Moreover, the bound on \(\left\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\right\|_{2}\) connects the following lemma to Equation (6). **Lemma 6**.: _Let the assumptions of Theorem 3 hold. Then, we have:_ \[\left(1-\frac{c}{2\sqrt{\kappa}}\right)^{-1}\phi_{k+1}-\phi_{k}\leq c\beta^{2}\sqrt{\kappa}\left(G_{1}^{2}L_{2}+\frac{8\mathcal{Q}_{1}G_{1}^{2}G_{2}^{2}}{\mu^{2}}\right)\left\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\right\|_{2}^{2}.\] Lemmas 5 and 6, together with small enough \(G_{1}\) and \(G_{2}\), imply Equation (6).

## 5 Examples

We consider two examples that satisfy Assumptions 1-6. By enforcing the requirements of Theorem 3, we show that, although nonconvex and possibly non-smooth, the two models enjoy accelerated convergence when trained with Nesterov's momentum.

### 5.1 Toy example: Additive model

Given matrices \(\mathbf{A}_{1}\in\mathbb{R}^{m\times m},\mathbf{A}_{2}\in\mathbb{R}^{m\times d}\) and a non-linear function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\), we consider \(h(\mathbf{x},\mathbf{u})=\mathbf{A}_{1}\mathbf{x}+\sigma(\mathbf{A}_{2}\mathbf{u})\), the sum of a linear model and a non-linear model.
If we train \(h(\mathbf{x},\mathbf{u})\) with the loss \(g(\mathbf{s})=\frac{1}{2}\left\|\mathbf{s}-\mathbf{b}\right\|_{2}^{2}\) for some label \(\mathbf{b}\in\mathbb{R}^{m}\), the objective can be written as \[f(\mathbf{x},\mathbf{u})=g(\mathbf{A}_{1}\mathbf{x}+\sigma\left(\mathbf{A}_{2}\mathbf{u}\right))=\frac{1}{2}\left\|\mathbf{A}_{1}\mathbf{x}+\sigma\left(\mathbf{A}_{2}\mathbf{u}\right)-\mathbf{b}\right\|_{2}^{2}. \tag{9}\] Due to the non-linearity of \(\sigma\), \(f(\mathbf{x},\mathbf{u})\) is in general non-convex. If we further choose \(\sigma\) to be a non-smooth function such as the ReLU, i.e., \(\sigma(\mathbf{x})_{i}=\max\{0,x_{i}\}\), the objective can also be non-smooth. Yet, assuming that \(\sigma\) is Lipschitz, we can show that \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6. **Lemma 7**.: _Suppose that \(\sigma\) is \(B\)-Lipschitz and \(\sigma_{\min}(\mathbf{A}_{1})>0\). Then \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6 with_ \[\begin{split} R_{\mathbf{x}}=&R_{\mathbf{u}}=\infty;\;\mu=\sigma_{\min}\left(\mathbf{A}_{1}\right)^{2};\;L_{1}=\sigma_{\max}\left(\mathbf{A}_{1}\right)^{2};\;L_{2}=1;\\ G_{1}=&B\sigma_{\max}\left(\mathbf{A}_{2}\right);\;G_{2}=B\sigma_{\max}\left(\mathbf{A}_{1}\right)\sigma_{\max}\left(\mathbf{A}_{2}\right).\end{split} \tag{10}\] Notice that while \(\mu\) and \(L_{1}\) depend entirely on the properties of \(\mathbf{A}_{1}\), both \(G_{1}\) and \(G_{2}\) can be made smaller by choosing \(\mathbf{A}_{2}\) with a small enough \(\sigma_{\max}(\mathbf{A}_{2})\). Intuitively, this means that \(G_{1}\) and \(G_{2}\) can be controlled as long as the component that introduces the non-convexity and non-smoothness, \(\sigma\left(\mathbf{A}_{2}\mathbf{u}\right)\), is small enough. Therefore, we can apply Theorem 3 to the minimization of Equation (9). **Theorem 4**.: _Consider the problem in Equation (9), where \(\sigma\) is a \(B\)-Lipschitz function, and suppose we use Nesterov's momentum to find the minimizer of \(f(\mathbf{x},\mathbf{u})\). Let \(\kappa=\nicefrac{{\sigma_{\max}\left(\mathbf{A}_{1}\right)^{2}}}{{\sigma_{\min}\left(\mathbf{A}_{1}\right)^{2}}}\), and suppose:_ \[\sigma_{\min}\left(\mathbf{A}_{1}\right)\geq\tilde{C}\sigma_{\max}\left(\mathbf{A}_{2}\right)B\kappa^{0.75}, \tag{11}\] _for some large enough constant \(\tilde{C}>0\). Then there exists a constant \(c>0\) such that, if we choose \(\eta=\nicefrac{{c}}{{\sigma_{\max}\left(\mathbf{A}_{1}\right)^{2}}}\) and \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), Nesterov's momentum in Equation (3) converges according to:_ \[f(\mathbf{x}_{k},\mathbf{u}_{k})\leq 2\left(1-\frac{c}{4\sqrt{\kappa}}\right)^{k}f(\mathbf{x}_{0},\mathbf{u}_{0}).\] Notice that the requirement in Equation (11) favors a larger \(\sigma_{\min}(\mathbf{A}_{1})\) and smaller \(\sigma_{\max}(\mathbf{A}_{2}),B\) and \(\kappa\). Using this example, we empirically verify the theoretical result of Theorem 3, as shown in Figure 1. The three rows correspond to the cases of \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\in\{0.01,0.3,5\}\), respectively. The lines in the plots denote the average loss/distance over 10 trials, while the shaded regions denote the standard deviation. Observing the plots in the left column, we can see that in all three cases Nesterov's momentum achieves faster convergence than gradient descent, while the case with the largest \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) exhibits the largest variance across the ten trials.
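Before interpreting this trend further, a minimal sketch of the comparison in this example is given below: it instantiates Eq. (9) with a ReLU non-linearity and runs gradient descent against Nesterov's momentum. The dimensions, random seed, iteration count, and the textbook momentum value \(\beta=(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\) are illustrative choices, not the exact constants of Theorem 4 or the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 20
A1 = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # well-conditioned linear part
A2 = 0.01 * rng.standard_normal((m, d))              # small sigma_max(A2) => small G1, G2
b = rng.standard_normal(m)

def f(x, u):  # objective in Eq. (9)
    return 0.5 * np.sum((A1 @ x + np.maximum(A2 @ u, 0.0) - b) ** 2)

def grad_f(x, u):
    r = A1 @ x + np.maximum(A2 @ u, 0.0) - b         # residual
    return A1.T @ r, A2.T @ ((A2 @ u > 0) * r)       # subgradient through ReLU

L1 = np.linalg.norm(A1, 2) ** 2                      # smoothness w.r.t. x
mu = np.linalg.svd(A1, compute_uv=False)[-1] ** 2    # strong convexity w.r.t. x
kappa = L1 / mu
eta, beta = 1.0 / L1, (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

x0, u0 = rng.standard_normal(m), rng.standard_normal(d)
xg, ug = x0.copy(), u0.copy()                        # gradient descent iterates
x, u = x0.copy(), u0.copy()                          # Nesterov iterates
y, v = x0.copy(), u0.copy()
for _ in range(200):
    gx, gu = grad_f(xg, ug)                          # plain gradient descent step
    xg, ug = xg - eta * gx, ug - eta * gu
    gx, gu = grad_f(y, v)                            # Nesterov's momentum, Eq. (3)
    x_new, u_new = y - eta * gx, v - eta * gu
    y, v = x_new + beta * (x_new - x), u_new + beta * (u_new - u)
    x, u = x_new, u_new
print(f"GD loss: {f(xg, ug):.3e}   Nesterov loss: {f(x, u):.3e}")
```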
Recall that \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) in our example controls the magnitude of \(G_{1}\) and \(G_{2}\). Thus, this phenomenon shows that when \(G_{1}\) and \(G_{2}\) become larger, the theoretical guarantee in Theorem 3 begins to break down and the result depends more on the initialization. The figures in the right column plot the evolution of \(\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\|_{2}^{2}\) and \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\). All three figures show that the two quantities decrease linearly, which supports the linear decay established in Lemma 5. Moreover, as \(\sigma_{\max}\left(\mathbf{A}_{2}\right)\) increases, the relative magnitude of \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\) to \(\|\mathbf{x}_{k}-\mathbf{x}_{k-1}\|_{2}^{2}\) also increases (the line for "\(\|\Delta\mathbf{u}_{k}\|_{2}\)" gets closer to that for "\(\|\Delta\mathbf{x}_{k}\|_{2}\)"). This corresponds to the fact that the bound on \(\|\mathbf{u}_{k}-\mathbf{u}_{k-1}\|_{2}^{2}\) scales with \(G_{1}\), as shown in Lemma 5.

### 5.2 Deep ReLU Neural Networks

Consider the \(\Lambda\)-layer ReLU neural network with layer widths \(\{d_{\ell}\}_{\ell=0}^{\Lambda}\). Denoting the number of training samples by \(n\), we consider the input and label of the training data given by \(\mathbf{X}\in\mathbb{R}^{n\times d_{0}}\) and \(\mathbf{Y}\in\mathbb{R}^{n\times d_{\Lambda}}\). Let the weight matrix in the \(\ell\)-th layer be \(\mathbf{W}_{\ell}\). We use \(\boldsymbol{\theta}=\{\mathbf{W}_{\ell}\}_{\ell=1}^{\Lambda}\) to denote the collection of all weights and \(\sigma(\mathbf{A})_{ij}=\max\{0,a_{ij}\}\) to denote the ReLU function. Then, the output of each layer is given by: \[\mathbf{F}_{\ell}\left(\boldsymbol{\theta}\right)=\begin{cases}\mathbf{X},&\text{if }\ell=0;\\ \sigma\left(\mathbf{F}_{\ell-1}\left(\boldsymbol{\theta}\right)\mathbf{W}_{\ell}\right),&\text{if }\ell\in[\Lambda-1];\\ \mathbf{F}_{\Lambda-1}\left(\boldsymbol{\theta}\right)\mathbf{W}_{\Lambda},&\text{if }\ell=\Lambda.\end{cases} \tag{12}\] We consider the training of \(\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)\) over the MSE loss, as in \(\mathcal{L}(\boldsymbol{\theta})=\frac{1}{2}\left\|\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)-\mathbf{Y}\right\|_{F}^{2}\). We can interpret this scenario using our partition model in (1). Let \(g(\mathbf{s})=\frac{1}{2}\left\|\mathbf{s}-\mathtt{V}\left(\mathbf{Y}\right)\right\|_{2}^{2}\). If we partition the parameter \(\boldsymbol{\theta}\) into \(\mathbf{x}=\mathtt{V}\left(\mathbf{W}_{\Lambda}\right)\) and \(\mathbf{u}=\left(\mathtt{V}\left(\mathbf{W}_{1}\right),\ldots,\mathtt{V}\left(\mathbf{W}_{\Lambda-1}\right)\right)\), we can write: \[h(\mathbf{x},\mathbf{u})=\mathtt{V}\left(\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)\right);\quad f(\mathbf{x},\mathbf{u})=\frac{1}{2}\left\|\mathtt{V}\left(\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)\right)-\mathtt{V}\left(\mathbf{Y}\right)\right\|_{2}^{2}=\frac{1}{2}\left\|\mathbf{F}_{\Lambda}\left(\boldsymbol{\theta}\right)-\mathbf{Y}\right\|_{F}^{2}. \tag{13}\] For some given \(\mathbf{x}\) and \(\mathbf{u}\), we let \(\mathbf{W}_{\Lambda}(\mathbf{x})\) be the matrix such that \(\mathbf{x}=\mathtt{V}\left(\mathbf{W}_{\Lambda}(\mathbf{x})\right)\); similarly, we let \(\mathbf{W}_{\ell}(\mathbf{u})\) with \(\ell\in[\Lambda-1]\) be the matrices such that \(\mathbf{u}=\left(\mathtt{V}\left(\mathbf{W}_{1}(\mathbf{u})\right),\ldots,\mathtt{V}\left(\mathbf{W}_{\Lambda-1}(\mathbf{u})\right)\right)\).
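To make the layer-wise partition concrete, the following is a minimal sketch (with toy widths, nowhere near the over-parameterized regime required later) of the forward pass in Eq. (12) and the split of \(\boldsymbol{\theta}\) into \(\mathbf{x}=\mathtt{V}\left(\mathbf{W}_{\Lambda}\right)\) and \(\mathbf{u}\); all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, widths = 8, [4, 16, 16, 3]        # n samples; widths d_0, d_1, d_2, d_Lambda (Lambda = 3)
X = rng.standard_normal((n, widths[0]))
W = [rng.standard_normal((widths[i], widths[i + 1])) / np.sqrt(widths[i])
     for i in range(len(widths) - 1)]

def forward(W, X):
    """Eq. (12): ReLU on layers 1..Lambda-1, linear last layer."""
    F = X
    for W_l in W[:-1]:
        F = np.maximum(F @ W_l, 0.0)
    return F @ W[-1]

# Partition of theta as in Eq. (13): x gets the last layer, u the rest.
x = W[-1].reshape(-1)                                     # x = V(W_Lambda)
u = np.concatenate([W_l.reshape(-1) for W_l in W[:-1]])   # u = (V(W_1), ..., V(W_{Lambda-1}))
print(forward(W, X).shape, x.shape, u.shape)
```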
Denote \(\lambda_{\Lambda}=\sup_{\mathbf{x}\in\mathcal{B}_{R_{\mathbf{x}}}^{(1)}}\sigma_{\max}\left(\mathbf{W}_{\Lambda}(\mathbf{x})\right)\) and \(\lambda_{\ell}=\sup_{\mathbf{u}\in\mathcal{B}_{R_{\mathbf{u}}}^{(2)}}\sigma_{\max}\left(\mathbf{W}_{\ell}(\mathbf{u})\right)\) for \(\ell\in[\Lambda-1]\). Moreover, denote \(\lambda_{i\to j}=\prod_{\ell=i}^{j}\lambda_{\ell}\). Then we can show that \(f(\mathbf{x},\mathbf{u})\) defined in (13) satisfies Assumptions 1-6. **Lemma 8**.: _Let \(\boldsymbol{\theta}(0)\) be the initialization of the ReLU network in Equation (12) and \(\alpha_{0}=\sigma_{\min}\left(\mathbf{F}_{\Lambda-1}\left(\boldsymbol{\theta}(0)\right)\right)\). Assume that each \(\mathbf{W}_{\ell}\) is initialized such that \(\|\mathbf{W}_{\ell}(0)\|_{2}\leq\frac{\lambda_{\ell}}{2}\) and \(\alpha_{0}>0\). Then, \(f(\mathbf{x},\mathbf{u})\) satisfies Assumptions 1-6 with:_ \[R_{\mathbf{x}}\leq\tfrac{\lambda_{\Lambda}}{2};\;R_{\mathbf{u}}\leq\tfrac{1}{2}\min_{\ell\in[\Lambda-1]}\lambda_{\ell}\cdot\min\left\{1,\tfrac{\alpha_{0}}{2\sqrt{\Lambda}\|\mathbf{X}\|_{F}\lambda_{1\to\Lambda-1}}\right\};\;\mu=\tfrac{\alpha_{0}^{2}}{2};\] \[L_{1}=\left\|\mathbf{X}\right\|_{F}^{2}\lambda_{1\to\Lambda-1}^{2};\;L_{2}=1;\;G_{1}=\tfrac{\sqrt{\Lambda}\|\mathbf{X}\|_{F}\lambda_{1\to\Lambda-1}}{\min_{\ell\in[\Lambda-1]}\lambda_{\ell}};\;G_{2}=\left(2\left\|\mathbf{X}\right\|_{F}\lambda_{1\to\Lambda}+\left\|\mathbf{Y}\right\|_{F}\right)G_{1}.\] As shown in Lemma 3.3 of Nguyen (2021), with sufficient over-parameterization we can guarantee that \(\alpha_{0}>0\). In order to show the accelerated convergence of Nesterov's momentum when training the network in Equation (12), we need to (i) guarantee that the conditions on \(R_{\mathbf{x}}\) and \(R_{\mathbf{u}}\) in (4) are compatible with the upper bounds in Lemma 8, and (ii) verify that the quantities \(\mu,L_{1},L_{2},G_{1}\) and \(G_{2}\) defined in Lemma 8 satisfy the requirements in (4). Enforcing these two conditions with sufficient over-parameterization gives us the following theorem. **Theorem 5**.: _Consider training the ReLU neural network in Equation (12) using the MSE loss, or equivalently, minimizing \(f(\mathbf{x},\mathbf{u})\) defined in Equation (13) with Nesterov's momentum:_ \[\mathbf{W}_{\ell}(k+1) =\hat{\mathbf{W}}_{\ell}(k)-\eta\nabla_{\mathbf{W}_{\ell}}\mathcal{L}(\hat{\boldsymbol{\theta}}(k))\] \[\hat{\mathbf{W}}_{\ell}(k+1) =\mathbf{W}_{\ell}(k+1)+\beta(\mathbf{W}_{\ell}(k+1)-\mathbf{W}_{\ell}(k))\] _with \(\hat{\boldsymbol{\theta}}(k+1)=\{\hat{\mathbf{W}}_{\ell}(k+1)\}_{\ell=1}^{\Lambda}\), \(\eta=\nicefrac{{c}}{{L_{1}}}\) and \(\beta=\nicefrac{{(4\sqrt{\kappa}-\sqrt{c})}}{{(4\sqrt{\kappa}+7\sqrt{c})}}\), where \(\kappa=\nicefrac{{2L_{1}}}{{\alpha_{0}^{2}}}\) and \(\alpha_{0},L_{1}\) are as defined in Lemma 8. If the width of the neural network satisfies:_ \[d_{\ell}=\Theta\left(m\right)\quad\forall\ell\in[\Lambda-2];\quad d_{\Lambda-1}=\Theta\left(n^{4}m^{2}\right), \tag{14}\] _for some \(m\geq\max\{d_{0},d_{\Lambda}\}\), and we initialize the weights according to:_ \[\left[\mathbf{W}_{\ell}(0)\right]_{ij}\sim\mathcal{N}\left(0,d_{\ell-1}^{-1}\right),\quad\forall\ell\in[\Lambda-1];\quad\left[\mathbf{W}_{\Lambda}(0)\right]_{ij}\sim\mathcal{N}\left(0,d_{\Lambda-1}^{-\frac{3}{2}}\right).\] _Then, with high probability over the initialization, there exists an absolute constant \(c>0\) such that:_ \[\mathcal{L}\left(\boldsymbol{\theta}(k)\right)\leq 2\left(1-\tfrac{c}{4\sqrt{\kappa}}\right)^{k}\mathcal{L}(\boldsymbol{\theta}(0)). \tag{15}\]
As in prior work (Nguyen, 2021), we treat the depth of the neural network \(\Lambda\) as a constant when computing the over-parameterization requirement. Next, we compare our result with Theorem 2.2 and Corollary 3.2 of Nguyen (2021). To start, in order to deal with the additional complexity of Nesterov's momentum introduced in Section 4.2, our over-parameterization of \(d_{\Lambda-1}=\Theta\left(n^{4}m^{2}\right)\) is slightly larger than the over-parameterization of \(d_{\Lambda-1}=\Theta\left(n^{3}m^{3}\right)\) in Corollary 3.2 of Nguyen (2021). Moreover, in Theorem 2.2 of Nguyen (2021), since their choice of \(\eta\) is also \(O\left(\nicefrac{{1}}{{L_{1}}}\right)\), the convergence rate they achieve for gradient descent is \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\). Compared with this rate, Theorem 5 achieves a faster convergence of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\). This shows that Nesterov's momentum enjoys acceleration when training deep ReLU neural networks.

Figure 1: Experiment of learning additive model with gradient descent and Nesterov's momentum.

## 6 Conclusion and Broader Impact

We consider the minimization of a new class of objective functions, namely the partition model, where the set of parameters can be partitioned into two groups and the function is smooth and strongly convex with respect to only one group. This class of objectives is more general than the class of smooth and strongly convex functions. We prove the convergence of both gradient descent and Nesterov's momentum on this class of objectives under certain assumptions and show that Nesterov's momentum achieves an accelerated convergence rate of \(1-\Theta\left(\nicefrac{{1}}{{\sqrt{\kappa}}}\right)\), compared with the \(1-\Theta\left(\nicefrac{{1}}{{\kappa}}\right)\) convergence rate of gradient descent. Moreover, we considered the training of the additive model and of deep ReLU networks as two concrete examples of the partition model and showed the acceleration of Nesterov's momentum when training these two models. Our work has two limitations. First, since we do not make convexity or smoothness assumptions on the subset \(\mathbf{u}\) of the parameters, we have to make a strong assumption on the optimality condition, as in Assumption 6. It is possible that this assumption can be relaxed if one assumes better structure on \(\mathbf{u}\). Second, in the examples we provided, we only considered simple parameter partition schemes, including partition by submodel as in Section 5.1 and partition by layer as in Section 5.2. Considering the complexity of modern neural networks, it remains a valid open question whether more complicated partition schemes, such as selecting a subset of the weights in each layer, will still satisfy the assumptions in our framework. Future work can focus on resolving these two limitations, as well as on extending the analysis to different neural network architectures. This paper focuses mainly on the theoretical understanding of optimization and machine learning methods. Due to the nature of our work, we do not anticipate negative social impacts. Further impact on society depends on the application of machine learning models.
2310.15872
KirchhoffNet: A Scalable Ultra Fast Analog Neural Network
In this paper, we leverage a foundational principle of analog electronic circuitry, Kirchhoff's current and voltage laws, to introduce a distinctive class of neural network models termed KirchhoffNet. Essentially, KirchhoffNet is an analog circuit that can function as a neural network, utilizing its initial node voltages as the neural network input and the node voltages at a specific time point as the output. The evolution of node voltages within the specified time is dictated by learnable parameters on the edges connecting nodes. We demonstrate that KirchhoffNet is governed by a set of ordinary differential equations (ODEs), and notably, even in the absence of traditional layers (such as convolution layers), it attains state-of-the-art performances across diverse and complex machine learning tasks. Most importantly, KirchhoffNet can be potentially implemented as a low-power analog integrated circuit, leading to an appealing property -- irrespective of the number of parameters within a KirchhoffNet, its on-chip forward calculation can always be completed within a short time. This characteristic makes KirchhoffNet a promising and fundamental paradigm for implementing large-scale neural networks, opening a new avenue in analog neural networks for AI.
Zhengqi Gao, Fan-Keng Sun, Ron Rohrer, Duane S. Boning
2023-10-24T14:28:00Z
http://arxiv.org/abs/2310.15872v3
# KirchhoffNet: A Circuit

###### Abstract

In this paper, we exploit a fundamental principle of analog electronic circuitry, Kirchhoff's current law, to introduce a unique class of neural network models that we refer to as _KirchhoffNet_. KirchhoffNet establishes close connections with message passing neural networks and continuous-depth networks. We demonstrate that even in the absence of any traditional layers (such as convolution, pooling, or linear layers), KirchhoffNet attains 98.86% test accuracy on the MNIST dataset, comparable with state of the art (SOTA) results. What makes KirchhoffNet more intriguing is its potential in the realm of hardware. Contemporary deep neural networks are conventionally deployed on GPUs. In contrast, KirchhoffNet can be physically realized by an analog electronic circuit. Moreover, we justify that irrespective of the number of parameters within a KirchhoffNet, its forward calculation can always be completed within \(1/f\) seconds, with \(f\) representing the hardware's clock frequency. This characteristic introduces a promising technology for implementing ultra-large-scale neural networks.
## 2 Method

### 2.1 KCL as Message Passing Criterion

KirchhoffNet or a circuit 2 is best described by a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{n_{0},n_{1},\cdots,n_{N}\}\) represents \((N+1)\) nodes, and \(\mathcal{E}\) specifies the connections (with directions) among nodes. The \(n\)-th node is associated with a scalar _nodal voltage_ \(v_{n}\in\mathbb{R}\), where \(n\in\{0,1,\cdots,N\}\). Notably, we emphasize that the node \(n_{0}\) holds a special designation and is referred to as a _ground node_, as its associated \(v_{0}\) is fixed to \(0\), following the convention of circuit theory (Desoer & Kuh, 1969; Van Valkenburg, 1974). For later simplicity, we denote \(\mathbf{v}=[v_{1},v_{2},\cdots,v_{N}]^{T}\in\mathbb{R}^{N}\). If the set \(\mathcal{E}\) contains an edge from the source node \(n_{s}\) to the destination node \(n_{d}\), then a _branch current_ \(i_{sd}\in\mathbb{R}\) will be generated flowing from \(n_{s}\) to \(n_{d}\). Now, KCL states that: Footnote 2: In this paper, we will interchangeably use these two terms. \[\sum_{s\in\mathcal{N}^{s}(n_{j})}i_{sj}=\sum_{d\in\mathcal{N}^{d}(n_{j})}i_{jd} \tag{1}\] where \(\mathcal{N}^{s}(n_{j})\) represents all nodes that are connected to \(n_{j}\) and are also the source nodes for the connections. Similarly, \(\mathcal{N}^{d}(n_{j})\) represents all nodes that are connected to \(n_{j}\) and are the destinations. In other words, KCL states that the sum of branch currents flowing into the node \(n_{j}\), i.e., the left-hand side of Eq. (1), equals the sum of branch currents flowing out of the node \(n_{j}\), i.e., the right-hand side of Eq. (1). Physically, the branch current \(i_{sd}\) is generated by a _device_ connecting from \(n_{s}\) to \(n_{d}\), and the value of \(i_{sd}\) is related to the device parameter \(\theta\).
For instance, several straightforward current-voltage relations are: \[\text{Source branch:}\quad i_{sd}=\theta \tag{2}\] \[\text{Conductive branch:}\quad i_{sd}=\theta(v_{s}-v_{d})\] (3) \[\text{Capacitive branch:}\quad i_{sd}=\theta(\dot{v}_{s}-\dot{v}_{d}) \tag{4}\] where we use \(\dot{v}=dv/dt\) to represent the derivative of a nodal voltage \(v\) with respect to the time variable \(t\). 3 Drawing inspiration from circuit theory (Desoer & Kuh, 1969; Van Valkenburg, 1974), we know that such a KirchhoffNet \(\mathcal{G}\) with device current-voltage relation shown in Eq. (3) and governed by KCL shown in Eq. (1) can be formally solved by \(\mathbf{C}\dot{\mathbf{v}}+\mathbf{G}\mathbf{v}=\mathbf{0}\), where \(\{\mathbf{C}\in\mathbb{R}^{N\times N},\mathbf{G}\in\mathbb{R}^{N\times N}\}\) are determined by all parameters \(\theta\) on all edges (devices). Footnote 3: Note that the first, the second, and the third line are respectively a current source, a conductance and a capacitance with parameter value equal to \(\theta\). At this point, we make three significant comments: (i) such a system has long been studied in elementary circuit theory, called an _RC circuit_, and is physically realizable. (ii) It has an ordinary differential equation (ODE) form and naturally coincides with the formulation of continuous-depth neural networks (Chen et al., 2018), if we view the nodal voltage \(\mathbf{v}\) as the variable. (iii) An RC circuit alone does not have enough expressive power, as the function governing the ODE dynamics is linear in \(\mathbf{v}\). Now, to improve the representation capability, we deliberately introduce non-linear current-voltage relations and design our KirchhoffNet as Figure 1 shows. Specifically, we enforce that all nodes \(\{n_{1},n_{2},\cdots,n_{N}\}\) connect to the ground node \(n_{0}\) via a capacitive device shown in Eq. (4) with parameter value \(\theta=1\). We emphasize that there will be \(N\) such capacitive devices in total, and all their parameter values \(\theta\) are not learnable and are always fixed to \(1\) in training. Then, for the rest of the connections among nodes specified by \(\mathcal{E}\), their parameters are all learnable and have the same non-linear current-voltage relation \(i_{sd}=g(v_{s},v_{d},\boldsymbol{\theta}_{sd})\), where \(\boldsymbol{\theta}_{sd}\) represents the learnable parameter, and \(g(\cdot,\cdot,\cdot)\) is a non-linear function. It can be shown that the dynamics of \(v_{j}\) (where \(j=1,2,\cdots,N\)) follow: \[\begin{split}\theta\dot{v}_{j}=1\times\dot{v}_{j}&=\sum_{s\in\mathcal{N}^{s}(n_{j})}g(v_{s},v_{j},\boldsymbol{\theta}_{sj})\\ &\quad-\sum_{d\in\mathcal{N}^{d}(n_{j})}g(v_{j},v_{d},\boldsymbol{\theta}_{jd})\\ &=\sum_{s\in\mathcal{N}^{s}(n_{j})}i_{sj}-\sum_{d\in\mathcal{N}^{d}(n_{j})}i_{jd}\end{split} \tag{5}\] Our present implementation favors the function \(g\): \[g(v_{s},v_{d},\boldsymbol{\theta}_{sd})=\theta_{sd,1}\times\text{ReLU}(v_{s}-v_{d}-\theta_{sd,2}) \tag{6}\] where \(\boldsymbol{\theta}_{sd}=[\theta_{sd,1},\theta_{sd,2}]^{T}\). To clarify, the voltage values at \(t=0\) and \(t=1\), denoted as \(\mathbf{v}(0)\) and \(\mathbf{v}(1)\), are respectively the KirchhoffNet input and output. Except for those \(\theta=1\) on the fixed capacitive branches, all other \(\boldsymbol{\theta}\)'s on the non-linear branches are the learnable parameters, which govern how \(\mathbf{v}(0)\) changes to \(\mathbf{v}(1)\). Importantly, multiple non-linear connections can exist between two nodes \(n_{s}\) and \(n_{d}\).
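As a concrete illustration of Eqs. (5) and (6), the following is a minimal numpy sketch of the nodal dynamics. The names `g` and `kcl_rhs` and the edge-list representation are our own illustration choices, not from the paper; the edge list covers only the learnable branches among non-ground nodes, and the fixed unit capacitances to ground are implicit in treating \(\dot{v}_{j}\) as the left-hand side.

```python
import numpy as np

def g(v_s, v_d, theta):
    # Non-linear branch current of Eq. (6): i_sd = theta_1 * ReLU(v_s - v_d - theta_2)
    return theta[0] * max(v_s - v_d - theta[1], 0.0)

def kcl_rhs(v, edges, thetas):
    """KCL dynamics of Eq. (5): dv_j/dt equals inflowing minus outflowing branch
    currents at each non-ground node (unit capacitance to ground assumed).
    v: (N,) voltages of the non-ground nodes; edges: list of (s, d) index pairs;
    thetas: one (theta_1, theta_2) pair per edge."""
    dv = np.zeros_like(v)
    for (s, d), theta in zip(edges, thetas):
        i_sd = g(v[s], v[d], theta)
        dv[d] += i_sd   # current flowing into n_d
        dv[s] -= i_sd   # the same current flowing out of n_s
    return dv
```

Note that because each branch current enters `dv` with opposite signs at its two endpoints, the sketch conserves total current by construction, which is exactly what KCL demands.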
Figure 1: An illustration of KirchhoffNet. The dashed grey lines represent fixed capacitive branches, while the solid black lines represent learnable non-linear branches.
**Important Remarks.** In a broader context, KirchhoffNet can be seen as a fusion of continuous-depth models (Chen et al., 2018) with message passing neural networks (MPNNs) (Gilmer et al., 2017). However, there are crucial distinctions that we would like to highlight. First, the message function in an MPNN is usually parameterized by a deep neural network with discrete layers containing many parameters, while the counterpart in our KirchhoffNet is the function \(g\), which only has two parameters. Second, a node in an MPNN can have a vectorial node feature, while in our case, each node only has a scalar nodal voltage. 4 Third, KirchhoffNet does not have traditional layers such as convolution or linear layers. Fourth, it is clear that KirchhoffNet can exactly map to a real circuit, which neither Neural ODEs nor MPNNs can. For detailed discussions on the hardware potential of KirchhoffNet, please refer to Section 2.2. Footnote 4: To mitigate this limitation in KirchhoffNet, we can overcome it through a stacking approach. For instance, we can employ a KirchhoffNet with 21 nodes and utilize \([v_{2n-1},v_{2n}]^{T}\) as the node feature for the \(n\)-th node. In this setup, with \(n\) ranging from 1 to 10, we achieve a 2-dimensional nodal feature. Now that we have defined KirchhoffNet, our focus shifts to how it can be effectively trained. In the neural ordinary differential equation (Neural ODE) literature (Chen et al., 2018), it has been demonstrated that the adjoint method can be employed to efficiently compute the derivative \(dL/d\boldsymbol{\theta}\) of a model in the form \(\dot{\mathbf{v}}=f(\mathbf{v},t,\boldsymbol{\theta})\), where \(L\) represents a loss function and \(f\) is a deep neural network with learnable parameters \(\boldsymbol{\theta}\). Obviously, Eq. (5) satisfies this format, and thus the adjoint method is applicable to KirchhoffNet. However, what is surprising and previously unknown in our context is that we can prove that the adjoint ODE system (Chen et al., 2018) reduces to a circuit of the same topology as the original circuit, which is summarized by the following theorem. **Theorem 2.1**.: _For a KirchhoffNet with topology \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), its adjoint circuit has exactly the same topology \(\mathcal{G}^{\text{adj}}=(\mathcal{V},\mathcal{E})\), but all the branches become conductive as defined in Eq. (3). For a specific device connecting from \(n_{s}\) to \(n_{d}\) in \(\mathcal{G}^{\text{adj}}\), its parameter \(\theta_{sd}^{\text{adj}}\in\mathbb{R}\) is defined by the nodal voltages \(\{v_{s},v_{d}\}\) and the branch current \(i_{sd}\) in the original KirchhoffNet \(\mathcal{G}\):_ \[\theta_{sd}^{\text{adj}}=\frac{di_{sd}}{d(v_{s}-v_{d})} \tag{7}\] _Simulating \(\mathcal{G}\) forward in time and \(\mathcal{G}^{\text{adj}}\) backward in time yields the loss gradient._ ### Hardware Implementation and Potentials The most captivating aspect of KirchhoffNet is its profound connection to physical reality. In the real world, we must account for physical units, typically represented in SI units, where time is measured in seconds (s), voltage in volts (V), current in amperes (A), and capacitance in farads (F). Our formulation in Section 2.1 essentially affirms that a KirchhoffNet can be realized by a physical circuit.
This circuit would involve \(N\) fixed capacitances, each with a value of 1 F, along with many learnable non-linear current-voltage devices. When we set the initial nodal voltage at \(t=0\) s and let the circuit evolve to \(t=1\) s, the forward calculation is completed within this time frame. A one-second forward calculation does not sound appealing, but the key lies in reducing the capacitance value, which can make the forward calculation much faster. To understand this mathematically, we take the simplest method for solving ODEs, the forward Euler method, as an illustration. Essentially, the forward calculation of KirchhoffNet needs to solve the ODE system shown in Eq. (5) from \(t=0\) s to \(t=1\) s. The forward Euler method accomplishes this by using the numerical differentiation \(\dot{v}\approx[v(t+\Delta t)-v(t)]/\Delta t\). Substituting this expression into the left-hand side of Eq. (5) and simplifying, we obtain: \[v_{j}(t+\Delta t)=v_{j}(t)+\frac{\Delta t}{\theta}\left[\sum_{s\in\mathcal{N}^{s}(n_{j})}i_{sj}-\sum_{d\in\mathcal{N}^{d}(n_{j})}i_{jd}\right] \tag{8}\] where the terms inside the bracket are dependent on time \(t\). Thus, when the circuit is solved at time \(t\), Eq. (8) determines the circuit at time \((t+\Delta t)\). The step size \(\Delta t\) usually has to be very small for the solution to be accurate. We notice that if we simultaneously reduce \(\Delta t\), \(t\), and \(\theta\) by the same scale \(a\), i.e., \(\Delta t\rightarrow\Delta t/a\), \(t\to t/a\), and \(\theta\rightarrow\theta/a\), where \(a\gg 1\), then the value \(v_{j}\) originally attained at time \(t\) will now be attained at time \(t/a\), i.e., \(v_{j}(t)\to v_{j}(t/a)\). Alternatively, this can be understood as simultaneously rescaling the units of time and capacitance. For example, when \(a=10^{9}\), we are effectively using nanofarad (nF) as the unit of capacitance, where \(1\) nF = \(10^{-9}\) F, and nanosecond (ns) as the unit of time, where \(1\) ns = \(10^{-9}\) s. Remarkably, this scenario implies that if we impose the initial nodal voltages on the circuit at \(t=0\) ns and let it run to \(t=1\) ns, it finishes the forward calculation. So far, it is evident that if we can simulate a non-unit KirchhoffNet in software to solve a task using \(\mathbf{v}(0)\) as the input and \(\mathbf{v}(1)\) as the output, this corresponds to a real physical system capable of completing the forward calculation within various timescales, such as one second (s), one millisecond (ms), one microsecond (\(\upmu\)s), one nanosecond (ns), or one picosecond (ps). The specific timescale depends on the capacitance value, whether it is one farad (F), one millifarad (mF), one microfarad (\(\upmu\)F), one nanofarad (nF), or one picofarad (pF). In theory, there is no apparent reason why this scalability cannot be pushed further. What is even more intriguing is that **this statement remains valid, regardless of the number of parameters in KirchhoffNet.** To complete the picture, it is worth noting that a one-picofarad (1 pF) capacitance is physically achievable using modern technology, while measuring nodal voltages at the picosecond (ps) timescale may be challenging. Therefore, ultimately, a physically realized KirchhoffNet will be able to complete one forward calculation in \(1/f\) seconds, with \(f\) representing the maximum possible operating frequency (also known as the clock frequency) of a hardware system (e.g., \(f=3\) GHz).
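To make the discretization concrete, here is a hedged sketch of the forward Euler iteration of Eq. (8), reusing the hypothetical `kcl_rhs` helper from the earlier sketch. Scaling `cap` and `t_end` by the same factor leaves the computed trajectory unchanged, mirroring the unit-rescaling argument above.

```python
def forward_euler(v0, edges, thetas, t_end=1.0, n_steps=1000, cap=1.0):
    """Integrate the KirchhoffNet ODE of Eq. (5) with the update of Eq. (8).
    cap is the (fixed) capacitance theta on the ground branches; rescaling
    cap and t_end by the same factor a leaves v(t_end) unchanged."""
    v = v0.copy()
    dt = t_end / n_steps
    for _ in range(n_steps):
        v = v + (dt / cap) * kcl_rhs(v, edges, thetas)  # Eq. (8)
    return v  # the KirchhoffNet output v(t_end)
```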
Lastly, a critical consideration is the physical implementability of the non-linear current-voltage relation \(g\). In our case, Eq. (6) corresponds to connecting a conductance and a current source in parallel, followed by connecting them in series to a one-sided switch, as shown in Figure 2. This high-level schematic outlines a promising approach for realizing KirchhoffNet with a physical system, but we note that substantial engineering and exploration of approximate implementations (e.g., using standard MOS transistors) will be needed. ## 3 Numerical Results We have demonstrated various intriguing properties of KirchhoffNet; however, its representational capabilities remain unverified. Naturally, this depends significantly on the topology \(\mathcal{E}\) of the circuit. Through careful topology design, KirchhoffNet can perform common tasks typically handled by traditional deep neural networks, even in the absence of classical layers like linear or convolutional layers. Take image classification on MNIST (LeCun et al., 1998) as an example. MNIST comprises gray-scale handwritten digits from 0 to 9 with a resolution of \(28\times 28\) pixels. We design a KirchhoffNet with \(835\) nodes for this task:
* Node \(n_{0}\) acts as the ground node, with its voltage \(v_{0}\) fixed to \(0\) regardless of time \(t\).
* All \(28^{2}=784\) pixels are flattened and taken as the initial nodal voltages for nodes \(n_{1}\) to \(n_{784}\).
* Nodes \(n_{785}\) to \(n_{824}\) are introduced to enhance the representational power of the ODE, inspired by (Dupont et al., 2019). Their nodal voltages at \(t=0\) are initialized to zeros.
* Nodes \(n_{825}\) to \(n_{834}\) have initial nodal voltages set to zeros, and their values at time \(t=1\) are used to calculate the cross-entropy loss.
Our intuition behind this design is that we use \(n_{1}\) to \(n_{784}\) to learn a good feature map of the digit, that the last 10 nodes (\(n_{825}\sim n_{834}\)) serve as class predictions, and that the middle 40 nodes (\(n_{785}\sim n_{824}\)) are used to boost model capacity. We trained this KirchhoffNet using a learning rate of \(10^{-4}\), an AdamW optimizer, and 250 epochs, with data augmentation. The resulting model achieved an accuracy of \(98.86\)% on the test dataset, which is comparable to the state-of-the-art performance of convolution-based neural networks (LeCun et al., 1998; Chen et al., 2018; Dupont et al., 2019). ## 4 Conclusions In this paper, we define a novel class of neural network models, termed _KirchhoffNet_, based on a fundamental principle of analog electronic circuitry, Kirchhoff's current law (KCL). KirchhoffNet is closely related to continuous-depth message passing neural networks, but it exhibits critical distinctions. Additionally, we provide a justification for deploying KirchhoffNet on an actual electronic circuit, enabling it to complete forward inference in \(1/f\) seconds, regardless of the number of model parameters. We design an example KirchhoffNet architecture and achieve a 98.86% test accuracy on the MNIST dataset. Our future endeavors will involve testing it on other common datasets and tasks. ## Acknowledgements This work is subject to change. We welcome any feedback. Zhengqi Gao would like to thank Zihui Xue (UT-Austin) for her support on computational matters.
Figure 3: Top row: An example of the input \(28\times 28\) image on the left and the learned first 784 nodal voltage values at \(t=1\) reshaped on the right.
Bottom row: Evolution of the last 10 nodal voltages (after applying Softmax) over time. Except for the probability corresponding to the 2nd class, all others gradually reduce to zero.
Figure 2: Left: The schematic of a composite device made up of a conductance, a current source, and a one-sided switch, to realize the function \(g\) shown in Eq. (6). Right: The current-voltage relation is plotted. Without the switch, the current-voltage relation follows a linear pattern, as indicated by the dashed blue line. Adding the one-sided switch truncates the relation into the required red line.
2302.08415
Temporal Graph Neural Networks for Irregular Data
This paper proposes a temporal graph neural network model for forecasting of graph-structured irregularly observed time series. Our TGNN4I model is designed to handle both irregular time steps and partial observations of the graph. This is achieved by introducing a time-continuous latent state in each node, following a linear Ordinary Differential Equation (ODE) defined by the output of a Gated Recurrent Unit (GRU). The ODE has an explicit solution as a combination of exponential decay and periodic dynamics. Observations in the graph neighborhood are taken into account by integrating graph neural network layers in both the GRU state update and predictive model. The time-continuous dynamics additionally enable the model to make predictions at arbitrary time steps. We propose a loss function that leverages this and allows for training the model for forecasting over different time horizons. Experiments on simulated data and real-world data from traffic and climate modeling validate the usefulness of both the graph structure and time-continuous dynamics in settings with irregular observations.
Joel Oskarsson, Per Sidén, Fredrik Lindsten
2023-02-16T16:47:55Z
http://arxiv.org/abs/2302.08415v1
# Temporal Graph Neural Networks for Irregular Data ###### Abstract This paper proposes a temporal graph neural network model for forecasting of graph-structured irregularly observed time series. Our TGNN4I model is designed to handle both irregular time steps and partial observations of the graph. This is achieved by introducing a time-continuous latent state in each node, following a linear Ordinary Differential Equation (ODE) defined by the output of a Gated Recurrent Unit (GRU). The ODE has an explicit solution as a combination of exponential decay and periodic dynamics. Observations in the graph neighborhood are taken into account by integrating graph neural network layers in both the GRU state update and predictive model. The time-continuous dynamics additionally enable the model to make predictions at arbitrary time steps. We propose a loss function that leverages this and allows for training the model for forecasting over different time horizons. Experiments on simulated data and real-world data from traffic and climate modeling validate the usefulness of both the graph structure and time-continuous dynamics in settings with irregular observations. ## 1 Introduction Many real-world systems can be modeled as graphs. When data about such systems is collected over time, the resulting time series has additional structure induced by the graph topology. Examples of such temporal graph data are the traffic speed in the road network (Li et al., 2018) and the spread of disease in different regions (Rozemberczki et al., 2021). Building accurate machine learning models in this setting requires taking both the temporal and graph structure into account. While many works have studied the problem of modeling temporal graph data (Wu et al., 2020), these approaches generally assume a constant sampling rate and no missing observations. In real data it is not uncommon to have irregular or missing observations due to non-synchronous measurements or errors in the data collection process. Dealing with such irregularities is especially challenging in the graph setting, as node observations are heavily interdependent. While observations in one node could be modeled using existing approaches for irregular time series (Rubanova et al., 2019; Schirmer et al., 2022), the situation becomes complicated when irregular observations in different nodes occur at different times. In this paper we tackle two kinds of irregular observations in graph-structured time series: (1) irregularly spaced observation times, and (2) only a subset of nodes being observed at each time point. We propose the TGNN4I model for time series forecasting. The model uses a time-continuous latent state in each node, which allows for predictions to be made at any time point. The latent dynamics are motivated by a linear Ordinary Differential Equation (ODE) formulation, which has a closed form solution. This ODE solution corresponds to an exponential decay (Che et al., 2018) together with an optional periodic component. New observations are incorporated into the state by a Gated Recurrent Unit (GRU) (Cho et al., 2014). Interactions between the nodes are captured by integrating Graph Neural Network (GNN) layers both in the latent state updates and predictive model. To train our model we introduce a loss function that takes into account the time-continuous model formulation and irregularity in the data.
We evaluate the model on forecasting problems using traffic and climate data of varying degrees of irregularity and a simulated dataset of periodic signals. ## 2 Related Work GNNs are deep learning models for graph-structured data (Gilmer et al., 2017; Wu et al., 2020). By learning representations of nodes, edges or entire graphs, GNNs can be used for many different machine learning tasks. These models have been successfully applied to diverse areas such as weather forecasting (Lam et al., 2022), molecule generation (Zang and Wang, 2020) and video classification (Kosman and Di Castro, 2022). Temporal GNNs additionally model time-varying signals in the graph. This extension to graph-structured time series is achieved by combining GNN layers with recurrent (Li et al., 2018), convolutional (Wu et al., 2019; Yu et al., 2018) or attention (Guo et al., 2019) architectures. While the graph is commonly assumed to be known a priori, some approaches also explore learning the graph structure jointly with the temporal GNN model (Zhang et al., 2022; Wu et al., 2020). Time series forecasting is a well-studied problem and a vast number of methods exist in the literature. Traditional methods in the area include ARIMA models, vector auto-regression and Gaussian Processes (Box et al., 2015; Roberts et al., 2013). Many deep learning approaches have also been applied to time series forecasting. This includes Recurrent Neural Networks (RNNs) (Lazzeri, 2020), temporal convolutional neural networks (Chen et al., 2020) and Transformers (Giuliari et al., 2021). The latent state of an RNN can be extended to continuous time by letting the state decay exponentially in between observations (Che et al., 2018). Such decay mechanisms have been used for modeling data with missing observations (Che et al., 2018), doing imputation (Cao et al., 2018) and parametrizing point processes (Mei and Eisner, 2017). Another way to define time-continuous states is by learning a more general ODE. In neural ODE models (Chen et al., 2018; Kidger, 2021) the latent state is the solution to an ODE defined by a neural network. Neural ODEs have been successfully applied to irregular time series (Rubanova et al., 2019) and can be used to define the dynamics of temporal GNNs (Fang et al., 2021; Poli et al., 2021). Poli et al. (2021) use such a model for graph-structured time series with irregular time steps, but consider the full graph to be observed at each observation time. Also the GraphCON framework of Rusch et al. (2022) combines GNNs with a second order system of ODEs. In GraphCON the time-axis of the ODE is however aligned with the layers of the GNN. This makes the framework suitable for node- and graph-level predictive tasks, rather than time-series modeling. Another related body of work is concerned with using GNNs for data imputation in time series (Cini et al., 2022; Gordon et al., 2021; Omidshafei et al., 2022). These methods generally do not assume that the time series come with some known graph structure. Instead, the GNN is defined on some graph specifically constructed for the purpose of performing imputation. The closest work to ours found in the literature is the LG-ODE model of Huang et al. (2020). They consider the same types of irregularities, but are motivated more by a multi-agent systems perspective. LG-ODE is based on an encoder-decoder architecture and trained by maximizing the Evidence Lower Bound (ELBO).
The encoder builds a spatio-temporal graph of observations and aggregates information using an attention mechanism (Vaswani et al., 2017). The decoder then extrapolates to future times by solving a neural ODE. The encoder-decoder setup differs from our TGNN4I model that sequentially incorporates observations and auto-regressively makes predictions at every time point. Because of the multi-agent motivation Huang et al. (2020) are also more focused on smaller graphs with few interacting entities, but longer forecasting horizons. An extended version of LG-ODE, called CG-ODE (Huang et al., 2021), also aims to learn the graph structure in the form of a weighted adjacency matrix. ## 3 A Temporal GNN for Irregular Observations ### Setting Consider a directed or undirected graph \(\mathcal{G}=(V,E)\) with node set \(V\) and edge set \(E\). Let \(\{t_{i}\}_{i=1}^{N_{t}}\) be a set of (possibly irregular) time points s.t. \(0<t_{1}<\cdots<t_{N_{t}}\). We will here present our model for a single time series, but in general we have a dataset containing multiple time series. Let \(\mathcal{O}_{i}\subseteq V\) be the set of nodes observed at time \(t_{i}\). If \(n\in\mathcal{O}_{i}\), we denote the observed value as \(\mathbf{y}_{i}^{n}\in\mathbb{R}^{d_{y}}\) and any accompanying input features as \(\mathbf{x}_{i}^{n}\in\mathbb{R}^{d_{x}}\). We let \(\mathbf{y}_{i}^{n}=\mathbf{x}_{i}^{n}=\mathbf{0}\) if \(n\notin\mathcal{O}_{i}\). Note that this general setting encompasses a spectrum of irregularity, from single node observations (\(|\mathcal{O}_{i}|=1\ \forall i\)) to fully observed graphs (\(\mathcal{O}_{i}=V\ \forall i\)). A table of notation is given in appendix A. The problem we consider is that of forecasting. At future time points we want to predict the value at each node, given all earlier observations. Since observations are irregular and we want to make predictions at arbitrary times, we need to consider models that can make predictions for any time in the future. We consider a model where at each node \(n\) a latent state \(\mathbf{h}^{n}(t)\in\mathbb{R}^{d_{h}}\) evolves over continuous time. We define the dynamics of \(\mathbf{h}^{n}(t)\) by: (1) how \(\mathbf{h}^{n}(t)\) evolves in between observations, and (2) how \(\mathbf{h}^{n}(t)\) is updated when node \(n\) is observed. If node \(n\) is observed at time \(t_{i}\) we incorporate this observation into the latent state using a GRU cell (Cho et al., 2014). This information can then be used for making predictions at future time points. An overview of our model is given in Figure 1. ### Time-continuous Latent Dynamics Consider a time interval \((t_{i},t_{j})\) where node \(n\) is not observed. During this interval we define the latent state of node \(n\) by the sum \(\mathbf{h}^{n}(t)=\bar{\mathbf{h}}_{i}^{n}+\tilde{\mathbf{h}}^{n}(t)\). The first part \(\bar{\mathbf{h}}_{i}^{n}\) is constant over the time interval, constituting a base level around which the state evolves. The dynamics of \(\tilde{\mathbf{h}}^{n}(t)\) are dictated by a linear ODE of the form \[d\tilde{\mathbf{h}}^{n}(t)=A\tilde{\mathbf{h}}^{n}(t)\:dt \tag{1}\] with \(A\in\mathbb{R}^{d_{h}\times d_{h}}\) and initial condition \(\tilde{\mathbf{h}}^{n}(t_{i})=\tilde{\mathbf{h}}_{i}^{n}\). Over this interval the ODE has a closed form solution (Arrowsmith and Place, 1992) given by \[\tilde{\mathbf{h}}^{n}(t)=\exp(\delta_{t}A)\tilde{\mathbf{h}}_{i}^{n} \tag{2}\] where \(\exp\) is the matrix exponential function and we define \(\delta_{t}=t-t_{i}\).
Assuming that all eigenvalues of \(A\) are unique, we can use its eigen-decomposition 1 \(A=Q\Lambda Q^{-1}\) to write Footnote 1: Recall that the eigen-decomposition of a diagonalizable matrix \(A\) is given by \(A=Q\Lambda Q^{-1}\), where \(\Lambda\) is a diagonal matrix containing the eigenvalues of \(A\) and the columns of \(Q\) are the corresponding eigenvectors (Searle and Khuri, 1982). \[\tilde{\mathbf{h}}^{n}(t)=Q\exp(\delta_{t}\Lambda)Q^{-1}\tilde{\mathbf{h}}_{i}^{n}. \tag{3}\] Since \(Q\) contains an eigen-basis of \(\mathbb{R}^{d_{h}}\) it can be viewed as a transition matrix, changing the basis of the latent space. While we could in principle learn \(Q\), we note that the basis of the latent space has no physical interpretation and we can without loss of generality choose it such that \(Q=I\). Next, if we make the assumption that \(A\) has real eigenvalues the resulting dynamics are given by \[\tilde{\mathbf{h}}^{n}(t)=\exp(-\delta_{t}\operatorname{diag}(\mathbf{\omega}_{i}^{n}))\tilde{\mathbf{h}}_{i}^{n} \tag{4}\] where \(\mathbf{\omega}_{i}^{n}>0\) is a parameter vector representing the negation of the eigenvalues. We arrive at an exponential decay in-between observations, a type of dynamics used with GRU-updates in existing works (Che et al., 2018; Cao et al., 2018). The positive restriction on \(\mathbf{\omega}_{i}^{n}\) ensures the stability of the dynamical system in the limit, which for the complete state means that \(\mathbf{h}^{n}(t_{j})\to\bar{\mathbf{h}}_{i}^{n}\) as \(t_{j}\to\infty\). On the other hand, if we instead allow \(A\) to have complex eigenvalues we can decompose \(A\) using the real Jordan form (Horn and Johnson, 2012). This makes \(\Lambda\) block-diagonal with \(2\times 2\) blocks \[C_{j}=\left[\begin{array}{cc}a_{j}&-b_{j}\\ b_{j}&a_{j}\end{array}\right] \tag{5}\] corresponding to complex conjugate eigenvalues \(a_{j}\pm b_{j}i\). If we compute the matrix exponential for this \(\Lambda\) we end up with a combination of exponential decay and periodic dynamics (Arrowsmith and Place, 1992). The solution gives dynamics that couple each pair of dimensions as \[\begin{split}\left[\tilde{\mathbf{h}}^{n}(t)\right]_{j:j+1}&=\exp(\delta_{t}a_{j})\times\\ &\left[\begin{array}{cc}\cos(b_{j}\delta_{t})&-\sin(b_{j}\delta_{t})\\ \sin(b_{j}\delta_{t})&\cos(b_{j}\delta_{t})\end{array}\right]\left[\tilde{\mathbf{h}}_{i}^{n}\right]_{j:j+1}.\end{split} \tag{6}\] We also parametrize these dynamics with \(\mathbf{\omega}_{i}^{n}=[-a_{1},-a_{3},\ldots,-a_{d_{h}-1},b_{1},b_{3},\ldots,b_{d_{h}-1}]^{\intercal}>0\). Note that when \(b_{j}\to 0\) these periodic dynamics reduce to the exponential decay in Eq. 4, but with the parameter for \(-a_{j}\) shared across pairs of dimensions. We consider both the exponential decay and the more general periodic dynamics as options for our model. The periodic dynamics can naturally be seen as advantageous for modeling periodic data, as we will explore empirically in section 4.4. ### Incorporating Observations from the Graph When node \(n\) is observed at time \(t_{i}\) the observation is incorporated into the latent state by a GRU cell, incurring an instantaneous jump in the latent dynamics to a new value \(\bar{\mathbf{h}}_{i}^{n}+\hat{\mathbf{h}}_{i}^{n}\).
Figure 1: **Left:** Example graph with four nodes and their latent states, here one-dimensional for illustration purposes. Node observations are indicated with \(\bigcirc\). **Right:** Schematic diagram of the GRU update and predictive model \(g\). Everything inside the shaded area happens instantaneously at time \(t_{i}\). The GRU cell outputs the new initial state \(\bar{\mathbf{h}}_{i}^{n}+\hat{\mathbf{h}}_{i}^{n}\), the static component \(\bar{\mathbf{h}}_{i}^{n}\) and the ODE parameters \(\mathbf{\omega}_{i}^{n}\). The parameters \(\mathbf{\omega}_{i}^{n}\) define the dynamics until the next observation. The prediction \(\hat{\mathbf{y}}_{i}^{n}\) here is based on information from all earlier time points.
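As a concrete reading of the inter-observation dynamics, the sketch below evaluates the exponential decay of Eq. (4) and the paired decay-plus-rotation of Eq. (6) in numpy. The function names and the separate `a_neg`/`b` argument layout are our own illustration choices, not the paper's code.

```python
import numpy as np

def evolve_exponential(h_tilde, omega, delta_t):
    # Eq. (4): elementwise exponential decay of the time-varying state component
    return np.exp(-delta_t * omega) * h_tilde

def evolve_periodic(h_tilde, a_neg, b, delta_t):
    """Eq. (6): paired decay + rotation. h_tilde has even length d_h;
    a_neg = -a_j > 0 and b = b_j > 0 hold one value per pair of dimensions."""
    h = h_tilde.reshape(-1, 2)                    # (d_h/2, 2) dimension pairs
    decay = np.exp(-delta_t * a_neg)[:, None]     # exp(delta_t * a_j) with a_j = -a_neg
    cos, sin = np.cos(b * delta_t), np.sin(b * delta_t)
    rotated = np.stack([cos * h[:, 0] - sin * h[:, 1],
                        sin * h[:, 0] + cos * h[:, 1]], axis=1)
    return (decay * rotated).reshape(-1)

# The full latent state between observations is then h(t) = h_bar + evolved h_tilde.
```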
Inspired by the continuous-time LSTM of Mei and Eisner (2017), we extend the GRU cell to output also the parameters \(\mathbf{\omega}_{i}^{n}\), which define the dynamics of \(\tilde{\mathbf{h}}^{n}(t)\) for the next time interval. So far we have considered each node of the graph as a separate entity, but in a graph-based system future observations of a node can depend on the history of the entire graph. To capture this we let the state of each node depend on observations and states in its graph neighborhood. This is achieved by introducing GNNs (Gilmer et al., 2017; Wu et al., 2020) in the GRU update (Zhao et al., 2018). We replace matrix multiplications with GNNs, taking inputs both from the node \(n\) itself and from its neighborhood \(\mathcal{N}(n)=\{m|(m,n)\in E\}\). The type of GNN we use is a simple version of a message passing neural network (Gilmer et al., 2017), defined as \[\begin{split}\mathrm{GNN}&\Big{(}\mathbf{h}^{n},\{\mathbf{h}^{m}\}_{m\in\mathcal{N}(n)}\Big{)}\\ &=W_{1}\mathbf{h}^{n}+\frac{1}{|\mathcal{N}(n)|}\sum_{m\in\mathcal{N}(n)}\!\!\!e_{m,n}W_{2}\mathbf{h}^{m}\end{split} \tag{7}\] where the matrices \(W_{1},W_{2}\) are learnable parameters shared among all nodes and \(e_{m,n}\) is an edge weight associated with the edge \((m,n)\). The use of edge weights allows for incorporating prior information about the strength of connections in the graph. We can additionally stack multiple such GNN layers, append fully-connected layers and include non-linear activation functions in between. The inclusion of multiple GNN layers makes the GRU update dependent on a larger graph neighborhood. In a standard GRU cell the input and previous state are first mapped to three new representations using matrices \(U\) and \(W\) (Cho et al., 2014). These representations are then used to compute the state update. In our GNN-based GRU update the matrices are replaced by GNNs and we require seven such intermediate representations to update \(\mathbf{\hat{h}}_{i}^{n}\), \(\mathbf{\bar{h}}_{i}^{n}\) and \(\mathbf{\omega}_{i}^{n}\), as shown below. The node states are combined as \[[\mathbf{u}_{i,1}^{n},\dots,\mathbf{u}_{i,7}^{n}]^{\intercal}=\mathrm{GNN}^{U}\Big{(}\mathbf{h}^{n}(t_{i}),\{\mathbf{h}^{m}(t_{i})\}_{m\in\mathcal{N}(n)}\Big{)} \tag{8}\] where the resulting vector is split into seven equally sized chunks. Another GNN is then used for the combined observations and input features \(\mathbf{\tilde{x}}_{i}^{n}=[\mathbf{y}_{i}^{n},\mathbf{x}_{i}^{n}]^{\intercal}\), \[[\mathbf{v}_{i,1}^{n},\dots,\mathbf{v}_{i,7}^{n}]^{\intercal}=\mathrm{GNN}^{W}\Big{(}\mathbf{\tilde{x}}_{i}^{n},\{\mathbf{\tilde{x}}_{i}^{m}\}_{m\in\mathcal{N}(n)}\Big{)}. \tag{9}\] Note that while all nodes might not be observed at time \(t_{i}\), \(\mathbf{\tilde{x}}_{i}^{m}=\mathbf{0}\) for any unobserved \(m\), so such nodes do not contribute to the sum over neighbors in the GNN.
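A minimal PyTorch sketch of the message passing layer in Eq. (7) is given below, assuming equal input and output dimensions and an edge list in a (source, target) layout; the function name and tensor layout are ours, not the paper's released implementation.

```python
import torch

def gnn_layer(h, edge_index, edge_weight, W1, W2):
    """Message passing layer of Eq. (7):
    out_n = W1 h_n + (1/|N(n)|) * sum_{m in N(n)} e_{m,n} W2 h_m.
    h: (num_nodes, d); edge_index: (2, num_edges) with rows (source m, target n);
    W1, W2: (d, d) weight matrices (square for simplicity)."""
    src, dst = edge_index
    msg = edge_weight.unsqueeze(-1) * (h[src] @ W2.T)        # e_{m,n} W2 h_m per edge
    agg = torch.zeros_like(h).index_add_(0, dst, msg)        # sum over in-neighbors
    deg = torch.zeros(h.size(0), device=h.device).index_add_(
        0, dst, torch.ones_like(edge_weight)).clamp(min=1)   # |N(n)|, avoid div by 0
    return h @ W1.T + agg / deg.unsqueeze(-1)
```

Stacking two such layers with a nonlinearity in between makes the update depend on the 2-hop neighborhood, matching the remark above about multiple GNN layers.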
With the combined information from the graph neighborhood the full GRU update is computed as \[\mathbf{r}_{i}^{n} =\sigma(\mathbf{v}_{i,1}^{n}+\mathbf{u}_{i,1}^{n}+\mathbf{b}_{1}) \tag{10a}\] \[\mathbf{z}_{i}^{n} =\sigma(\mathbf{v}_{i,2}^{n}+\mathbf{u}_{i,2}^{n}+\mathbf{b}_{2})\] (10b) \[\mathbf{q}_{i}^{n} =\tanh(\mathbf{v}_{i,3}^{n}+(\mathbf{r}_{i}^{n}\odot\mathbf{u}_{i,3}^{n})+\mathbf{b}_{3})\] (10c) \[\mathbf{\bar{h}}_{i}^{n}+\mathbf{\hat{h}}_{i}^{n} =(\mathbf{1}-\mathbf{z}_{i}^{n})\odot\mathbf{h}^{n}(t_{i})+\mathbf{z}_{i}^{n}\odot\mathbf{q}_{i}^{n}\] (10d) \[\mathbf{\bar{r}}_{i}^{n} =\sigma(\mathbf{v}_{i,4}^{n}+\mathbf{u}_{i,4}^{n}+\mathbf{b}_{4})\] (11a) \[\mathbf{\bar{z}}_{i}^{n} =\sigma(\mathbf{v}_{i,5}^{n}+\mathbf{u}_{i,5}^{n}+\mathbf{b}_{5})\] (11b) \[\mathbf{\bar{q}}_{i}^{n} =\tanh(\mathbf{v}_{i,6}^{n}+(\mathbf{\bar{r}}_{i}^{n}\odot\mathbf{u}_{i,6}^{n})+\mathbf{b}_{6})\] (11c) \[\mathbf{\bar{h}}_{i}^{n} =(\mathbf{1}-\mathbf{\bar{z}}_{i}^{n})\odot\mathbf{\bar{h}}_{i-1}^{n}+\mathbf{\bar{z}}_{i}^{n}\odot\mathbf{\bar{q}}_{i}^{n}\] (11d) \[\mathbf{\omega}_{i}^{n} =\log(\mathbf{1}+\exp(\mathbf{v}_{i,7}^{n}+\mathbf{u}_{i,7}^{n}+\mathbf{b}_{7})) \tag{12}\] where \(\sigma\) is the sigmoid function and \(\mathbf{b}_{1}\)–\(\mathbf{b}_{7}\) are learnable bias parameters. In Eq. 11d we let \(\mathbf{\bar{h}}_{k}^{n}=\mathbf{\bar{h}}_{k-1}^{n}\) if \(n\notin\mathcal{O}_{k}\). Eq. 10a–10d correspond to one GRU update, using the decayed state \(\mathbf{h}^{n}(t_{i})\). Eq. 11a–11d define a separate GRU update, but for the decay target \(\mathbf{\bar{h}}_{i}^{n}\). Note that we get \(\mathbf{\hat{h}}_{i}^{n}\), the initial value of \(\mathbf{\tilde{h}}^{n}(t)\), implicitly from the difference between Eq. 10d and Eq. 11d. Finally Eq. 12 computes the parameters \(\mathbf{\omega}_{i}^{n}\) defining the dynamics of \(\mathbf{\tilde{h}}^{n}(t)\) up until the next observation of node \(n\). The parameters of the GRU cell are shared for all nodes in the graph. To capture any node-specific properties we parametrize initial states \(\mathbf{h}^{n}(0)\) separately for all nodes and learn these jointly with the rest of the model. ### Predictions The time-continuous dynamics ensure that there is a well-defined latent state \(\mathbf{h}^{n}(t)\) in each node at each time point \(t\). The value of the time series can then be predicted at any time by applying a mapping \(g\colon\mathbb{R}^{d_{h}}\to\mathbb{R}^{d_{y}}\) from this latent state to the prediction \(\mathbf{\hat{y}}_{j}^{n}\). The addition of GNNs into the GRU update makes the latent state dynamics of each node dependent on historical observations in its neighborhood. However, since the GRU updates happen only when a node is observed, information from observed neighbors might not be incorporated immediately in the latent state. Consider three consecutive time points \(t_{i}<t_{i+1}<t_{i+2}\) s.t. \(n\in\mathcal{O}_{i}\), \(n\notin\mathcal{O}_{i+1}\). Then any observation \(\mathbf{y}_{i+1}^{m}\) for \(m\in\mathcal{O}_{i+1}\cap\mathcal{N}(n)\) will not be taken into account by the model for the prediction \(\mathbf{\hat{y}}_{i+2}^{n}\), as that prediction is based on a latent state with only information from time \(t_{i}\). To remedy this we choose also the predictive model \(g\) to contain one or more GNN layers, \[\mathbf{\hat{y}}_{j}^{n}=\mathrm{GNN}^{g}\Big{(}[\mathbf{h}^{n}(t_{j}),\mathbf{x}_{j}^{n}]^{\intercal},\big{\{}[\mathbf{h}^{m}(t_{j}),\mathbf{x}_{j}^{m}]^{\intercal}\big{\}}_{m\in\mathcal{N}(n)}\Big{)}.
\tag{13}\] This way \(g\) takes the latent states and input features of the whole neighborhood into account for prediction. We name the full proposed model Temporal Graph Neural Network for Irregular data (**TGNN4I**). ### Loss Function In order to make predictions for arbitrary future time points we introduce a suitable loss function based on the time-continuous nature of the model. Let \(\hat{\mathbf{y}}_{i\to j}^{m}\) be the prediction for node \(m\) at time \(t_{j}\), based on observations of all nodes at times \(t\leq t_{i}\). Define also a time-continuous weighting function \(w\colon\mathbb{R}^{+}\to\mathbb{R}^{+}\) and the set \(\tau_{m,i}=\{j:m\in\mathcal{O}_{j}\wedge i<j\}\) containing the indices of all times after \(t_{i}\) where node \(m\) is observed. We do not include predictions from the first \(N_{\text{init}}\) time steps in the loss, treating this as a short warm-up phase. The loss function for one graph-structured time series is then \[\mathcal{L}_{\ell}=\frac{1}{N_{\text{obs}}}\sum_{m\in V}\sum_{i=N_{\text{init }}+1}^{N_{t}}\sum_{j\in\tau_{m,i}}\frac{\ell\big{(}\hat{\mathbf{y}}_{i\to j}^{m}, \mathbf{y}_{j}^{m}\big{)}w(t_{j}-t_{i})}{j-N_{\text{init}}-1} \tag{14}\] where \(\ell\) is any loss function for a single observation and \(N_{\text{obs}}=\sum_{i=N_{\text{init}}+2}^{N_{t}}\lvert\mathcal{O}_{i}\rvert\) the total number of node observations. We use \(\mathcal{L}_{\text{MSE}}\) with Mean Squared Error (MSE) as \(\ell\), but the framework is fully compatible with other loss functions as well. This includes general probabilistic predictions with a negative log-likelihood loss. Dividing by \(j-N_{\text{init}}-1\), the number of times observation \(j\) has been predicted, guarantees that later observations are not given a higher total weight. The weighting function \(w\) allows for specifying which time-horizons that should be prioritized by the model. This choice is highly application-dependent and should capture which predictions that are of interest when later deploying the model in some real-world setting. If we care about all time horizons, but want to prioritize predictions close in time, a suitable choice could be \(w(\Delta_{t})=\exp\bigl{(}-\frac{\Delta_{t}}{\Omega}\bigr{)}\). It might also be desirable to focus predictive capabilities on a specific \(\Delta_{t}\). If we want predictions around \(\Delta_{t}=\mu\) to be prioritized we can for instance use a Gaussian kernel as \[w(\Delta_{t})=\exp\biggl{(}-\Bigl{(}\tfrac{\Delta_{t}-\mu}{\Omega}\Bigr{)}^{2 }\biggr{)}. \tag{15}\] A limitation of the proposed loss function is the quadratic scaling in the number of time steps, as predictions are made from all times to all future observations. This especially requires large amounts of memory for nodes that are observed at many time steps. However, for many sensible choices of \(w\) predictions far into the future have a close to \(0\) impact on the loss. In practice we can utilize this to approximate \(\mathcal{L}_{\ell}\) by only making predictions \(N_{\text{max}}\) time steps into the future. This approximation explicitly corresponds to setting \(\tau_{m,i}=\{j:m\in\mathcal{O}_{j}\wedge i<j\leq i+N_{\text{max}}\}\) and changing the denominator in Eq. 14 to \(\min(N_{\text{max}},j-N_{\text{init}}-1)\). Alternatively, we can select a weight function with finite support which implies that many terms in Eq. 14 will be exactly zero. This does however require more bookkeeping than the aforementioned truncation method. 
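The truncated loss can be sketched as follows, assuming a hypothetical nested layout where `preds[i][j]` stores \(\hat{\mathbf{y}}_{i\to j}\) for all nodes; this is an illustration of Eq. (14) with the \(N_{\text{max}}\) approximation, not the authors' released code.

```python
import numpy as np

def forecast_loss(preds, targets, obs_mask, times, w, n_init=5, n_max=10):
    """Truncated Eq. (14) with squared-error ell. preds[i][j]: (num_nodes, d_y)
    predictions made from time index i for time t_j; targets[j]: (num_nodes, d_y)
    observed values; obs_mask[j]: (num_nodes,) 0/1 observation indicator."""
    n_t = len(times)
    n_obs = obs_mask[n_init + 2:].sum()                 # total node observations
    total = 0.0
    for i in range(n_init + 1, n_t):
        for j in range(i + 1, min(i + 1 + n_max, n_t)):
            denom = min(n_max, j - n_init - 1)          # times obs. j is predicted
            err = ((preds[i][j] - targets[j]) ** 2).sum(-1)   # per-node squared error
            total += (obs_mask[j] * err).sum() * w(times[j] - times[i]) / denom
    return total / n_obs
```

With, e.g., `w = lambda dt: np.exp(-dt / 0.04)`, this matches the exponential weighting the paper later adopts for evaluation.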
## 4 Experiments The TGNN4I model was implemented 2 using PyTorch and PyG (Fey and Lenssen, 2019). We evaluate the model on a number of different datasets. See appendix D and E for details on the pre-processing and experimental setups used. As the loss function \(\mathcal{L}_{\text{MSE}}\) captures errors throughout an entire time series we adopt this also as our evaluation metric. Given that the loss weighting \(w\) used for training accurately represents how we value predictions at different time horizons, it is natural to use the same choice for evaluation. In our experiments we rescale each time series so that \(t\in[0,1]\) and use \(w(\Delta_{t})=\exp\bigl{(}-\tfrac{\Delta_{t}}{0.04}\bigr{)}\). By inspecting \(w\) and the time steps in the data we also choose a suitable \(N_{\text{max}}=10\). Footnote 2: Our code and datasets are available at [https://github.com/joeloskarsson/tgnn4i](https://github.com/joeloskarsson/tgnn4i). We consider three versions of our TGNN4I model: (**static**) with a constant latent state in-between observations (\(\tilde{\mathbf{h}}^{n}(t)=\tilde{\mathbf{h}}_{i}^{n}\ \forall t\in(t_{i},t_{j})\)), (**exponential**) with the exponential decay dynamics from Eq. 4 and (**periodic**) with the combined decay and periodic dynamics from Eq. 6. In all our experiments the training time of a single model on an NVIDIA A100 GPU is less than an hour and for the smallest dataset (METR-LA) not more than 20 minutes. ### Baselines We compare TGNN4I to multiple baseline models. As a simple starting point we consider a model that always predicts the last observed value in each node for all future time points (**Predict Previous**). Che et al. (2018) propose the GRU-D model for irregular time series, which we extend with our parametrization of the exponential decay and include as a baseline. GRU-D does not use the graph structure explicitly, so there are two ways to adapt this model to our setting. We can view the entire graph-structured time series as one series with \((\lvert V\rvert d_{y})\)-dimensional vectors at each time step (**GRU-D (joint)**). Alternatively, we can view the time series in each node as independent (**GRU-D (node)**), which is essentially the same as TGNN4I with all edges in the graph removed. Two **Transformer** baselines are also included, used in the same (joint) and (node) configurations. In these models the irregular observations are handled through attention masks and the use of timestamps in the sinusoidal positional encodings. We also compare against the **LG-ODE** model of Huang et al. (2020) using the code provided by the authors. We follow their proposed training procedure, where the model encodes the first half of each time series and has to predict the second half. When computing \(\mathcal{L}_{\text{MSE}}\) using LG-ODE we encode all observations up to \(t_{i}\) and decode from that time point in order to get each \(\hat{\mathbf{y}}_{i\to j}^{m}\). More details on the baseline models are given in appendix C. An attempt was also made to adapt the RAINDROP model of Zhang et al. (2022) to our forecasting setting. We were however unable to get useful predictions without making major changes to the model and it is therefore not included here. ### Traffic Data We experiment on the PEMS-BAY and METR-LA datasets, containing traffic speed sensor data from the California highway system (Li et al., 2018; Chen et al., 2001).
To be able to control the degree of irregularity, we start from regularly sampled data and choose subsets of observations. We use the versions of the datasets pre-processed by Li et al. (2018). Each dataset is split up into time series of 288 observations (1 day). PEMS-BAY contains 180 such time series with 325 nodes and METR-LA 119 time series with 207 nodes. We include the time of day and the time since the node was last observed as input features \(\mathbf{x}_{i}^{n}\). In order to introduce irregularity in the time steps we next subsample each time series by keeping only 72 of the 288 observations. These \(N_{t}=72\) observation times are the same for all nodes. However, from these subsampled time series we furthermore sample subsets containing 25%-100% of all \(N_{t}\times|V|\) individual node observations. This results in irregular observation time points and a fraction of nodes observed at each time. Our additional pre-processing prevents us from a direct comparison with Li et al. (2018), as their method does not handle irregular observations. We report results for both datasets in Table 1 and highlight the best performing models on METR-LA in Figure 2. GRU-D (joint) has a hard time modeling all nodes jointly, often not performing better than the simple Predict Previous baseline. The Transformer models achieve somewhat better results, but are still not competitive with TGNN4I. We additionally note that the Transformers can be highly sensitive to the random seed used for initialization, something that we have not observed for other models. Comparing TGNN4I and GRU-D (node) in Figure 2 it can be noted that the importance of using the graph structure increases when there are fewer observations. Out of the different versions of TGNN4I the exponential and periodic dynamics show a clear advantage over the static one, with the largest difference for the most sparsely observed data. We have observed that the periodic models output only low frequencies, resulting in dynamics and results similar to the model with exponential decay. While the periodic dynamics in Eq. 6 have fewer degrees of freedom when reduced to pure exponential decay, this does not seem to hurt the performance in this example. We found that the training time of LG-ODE scales poorly to large graphs, limiting us to only training a single model for each dataset. The predictions are however quite poor, especially on the PEMS-BAY data. While the model seems to learn something more than just predicting the mean, it is not competitive with our TGNN4I model. We believe that the poor performance of LG-ODE can be explained by a combination of multiple things: (1) The LG-ODE model is primarily designed for data with clear continuous underlying dynamics, which might not match this type of traffic data. (2) When the model is trained as proposed by Huang et al. (2020), it can require large amounts of data. For some of the experiments in the original paper 20 000 sequences are used for training, while we use less than 150. (3) The slow training has limited possibilities for exhaustive hyperparameter tuning on our datasets. Training one LG-ODE model on the PEMS-BAY data takes us over 50 hours. ### USHCN Climate Data Irregular and missing observations are common problems in climate data (Schneider, 2001). The United States Historical Climatology Network (USHCN) daily dataset contains over 50 years of measurements of multiple climate variables from sensor stations in the United States (Menne et al., 2015).
We use the pre-processing of De Brouwer et al. (2019) to clean and subsample the data. The target variables chosen are minimum and maximum daily temperature (\(T_{\rm min}\) and \(T_{\rm max}\)), which we model as separate datasets. While existing works (De Brouwer et al., 2019; Schirmer et al., 2022) have treated time series from different sensor stations as independent, we model also the spatial correlation by constructing a 10-nearest-neighbor graph using the sensor positions. Each full dataset contains 186 time series of length \(N_{t}=100\) on a graph of 1123 nodes. The pre-processed USHCN data is sparsely observed with only around 5% of potential node observations present. We report results on both datasets in Table 2. Due to the large size of the graph it was not feasible to apply the LG-ODE model here. We note that for these datasets the (joint) baselines clearly outperform the (node) versions. For GRU-D this is the opposite of what we saw in the traffic data. This can be explained by the fact that climate data has strong spatial dependencies. The (joint) models can to some extent learn to pick up on these, while for (node) no information can flow between nodes. The best results are however achieved by TGNN4I, showing the added benefit of utilizing the spatial graph.
Figure 2: Test \(\mathcal{L}_{\rm MSE}\) on METR-LA traffic data for the best performing models from Table 1. Shaded areas correspond to 95% confidence intervals based on re-training models with 5 random seeds.
### Synthetic Periodic Data In the previous experiments, using periodic dynamics with TGNN4I has not added any value. Instead, the learned dynamics have been largely similar to just using exponential decay. This should to some extent be expected, as none of the previous datasets show any clear periodic patterns at the considered time scales. To investigate the possible benefits of the periodic dynamics we instead create a synthetic dataset with periodic signals propagating over a graph. The synthetic dataset is based on a randomly sampled directed acyclic graph with 20 nodes. We define a periodic base signal \[\rho^{n}(t)=\sin(\phi^{n}t+\eta^{n}) \tag{16}\] with random parameters \(\phi^{n}\) and \(\eta^{n}\) for each node \(n\). The target signal \(y^{n}(t)\) in each node is then defined through \[\kappa^{n}(t) =\rho^{n}(t)+\frac{0.5}{|\mathcal{N}(n)|}\sum_{m\in\mathcal{N}(n)}\kappa^{m}(t-0.05) \tag{17a}\] \[y^{n}(t) =\kappa^{n}(t)+\epsilon^{n}(t) \tag{17b}\] where \(\epsilon^{n}(t)\) is Gaussian white noise with standard deviation 0.01. The target signal in each node depends on the base signal in the node itself and the signals in neighboring nodes at a time lag of 0.05. To construct one time series we sample \(y^{n}(t)\) at 70 irregular time points on \([0,1]\). In total we sample 200 such time series and keep 50% of the node observations in each. We train versions of the GRU-D (node) and TGNN4I models with different latent dynamics on the synthetic data.
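Under these definitions the synthetic target can be generated with a small recursive sketch like the one below; this is our own construction for illustration, with the DAG represented as a hypothetical parent-list dictionary, not the paper's data-generation script.

```python
import numpy as np

def kappa(t, node, parents, phi, eta):
    """Eqs. (16)-(17a): kappa^n(t) = rho^n(t) + 0.5 * mean of parents'
    kappa at lag 0.05. On a DAG the recursion bottoms out at source nodes."""
    rho = np.sin(phi[node] * t + eta[node])        # base signal, Eq. (16)
    pa = parents.get(node, [])
    if not pa:
        return rho
    lagged = [kappa(t - 0.05, m, parents, phi, eta) for m in pa]
    return rho + 0.5 * np.mean(lagged)             # Eq. (17a)

def sample_target(t, node, parents, phi, eta, rng):
    # Eq. (17b): add Gaussian white noise with standard deviation 0.01
    return kappa(t, node, parents, phi, eta) + rng.normal(scale=0.01)
```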
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**PEMS-BAY**} \\ \cline{2-5} **Model** & 25\% & 50\% & 75\% & 100\% \\ \hline Predict Previous & 26.32 & 18.60 & 15.25 & 13.50 \\ GRU-D (joint) & 18.79\(\pm\)0.07 & 18.27\(\pm\)0.10 & 17.93\(\pm\)0.08 & 17.75\(\pm\)0.12 \\ GRU-D (node) & 8.79\(\pm\)0.06 & 6.62\(\pm\)0.02 & 5.82\(\pm\)0.06 & 5.49\(\pm\)0.06 \\ Transformer (joint) & 12.05\(\pm\)1.19 & 13.13\(\pm\)2.59 & 12.21\(\pm\)1.95 & 11.09\(\pm\)1.38 \\ Transformer (node) & 16.49\(\pm\)0.17 & 14.44\(\pm\)0.48 & 13.20\(\pm\)0.56 & 13.16\(\pm\)1.23 \\ LG-ODE & 27.00 & 24.93 & 24.71 & 23.52 \\ TGNN4I (static) & 7.41\(\pm\)0.09 & 5.98\(\pm\)0.07 & 5.29\(\pm\)0.08 & 4.89\(\pm\)0.05 \\ TGNN4I (exponential) & **7.10\(\pm\)0.07** & **5.78\(\pm\)0.05** & 5.23\(\pm\)0.03 & **4.87\(\pm\)0.09** \\ TGNN4I (periodic) & **7.10\(\pm\)0.09** & 5.80\(\pm\)0.08 & **5.22\(\pm\)0.09** & **4.87\(\pm\)0.02** \\ \hline \hline \multicolumn{5}{c}{**METR-LA**} \\ \cline{2-5} **Model** & 25\% & 50\% & 75\% & 100\% \\ \hline Predict Previous & 9.86 & 7.54 & 6.52 & 6.04 \\ GRU-D (joint) & 8.38\(\pm\)0.05 & 8.03\(\pm\)0.04 & 7.89\(\pm\)0.03 & 7.80\(\pm\)0.02 \\ GRU-D (node) & 4.36\(\pm\)0.08 & 3.62\(\pm\)0.07 & 3.28\(\pm\)0.08 & 3.16\(\pm\)0.04 \\ Transformer (joint) & 5.70\(\pm\)1.41 & 7.17\(\pm\)1.66 & 5.95\(\pm\)1.90 & 6.11\(\pm\)1.80 \\ Transformer (node) & 7.01\(\pm\)0.31 & 6.34\(\pm\)0.24 & 5.84\(\pm\)0.23 & 5.96\(\pm\)0.50 \\ LG-ODE & 8.51 & 7.35 & 6.71 & 6.24 \\ TGNN4I (static) & 3.86\(\pm\)0.02 & 3.31\(\pm\)0.02 & 3.03\(\pm\)0.02 & 2.88\(\pm\)0.02 \\ TGNN4I (exponential) & **3.68\(\pm\)0.05** & **3.18\(\pm\)0.03** & **2.97\(\pm\)0.03** & **2.86\(\pm\)0.04** \\ TGNN4I (periodic) & 3.69\(\pm\)0.02 & 3.19\(\pm\)0.04 & 3.01\(\pm\)0.05 & 2.88\(\pm\)0.03 \\ \hline \hline \end{tabular} \end{table} Table 1: Test \(\mathcal{L}_{\text{MSE}}\) (multiplied by \(10^{2}\)) for the traffic datasets with different fractions of node observations. Where applicable we report mean \(\pm\) one standard deviation across 5 runs with different random seeds. The lowest mean \(\mathcal{L}_{\text{MSE}}\) for each dataset and observation percentage is marked in bold. \begin{table} \begin{tabular}{l c c} \hline \hline & \(T_{\min}\) & \(T_{\max}\) \\ \hline Predict Previous & 16.88 & 17.18 \\ GRU-D (joint) & 8.03\(\pm\)0.23 & 7.97\(\pm\)0.19 \\ GRU-D (node) & 13.12\(\pm\)0.03 & 13.67\(\pm\)0.04 \\ Transformer (joint) & 7.36\(\pm\)0.41 & 7.37\(\pm\)0.28 \\ Transformer (node) & 15.68\(\pm\)0.32 & 15.74\(\pm\)0.34 \\ TGNN4I (static) & 6.97\(\pm\)0.05 & 6.86\(\pm\)0.04 \\ TGNN4I (exponential) & **6.72\(\pm\)0.04** & **6.60\(\pm\)0.04** \\ TGNN4I (periodic) & **6.72\(\pm\)0.05** & 6.63\(\pm\)0.03 \\ \hline \hline \end{tabular} \end{table} Table 2: Test \(\mathcal{L}_{\text{MSE}}\) (multiplied by \(10^{2}\)) for the two USHCN climate datasets. Also our other baselines are included for comparison. Results are reported in Table 3. For both GRU-D (node) and TGNN4I we see a large difference between the different types of latent dynamics. The periodic dynamics seem to help the model to keep track of the base signal in the node and its neighborhood in order to achieve accurate future predictions. While this is a synthetic example, periodic behavior is prevalent in much time series data and being able to explicitly model this in the latent state can be highly advantageous. 
Attempts were made to also train the GRU-D (joint) model on this dataset, but it failed to pick up on any patterns and ended up only predicting a constant value for all nodes and times. ### Loss Weighting To investigate the impact of the loss weighting function \(w\) we trained four TGNN4I models on the subsampled PEMS-BAY dataset with 25% observations. We used exponential dynamics and considered the weighting functions \[w_{1}(\Delta_{t}) =1 \tag{18a}\] \[w_{2}(\Delta_{t}) =\exp\!\left(-\frac{\Delta_{t}}{0.04}\right)\] (18b) \[w_{3}(\Delta_{t}) =\exp\!\left(-\!\left(\frac{\Delta_{t}-0.1}{0.02}\right)^{2}\right)\] (18c) \[w_{4}(\Delta_{t}) =\mathbb{I}_{\{\Delta_{t}\in[0.18,0.22]\}}. \tag{18d}\] Figure 3 shows the test MSE for the trained models at different \(\Delta_{t}\) in the future, as well as plots of the weighting functions. As an example, the prediction \(\hat{\mathbf{y}}_{i\to j}^{m}\) has \(\Delta_{t}=t_{j}-t_{i}\). We note that the choice of \(w\) can have substantial impact on the error of the model at different time horizons. The exponential weighting in \(w_{2}\) makes the model focus heavily on short-term predictions. This results in better predictions for low \(\Delta_{t}\). At the shortest time horizon the exponential weighting yields an 11% improvement over the model trained with constant \(w_{1}\), but this comes at the cost of far higher errors for long-term predictions. Interestingly the \(w_{1}\) model gives better predictions at all time horizons than the models with \(w_{3}\) and \(w_{4}\), which focus on predictions at some specific time ahead. We believe that there can be a feedback effect benefiting the constant weighting, where learning to make good short-term predictions also aids the learning of long-term prediction, for example by finding useful intermediate representations. A drawback of weighting with \(w_{1}\) is however that since the loss never approaches 0 there is no properly motivated choice of \(N_{\text{max}}\) for our loss approximation. As the model trained with \(w_{4}\) still gives good predictions in the interval \([0.18,0.22]\), there can still be practical reasons to choose such a weighting. With this choice we could reduce the innermost sum in Eq. 14 to only those \(j\):s that lie in the interval of interest. ## 5 Discussion We have proposed a temporal GNN model that can handle both irregular time steps and partially observed graphs. By defining latent states in continuous time our model can make predictions for arbitrary time points in the future. In this section we discuss some details and limitations of the approach, and also give some pointers to interesting directions for future work. ### Efficient Implementation In order to efficiently implement the training and inference of TGNN4I there is a key design choice between (1) storing everything in dense matrices and utilizing massively parallel GPU-computations, and (2) utilizing sparse representations in order to avoid computing values that will never be used.
Figure 3: MSE (Top) for predictions at time \(\Delta_{t}\) in the future for models trained with different loss weighting \(w\) (Bottom). The weighting functions are described in Eq. 18a–18d. To compute the MSE all predictions in the test set were binned based on their \(\Delta_{t}\), with bin width 0.02. The bottom subplot also shows a histogram of the number of predictions in each bin.
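For reference, the four weighting functions of Eq. 18a–18d above can be written as vectorized Python one-liners, e.g. for plugging into a loss routine such as the earlier sketch (the names `w1`–`w4` mirror the equation labels):

```python
import numpy as np

w1 = lambda dt: np.ones_like(dt)                              # constant, Eq. (18a)
w2 = lambda dt: np.exp(-dt / 0.04)                            # exponential, Eq. (18b)
w3 = lambda dt: np.exp(-((dt - 0.1) / 0.02) ** 2)             # Gaussian kernel, Eq. (18c)
w4 = lambda dt: ((dt >= 0.18) & (dt <= 0.22)).astype(float)   # indicator, Eq. (18d)
```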
\begin{table} \begin{tabular}{l c c c} \hline \hline & **Static** & **Exponential** & **Periodic** \\ \hline GRU-D (node) & 8.88\(\pm\)0.36 & 3.13\(\pm\)0.06 & 2.81\(\pm\)0.04 \\ TGNN4I & 15.12\(\pm\)0.05 & 2.91\(\pm\)0.17 & **1.95\(\pm\)0.11** \\ \hline Predict Prev. & 27.52 & \\ Transformer (joint) & 23.19\(\pm\)0.38 & \\ Transformer (node) & 15.39\(\pm\)0.05 & \\ LG-ODE & 16.61\(\pm\)0.23 & \\ \hline \hline \end{tabular} \end{table} Table 3: Test \(\mathcal{L}_{\text{MSE}}\) (multiplied by \(10^{2}\)) for synthetic data. Which of these is to be preferred depends on the sparsity of observations in the data. Our implementation follows the massively parallel approach, using binary masks for keeping track of \(\mathcal{O}_{i}\) at each time point. In order to scale TGNN4I to massive graphs in real world scenarios it could be interesting to consider a version of the model distributed over multiple machines, perhaps directly connected to sensors producing the input data. A sparse implementation would be strongly preferred for such an extension. ### Linear and Neural ODEs The dynamics of TGNN4I are defined by a linear ODE and additionally restricted by the assumption of unique eigenvalues in \(A\). This has the benefit of a closed form solution that is efficient to compute, but also limits the types of dynamics that can be learned. Our approach can be contrasted with Neural ODEs (Chen et al., 2018), which allow for learning more expressive dynamics. Neural ODEs do however lack closed form solutions and require using numerical solvers (Kidger, 2021). This incurs a trade-off between speed and numerical accuracy. In experiments we have compared TGNN4I with the LG-ODE model (Huang et al., 2020), which uses a Neural ODE decoder. The slow training of the LG-ODE model can to a large extent be attributed to the numerical ODE solver. While more complex latent dynamics can be useful for some datasets, it can also be argued that simpler dynamics can be compensated for with a high enough latent dimension \(d_{h}\) and a flexible enough predictive model \(g\) (Schirmer et al., 2022). ### Societal and Sustainability Impact While our contributions are purely methodological, many applications of graph-based and spatio-temporal data analysis have a clear societal impact. Our example applications of traffic and climate modeling both have potential to aid efforts of transforming society in more sustainable directions, such as those described in the United Nations sustainable development goals 11 and 13 (Rolnick et al., 2022; United Nations, 2015). Traffic modeling allows us to both understand travel behavior and predict future demand. This can enable optimizations of transport systems, both improving the experience of travelers and reducing the environmental impact. Integrating machine learning methods with climate modeling has potential to speed up simulations and increase our understanding of the climate around us. However, it is surely also possible to find applications of our method with a damaging impact on society, for example through undesired mass-surveillance. ### Future Work We consider a setting where the graph structure is both known and constant over time. In some practical applications it is not obvious how to construct the graph describing the system. To tackle this problem our model could be combined with approaches for also learning the graph structure (Stankovic et al., 2020; Zhang et al., 2022).
Extending our method to dynamic graphs, which evolve over time, would not require any major changes and could be an interesting direction for future experiments. Our focus has been on forecasting, but the model could also be trained for other tasks. The time-continuous latent state in each node could be used for imputing missing observations or performing sequence segmentation. Classification tasks are also possible, either classifying each node separately or the entire graph-structured time series. While our model can produce predictions at arbitrary time points, an extension would be to also predict the time until the next observation occurs. One way to achieve this would be to let the latent state parametrize the intensity of a point process (Mei and Eisner, 2017; Jia and Benson, 2019). Building such point process models on graphs could be an interesting future application of our model. ## Acknowledgments This research is financially supported by the Swedish Research Council via the project _Handling Uncertainty in Machine Learning Systems_ (contract number: 2020-04122), the Excellence Center at Linköping-Lund in Information Technology (ELLIIT), and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The computations were enabled by the Berzelius resource at the National Supercomputer Centre, provided by the Knut and Alice Wallenberg Foundation. All datasets available online were accessed from the Linköping University network.
2307.11942
DeepMartNet -- A Martingale based Deep Neural Network Learning Algorithm for Eigenvalue/BVP Problems and Optimal Stochastic Controls
In this paper, we propose a neural network learning algorithm for solving eigenvalue problems and boundary value problems (BVPs) for elliptic operators and initial BVPs (IBVPs) of quasi-linear parabolic equations in high dimensions as well as optimal stochastic controls. The method is based on the Martingale property in the stochastic representation for the eigenvalue/BVP/IBVP problems and martingale principle for optimal stochastic controls. A loss function based on the Martingale property can be used for efficient optimization by sampling the stochastic processes associated with the elliptic operators or value process for stochastic controls. The proposed algorithm can be used for eigenvalue problems and BVPs and IBVPs with Dirichlet, Neumann, and Robin boundaries in bounded or unbounded domains and some feedback stochastic control problems.
Wei Cai
2023-07-21T23:51:52Z
http://arxiv.org/abs/2307.11942v3
DeepMartNet - A Martingale based Deep Neural Network Learning Algorithm for Eigenvalue/BVP Problems and Optimal Stochastic Controls ###### Abstract In this paper, we propose a neural network learning algorithm for solving eigenvalue problems and boundary value problems (BVPs) for elliptic operators and initial BVPs (IBVPs) of quasi-linear parabolic equations in high dimensions as well as optimal stochastic controls. The method is based on the Martingale property in the stochastic representation for the eigenvalue/BVP/IBVP problems and martingale principle for optimal stochastic controls. A loss function based on the Martingale property can be used for efficient optimization by sampling the stochastic processes associated with the elliptic operators or value process for stochastic controls. The proposed algorithm can be used for eigenvalue problems and BVPs and IBVPs with Dirichlet, Neumann, and Robin boundaries in bounded or unbounded domains and some feedback stochastic control problems. **AMS subject classifications**: 35Q68, 65N99, 68T07, 76M99 ## 1 Introduction Computing eigenvalues and/or eigenfunctions of elliptic operators, solving boundary value problems of PDEs, and optimal stochastic control are among the key tasks for many scientific computing problems, e.g., ground state and band structure calculations in quantum systems, and financial engineering. Neural networks have recently been explored for those tasks. FermiNet [1] is one of the leading methods, using anti-symmetrized neural network wavefunctions in variational Monte Carlo calculations of eigenvalues. Recently, Han et al. [2] developed a diffusion Monte Carlo method using the connection between stochastic processes, solutions of elliptic equations, and the backward Kolmogorov equation to build a loss function for eigenvalue calculations. Based on the same connection, DeepBSDE has been designed to solve high-dimensional quasi-linear PDEs [10] and has also been used for stochastic controls [9]. In this paper, we use the Martingale problem formulation for the eigenvalue problems and stochastic controls, with a loss function based on the fact that the expectation of a Martingale is constant in time; this expectation can be approximated at any time locations by sampling the stochastic processes associated with the elliptic operator or the value processes for stochastic controls. ## 2 DeepMartNet - a Martingale based neural network First, we propose a neural network for computing the eigenvalue and eigenfunction of an elliptic operator in high dimensions, as arising in quantum mechanics. It will be apparent that the approach can be applied to solve boundary value problems of elliptic PDEs and initial boundary value problems of quasilinear parabolic PDEs.
Consider the following eigenvalue problem \[\mathcal{L}u+V(\mathbf{x})u =\lambda u,\ \ \mathbf{x}\in D\subset R^{d}, \tag{2.1}\] \[\Gamma(u) =0,\ \ \mathbf{x}\in\partial D,\] where the boundary operator could be one of the following three cases, \[\Gamma(u)=\left\{\begin{array}{cc}u&\text{Dirichlet}\\ \frac{\partial u}{\partial n}&\text{Neumann}\\ \frac{\partial u}{\partial n}-cu&\text{Robin}\end{array}\right.,\] a decay condition will be given at \(\infty\) if \(D=R^{d}\), and the differential operator \(\mathcal{L}\) is given as \[\mathcal{L}=\mu^{\top}\nabla+\frac{1}{2}Tr(\sigma\sigma^{\top}\nabla\nabla^{\top}) \tag{2.2}\] where the vector \(\mu\in R^{d}\) and the matrix \(\sigma_{d\times d}\) can be associated with the drift and diffusion of the following stochastic Ito process \(X_{t}(\omega)\in R^{d}\), \(\omega\in\Omega\) (random sample space), with \(\mathcal{L}\) as its generator, \[d\mathbf{X}_{t} =\mu dt+\sigma\cdot d\mathbf{B}_{t}, \tag{2.3}\] \[\mathbf{X}_{0} =\mathbf{x}_{0}\in D,\] where \(\mathbf{B}_{t}=(B_{t}^{1},\cdots,B_{t}^{d})^{\top}\in R^{d}\) is Brownian motion in \(R^{d}\). ### Dirichlet Eigenvalue Problem By the Ito formula, we have \[du(\mathbf{X}_{t})=\mathcal{L}u(\mathbf{X}_{t})dt+\sigma^{\top}\nabla u(\mathbf{X}_{t})d\mathbf{B}_{t}, \tag{2.4}\] i.e., \[u(\mathbf{X}_{t}) = u(\mathbf{x}_{0})+\int_{0}^{t}\mathcal{L}u(\mathbf{X}_{s})ds+\int_{0}^{t}\sigma^{\top}\nabla u(\mathbf{X}_{s})d\mathbf{B}_{s} \tag{2.5}\] \[= u(\mathbf{x}_{0})+\int_{0}^{t}(\lambda-V(\mathbf{X}_{s}))u(\mathbf{X}_{s})ds+\int_{0}^{t}\sigma(\mathbf{X}_{s})^{\top}\nabla u(\mathbf{X}_{s})d\mathbf{B}_{s}.\] As the last Ito integral term in (2.5) is a Martingale [6], the following defines a Martingale with respect to the \(\mathbf{B}_{t}\)-natural filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\), \[M_{t}=u(\mathbf{X}_{t})-u(\mathbf{x}_{0})-\int_{0}^{t}(\lambda-V(\mathbf{X}_{s}))u(\mathbf{X}_{s})ds, \tag{2.6}\] namely, for any \(s<t\), \[E[M_{t}|\mathcal{F}_{s}]=M_{s}, \tag{2.7}\] which implies for any measurable set \(A\in\mathcal{F}_{s}\), \[\int_{A}M_{t}P(d\omega)=\int_{A}E[M_{t}|\mathcal{F}_{s}]P(d\omega)=\int_{A}M_{s}P(d\omega) \tag{2.8}\] or \[\int_{A}(M_{t}-M_{s})\,P(d\omega)=0, \tag{2.9}\] i.e., \[\int_{\Omega}(M_{t}-M_{s})\,I_{A}(\omega)P(d\omega)=0 \tag{2.10}\] where \(I_{A}(\omega)\) is the indicator function of the set \(A\). In particular, if we take \(A=\Omega\in\mathcal{F}_{s}\) in (2.10), we have \[E[M_{t}-M_{s}]=0, \tag{2.11}\] i.e., the Martingale \(M_{t}\) has a constant expectation. In the case of a finite domain \(D\), let \(\tau_{\partial D}\) be the first exit time of the process \(\mathbf{X}_{t}\) from \(D\); as \(\tau_{\partial D}\) is a stopping time, the stopped process \(M_{t\wedge\tau_{\partial D}}\) is still a Martingale [6], thus \[E[M_{t\wedge\tau_{\partial D}}-M_{s\wedge\tau_{\partial D}}]=0. \tag{2.12}\] **Remark 2.1**.: We could define a different generator \(\mathcal{L}\) by not including \(\mu^{\top}\nabla\) in (2.2); then the Martingale in (2.6) will be replaced by \[M_{t}^{*}=u(\mathbf{X}_{t})-u(\mathbf{x}_{0})-\int_{0}^{t}(\lambda-\mu^{\top}(\mathbf{X}_{s})\nabla-V(\mathbf{X}_{s}))u(\mathbf{X}_{s})ds, \tag{2.13}\] where the process \(\mathbf{X}_{t}\) is given by \(d\mathbf{X}_{t}=\sigma\cdot d\mathbf{B}_{t}\) instead.
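Before describing the loss construction, it may help to see the path sampling step in code. The following is a minimal NumPy sketch of the Euler-Maruyama discretization of the Ito process (2.3); the drift and diffusion callables are illustrative placeholders, and no exit-time handling for a bounded domain \(D\) is included:

```python
import numpy as np

def sample_paths(mu, sigma, x0, T, N, M, rng=np.random.default_rng(0)):
    """Euler-Maruyama sampling of M paths of dX_t = mu(X) dt + sigma(X) dB_t
    (Eq. (2.3)) on [0, T] with N uniform time steps.  mu: R^d -> R^d,
    sigma: R^d -> R^{d x d}; returns an array of shape (M, N + 1, d)."""
    d = len(x0)
    dt = T / N
    X = np.empty((M, N + 1, d))
    X[:, 0] = x0
    for i in range(N):
        dB = rng.normal(0.0, np.sqrt(dt), size=(M, d))  # Brownian increments
        for m in range(M):
            X[m, i + 1] = X[m, i] + mu(X[m, i]) * dt + sigma(X[m, i]) @ dB[m]
    return X

# Example: 2-d standard Brownian motion started at the origin.
paths = sample_paths(mu=lambda x: np.zeros(2),
                     sigma=lambda x: np.eye(2),
                     x0=np.zeros(2), T=1.0, N=100, M=64)
```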
* DeepMartNet for eigenvalue \(\lambda\) Let \(u_{\theta}(\mathbf{x})\) be a neural network which will approximate the eigenfunction, with \(\theta\) denoting all the weight and bias parameters. For a given time interval \([0,T]\), we define a partition \[0=t_{0}<t_{1}<\cdots<t_{i}<t_{i+1}<\cdots<t_{N}=T,\] and \(M\) discrete realizations \[\Omega^{\prime}=\{\omega_{m}\}_{m=1}^{M}\subset\Omega \tag{2.14}\] of the Ito process using the Euler-Maruyama scheme with \(M\) realizations of the Brownian motion \(\mathbf{B}_{i}^{(m)}\), \(1\leq m\leq M\), \[\mathbf{X}_{i}^{(m)}(\omega_{m})\sim X(t_{i},\omega_{m}),\quad 0\leq i\leq N,\] where \[\mathbf{X}_{i+1}^{(m)} = \mathbf{X}_{i}^{(m)}+\mu(\mathbf{X}_{i}^{(m)})\Delta t_{i}+\sigma(\mathbf{X}_{i}^{(m)})\cdot\Delta\mathbf{B}_{i}^{(m)},\] \[\mathbf{X}_{0}^{(m)} = \mathbf{x}_{0},\] with \(\Delta t_{i}=t_{i+1}-t_{i}\) and \[\Delta\mathbf{B}_{i}^{(m)}=\mathbf{B}_{i+1}^{(m)}-\mathbf{B}_{i}^{(m)}.\] We will build the loss function \(l(\theta,\lambda)\) for the eigenfunction neural network \(u_{\theta}(\mathbf{x})\) and the eigenvalue \(\lambda\) using the Martingale property (2.9) and the \(M\) realizations of the Ito diffusion (2.3). For each \(t_{i}\), we randomly take a subset \(A_{i}\subset\Omega^{\prime}\) with uniform sampling (without replacement), corresponding to the mini-batch in computing the stochastic gradient for the empirical training loss; we should have \[\int_{A_{i}}\left(M_{t_{i+1}}-M_{t_{i}}\right)P(d\omega)=0, \tag{2.15}\] which gives an approximate identity for the eigenfunction \(u(\mathbf{X}_{t})\) and eigenvalue \(\lambda\) using the \(A_{i}\)-ensemble average, \[\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(u(\mathbf{X}_{i+1}^{(m)})-u(\mathbf{X}_{i}^{(m)})-(\lambda-V(\mathbf{X}_{i}^{(m)}))u(\mathbf{X}_{i}^{(m)})\Delta t_{i}\right)\doteq 0,\] with \(|A_{i}|\) being the number of samples in \(A_{i}\) (i.e., the size of the mini-batch) and \(\mathbf{X}_{i}^{(m)}=\mathbf{X}_{i}^{(m)}(t_{i},\omega_{m})\), \(\omega_{m}\in A_{i}\), suggesting a loss function, to be used for some epoch(s) of training with a given selection of \(A_{i}\)'s, in the following form \[l(\theta,\lambda) = l_{\mathbf{x}_{0}}(\theta,\lambda)=\frac{1}{N}\sum_{i=0}^{N-1}\left(\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(u_{\theta}(\mathbf{X}_{i+1}^{(m)})-u_{\theta}(\mathbf{X}_{i}^{(m)})-(\lambda-V(\mathbf{X}_{i}^{(m)}))u_{\theta}(\mathbf{X}_{i}^{(m)})\Delta t_{i}\right)\right)^{2} \tag{2.16}\] \[+\beta l_{reg}(\theta),\] where the subscript in \(l_{\mathbf{x}_{0}}\) indicates that all the sampled paths of the stochastic process start from \(\mathbf{x}_{0}\), and a regularization term \(l_{reg}(\theta)\) is added for specific needs to be discussed later. The DeepMartNet approximation for the eigenvalue \(\lambda\sim\lambda^{*}\) will be obtained by minimizing the loss function \(l(\theta,\lambda)\) using stochastic gradient descent, \[(\theta^{*},\lambda^{*})=\arg\min l(\theta,\lambda). \tag{2.17}\] **Remark 2.2**.: **(Mini-batch in SGD training and Martingale property)** Due to the equivalence between (2.9) and (2.7), the loss function defined above ensures that \(M_{t}\) of (2.6) for \(u_{\theta}(\mathbf{x})\) will approximately be a Martingale if the mini-batch \(A_{i}\) explores all subsets of the sample space \(\Omega^{\prime}\) during the SGD optimization process of the training, the sample size \(M=|\Omega^{\prime}|\to\infty\), the time step \(\max|\Delta t_{i}|\to 0\), and the training converges. Also, if we take \(A_{i}=\Omega^{\prime}\) for all \(i\), there will be no stochasticity in the gradient calculation for the loss function; we will have a traditional full gradient descent method, and the full Martingale property for \(u_{\theta}(\mathbf{x})\) is not enforced either. Therefore, the mini-batch practice in DNN SGD optimization corresponds perfectly with the Martingale definition (2.7).
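To make the construction concrete, here is a minimal NumPy sketch of the empirical loss (2.16) without the regularizer \(l_{reg}\); the callables `u` and `V` stand for a candidate eigenfunction and the potential, assumed vectorized over batches of points. In practice `u` would be the network \(u_{\theta}\), differentiated through by an automatic differentiation framework:

```python
import numpy as np

def martingale_loss(u, V, lam, X, dt, batch_size, rng=np.random.default_rng(0)):
    """Empirical Martingale loss of Eq. (2.16), without the regularizer.
    u: (B, d) -> (B,) candidate eigenfunction; V: (B, d) -> (B,) potential;
    lam: candidate eigenvalue; X: sampled paths of shape (M, N + 1, d);
    dt: uniform time step."""
    M, N_plus_1, _ = X.shape
    loss = 0.0
    for i in range(N_plus_1 - 1):
        A_i = rng.choice(M, size=batch_size, replace=False)  # mini-batch of paths
        Xi, Xi1 = X[A_i, i], X[A_i, i + 1]
        # Discrete Martingale increment of M_t in Eq. (2.6)
        incr = u(Xi1) - u(Xi) - (lam - V(Xi)) * u(Xi) * dt
        loss += np.mean(incr) ** 2                           # squared A_i-average
    return loss / (N_plus_1 - 1)
```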
**Remark 2.3**.: (regularizer \(l_{reg}(\theta)\)). Due to the non-uniqueness of the eigenvalues, we will need to introduce a constraint if we intend to compute the lowest eigenvalue (the ground state for quantum systems). The Rayleigh energy can be used for this purpose for zero drift and a constant diffusion coefficient, \[l_{reg}(\theta)=\int_{\Omega}\left(\nabla^{\top}u_{\theta}\frac{\sigma\sigma^{\top}}{2}\nabla u_{\theta}+Vu_{\theta}^{2}\right)dx+\gamma\left(\int_{\Omega}u_{\theta}^{2}d\mathbf{x}-1\right)^{2}, \tag{2.18}\] where the 1-normalization factor for the eigenfunction is also included and the Rayleigh energy integral can be evaluated with a separate and coarse grid. * DeepMartNet for eigenvalue \(\lambda\) and eigenfunction \(u\) As the loss function in (2.16) only involves paths \(\mathbf{X}_{t}\) starting from a fixed point \(\mathbf{x}_{0}\), it may not be able to explore all of the state space of the process; therefore, the minimization problem in (2.17) is only expected to produce a good approximation for the eigenvalue. To achieve a good approximation to the eigenfunction as well, we will need to sample the paths of the process \(\mathbf{X}_{t}\) from \(K\) initial points \(x_{0}^{(k)},1\leq k\leq K\), and define a global loss function \[R(\theta,\lambda)=\frac{1}{K}\sum_{k=1}^{K}l_{x_{0}^{(k)}}(\theta,\lambda), \tag{2.19}\] whose minimizer \((\theta^{*},\lambda^{*})\) is expected to approximate both the eigenfunction and eigenvalue, \[u(x)\sim u_{\theta^{*}},\qquad\lambda\sim\lambda^{*},\] where \[(\theta^{*},\lambda^{*})=\arg\min R(\theta,\lambda). \tag{2.20}\] ### Neumann and Robin Eigenvalue Problem We will illustrate the idea for the Robin eigenvalue problem in the simple case of the Laplacian operator, \[\mathcal{L}=\frac{1}{2}\Delta.\] In probabilistic solutions of Neumann and Robin BVPs, reflecting Brownian motion is needed, which undergoes specular reflections upon hitting the domain boundary, and a measure of such reflections, the local time of the RBM, is also needed. We will introduce the boundary local time \(L(t)\) for reflecting Brownian motion through a Skorohod problem. **(Skorohod problem):** Assume \(D\) is a bounded domain in \(R^{d}\) with a \(C^{2}\) boundary. Let \(f(t)\) be a (continuous) path in \(R^{d}\) with \(f(0)\in\bar{D}\). A pair \((\xi(t),L(t))\) is a solution to the Skorohod problem \(S(f;D)\) if the following conditions are satisfied: 1. \(\xi\) is a path in \(\bar{D}\); 2. \(L(t)\) is a nondecreasing function which increases only when \(\xi\in\partial D\), namely, \[L(t)=\int_{0}^{t}I_{\partial D}(\xi(s))L(ds);\] (2.21) 3. The Skorohod equation holds: \[S(f;D):\qquad\xi(t)=f(t)-\int_{0}^{t}n(\xi(s))L(ds),\] (2.22) where \(n(x)\) stands for the outward unit normal vector at \(x\in\partial D\). For our case, \(f(t)=B_{t}\), and the corresponding \(\xi_{t}\) will be the reflecting Brownian motion (RBM) \(\mathbf{X}_{t}\). As the name suggests, an RBM behaves like a BM as long as its path remains inside the domain \(D\), but it will be reflected back inwardly along the normal direction of the boundary when the path attempts to pass through the boundary.
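As a simple illustration of the Skorohod construction, the one-dimensional half-line case \(D=(0,\infty)\) admits the explicit solution \(\xi(t)=f(t)+L(t)\) with \(L(t)=\max(0,\max_{s\leq t}(-f(s)))\). The sketch below discretizes this special case; the bounded, multi-dimensional setting requires normal reflection as in (2.22) and is not covered here:

```python
import numpy as np

def reflected_bm_1d(x0, T, N, rng=np.random.default_rng(0)):
    """Reflecting Brownian motion on [0, infinity) via the explicit 1-d
    Skorohod map: xi(t) = f(t) + L(t), L(t) = max(0, max_{s<=t} -f(s)),
    where f(t) = x0 + B_t.  L is nondecreasing and increases only when
    xi = 0, as required by Eq. (2.21)."""
    dt = T / N
    increments = rng.normal(0.0, np.sqrt(dt), size=N)
    f = x0 + np.concatenate(([0.0], np.cumsum(increments)))  # free path f(t)
    L = np.maximum.accumulate(np.maximum(0.0, -f))           # boundary local time
    xi = f + L                                               # reflected path, xi >= 0
    return xi, L
```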
The fact that \(\mathbf{X}_{t}\) is a diffusion process can be proven by using a martingale formulation and showing that \(\mathbf{X}_{t}\) is the solution to the corresponding martingale problem with the Neumann boundary condition [3][4]. Due to the fact that the RBM \(\mathbf{X}_{t}\) is a semimartingale [3][4], the Ito formula [6] gives the following \[u(\mathbf{X}_{t})=u(\mathbf{x}_{0})-\int_{0}^{t}cu(\mathbf{X}_{s})dL(s)-\int_{0}^{t}(V(\mathbf{X}_{s})-\lambda)\,u(\mathbf{X}_{s})ds+\int_{0}^{t}\nabla u(\mathbf{X}_{s})\cdot d\mathbf{B}_{s}, \tag{2.23}\] where an additional path integral term involving the local time \(L(s)\) is added compared with (2.5). Again, the last term above being a Martingale, we can define the following Martingale \[M_{t}=u(\mathbf{X}_{t})-u(\mathbf{x}_{0})+\int_{0}^{t}cu(\mathbf{X}_{s})dL(s)+\int_{0}^{t}(V(\mathbf{X}_{s})-\lambda)\,u(\mathbf{X}_{s})ds. \tag{2.24}\] Using this Martingale, the DeepMartNet approach for the Dirichlet eigenvalue problem can be carried out similarly for the Neumann and Robin eigenvalue problems. The sampling of reflecting Brownian motion and the computation of the local time \(L(t)\) can be found in [5]. ## 3 Optimal Stochastic Control ### Martingale Optimality Principle In this section, we will apply the above DeepMartNet to solving the optimal control of solutions to stochastic differential equations with a finite time horizon \(T\). Let us consider the following SDE, \[d\mathbf{X}_{t}=\mu(t,\mathbf{X}_{t},u_{t})dt+\sigma(t,\mathbf{X}_{t})\cdot d\mathbf{B}_{t},\ \ 0\leq t\leq T, \tag{3.1}\] where the control \(u_{t}\in\mathcal{U}\), with \(\mathcal{U}\) the control space consisting of \(\{\mathcal{F}_{t}\}_{t\geq 0}\)-predictable processes taking values in \(U\subset R^{m}\), and \(\{\mathcal{F}_{t}\}_{t\geq 0}\) is the natural filtration generated by \(\mathbf{B}_{t}\). The running cost of the control problem is a function \[c:\Omega\times[0,T]\times U\to R, \tag{3.2}\] and for a feedback control, the dependence of \(c\) on \(\omega\in\Omega\) will be through the state of the system \(\mathbf{X}_{t}(\omega)\), i.e. \[c(\omega,t,u)=c(\mathbf{X}_{t}(\omega),t,u), \tag{3.3}\] and the terminal cost is defined by a \(\mathcal{F}_{T}\)-measurable random variable \[\xi(\omega)=\xi(\mathbf{X}_{T}(\omega)) \tag{3.4}\] where an explicit dependence on \(\mathbf{X}_{T}\) is assumed. For a given control \(u\), the total expected cost is then defined by \[J(u)=E_{u}\left[\xi+\int_{[0,T]}c(\mathbf{X}_{t}(\omega),t,u_{t})dt\right] \tag{3.5}\] where the expectation \(E_{u}\) is taken with respect to the measure \(P^{u}\). The optimal control problem is to find a control \(u^{*}\) such that \[u^{*}=\arg\inf_{u\in\mathcal{U}}J(u). \tag{3.6}\] To present the Martingale principle for the optimal control, we define the expected remaining cost for a given control \(u\) \[J(\omega,t,u)=E_{u}\left[\xi(\mathbf{X}_{T}(\omega))+\int_{[t,T]}c(\mathbf{X}_{s}(\omega),s,u_{s})ds|\mathcal{F}_{t}\right] \tag{3.7}\] and a value process \[V_{t}(\omega)=\inf_{u\in\mathcal{U}}J(\omega,t,u),\text{ and }E[V_{0}]=\inf_{u\in\mathcal{U}}J(u), \tag{3.8}\] and a cost process \[M_{t}^{u}(\omega)=\int_{[0,t]}c(\mathbf{X}_{s}(\omega),s,u_{s})ds+V_{t}(\omega). \tag{3.9}\] The Martingale optimality principle is stated in the following theorem [8]. **Theorem 3.1**.: _(Martingale optimality principle) \(M_{t}^{u}\) is a \(P^{u}\)-submartingale._
_\(M_{t}^{u}\) is a \(P^{u}\)-martingale if and only if the control \(u=u^{*}\) (the optimal control), and_ \[E[V_{0}]=E_{u}[M_{0}^{u^{*}}]=\inf_{u\in\mathcal{U}}J(u).\] Moreover, the value process \(V_{t}(\omega)\) satisfies the following backward SDE (BSDE) \[\left\{\begin{array}{c}dV_{t}=-H(t,\mathbf{X}_{t},\mathbf{Z}_{t})dt+\mathbf{Z}_{t}dB_{t},\quad 0\leq t<T\\ V_{T}(\omega)=\xi(\mathbf{X}_{T}(\omega))\end{array}\right., \tag{3.10}\] where the Hamiltonian \[H(t,\mathbf{x},\mathbf{z})=\inf_{u\in\mathcal{U}}f(t,\mathbf{x},\mathbf{z};u)\] and \[f(t,\mathbf{x},\mathbf{z};u) = c(\mathbf{x},t,u)+\mathbf{z}\alpha(t,\mathbf{x},u),\] \[\alpha(t,\mathbf{x},u) = \sigma^{-1}(t,\mathbf{x})\mu(t,\mathbf{x},u).\] From the Pardoux-Peng theory [7] on the relation between quasi-linear parabolic equations and backward SDEs, we know that the value process as well as \(Z_{t}(\omega)\) can be expressed in terms of a deterministic function \(v(t,x)\), \[V_{t}(\omega) = v(t,\mathbf{X}_{t}(\omega)),\] \[Z_{t}(\omega) = \nabla v(t,\mathbf{X}_{t}(\omega))\sigma(t,\mathbf{X}_{t}(\omega)),\] where the value function \(v(t,\mathbf{x})\) satisfies the following Hamilton-Jacobi-Bellman (HJB) equation \[\left\{\begin{array}{cc}0=\frac{\partial v}{\partial t}(t,\mathbf{x})+\mathcal{L}v(t,\mathbf{x})+H(t,\mathbf{x},\nabla_{x}v\,\sigma(t,\mathbf{x})),&0\leq t<T,\ \mathbf{x}\in R^{d}\\ v(T,\mathbf{x})=\xi(\mathbf{x})\end{array}\right.. \tag{3.11}\] ### DeepMartNet for optimal control \(u^{*}\) and value function \(v(t,\mathbf{x})\) Based on the Martingale optimality principle theorem for the optimal feedback control, we can extend DeepMartNet to approximate the optimal control by a neural network \[u_{t}(\omega)=u_{t}(\mathbf{X}(\omega))\sim u_{\theta_{1}}(t,\mathbf{X}(\omega)), \tag{3.12}\] where \(u_{\theta_{1}}(t,\mathbf{x})\in C([0,T]\times R^{d})\) will be a neural network approximation of a \(d+1\)-dimensional function with network parameters \(\theta_{1}\), and the value function by another network \[v(t,\mathbf{x})\sim v_{\theta_{2}}(t,\mathbf{x}). \tag{3.13}\] The loss function will consist of two parts, one for the control network and one for the value network, \[l(\theta_{1},\theta_{2})=l_{ctr}(\theta_{1})+l_{val}(\theta_{2}),\] where, similar to (2.16), \[l_{ctr}(\theta_{1}) = l_{ctr,\mathbf{x}_{0}}(\theta_{1}) \tag{3.14}\] \[= \frac{1}{N}\sum_{i=0}^{N-1}\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(c(\mathbf{X}_{i}^{(m)},t_{i},u_{\theta_{1}}(t_{i},\mathbf{X}_{i}^{(m)}))\Delta t_{i}+v_{\theta_{2}}(t_{i+1},\mathbf{X}_{i+1}^{(m)})-v_{\theta_{2}}(t_{i},\mathbf{X}_{i}^{(m)})\right)^{2}\] and, by using the Ito formula for \(v_{\theta_{2}}(t,\mathbf{x})\), we can obtain a similar Martingale form for the HJB equation (3.11) and define a similar loss function for the value function \(v(t,\mathbf{x})\) as in (2.16), \[\begin{split}& l_{val}(\theta_{2})=l_{val,\mathbf{x}_{0}}(\theta_{2})\\ =&\frac{1}{N}\sum_{i=0}^{N-1}\left(\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(\begin{array}{c}v_{\theta_{2}}(t_{i+1},\mathbf{X}_{i+1}^{(m)})-v_{\theta_{2}}(t_{i},\mathbf{X}_{i}^{(m)})+\\ H(t_{i},\mathbf{X}_{i}^{(m)},\nabla_{x}v_{\theta_{2}}(t_{i},\mathbf{X}_{i}^{(m)})\sigma(t_{i},\mathbf{X}_{i}^{(m)}))\Delta t_{i}\end{array}\right)\right)^{2}\\ &+\beta\frac{1}{M}\sum_{m=1}^{M}(v_{\theta_{2}}(T,\mathbf{X}_{N}^{(m)})-\xi(\mathbf{X}_{N}^{(m)}))^{2}.\end{split} \tag{3.15}\] Again, for better accuracy globally for the control and value networks, we can define a global loss function with more samplings of the starting points \(\mathbf{x}_{0}^{(k)},1\leq k\leq K\), \[R(\theta_{1},\theta_{2})=\frac{1}{K}\sum_{k=1}^{K}\Big{(}l_{ctr,\mathbf{x}_{0}^{(k)}}(\theta_{1})+l_{val,\mathbf{x}_{0}^{(k)}}(\theta_{2})\Big{)}. \tag{3.16}\] The above approach requires an accurate result for the value function \(v(t_{i},\mathbf{X}_{i}^{(m)})\) in the region explored by the process \(\mathbf{X}_{i}^{(m)}\); this could pose a challenge to the DeepMartNet. An alternative approach is to use the FBSDE based learning algorithm in [11], which has been shown to be able to meet this requirement. ## 4 Conclusion In this paper, we introduce a Martingale based neural network for finding the eigenvalue and eigenfunction of general elliptic operators for general types of boundary conditions, solutions of BVPs and IBVPs of PDEs, as well as optimal stochastic controls. Future numerical experiments will be carried out to evaluate the efficiency and accuracy of the proposed algorithm, especially in high dimensions.
2310.17042
StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling
In the rapidly advancing domain of deep learning optimization, this paper unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded Adam algorithm. Central to StochGradAdam is its gradient sampling technique. This method not only ensures stable convergence but also leverages the advantages of selective gradient consideration, fostering robust training by potentially mitigating the effects of noisy or outlier data and enhancing the exploration of the loss landscape for more dependable convergence. In both image classification and segmentation tasks, StochGradAdam has demonstrated superior performance compared to the traditional Adam optimizer. By judiciously sampling a subset of gradients at each iteration, the optimizer is optimized for managing intricate models. The paper provides a comprehensive exploration of StochGradAdam's methodology, from its mathematical foundations to bias correction strategies, heralding a promising advancement in deep learning training techniques.
Juyoung Yun
2023-10-25T22:45:31Z
http://arxiv.org/abs/2310.17042v2
# StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling ###### Abstract In the rapidly advancing domain of deep learning optimization, this paper unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded Adam algorithm. Central to StochGradAdam is its gradient sampling technique. This method not only ensures stable convergence but also leverages the advantages of selective gradient consideration, fostering robust training by potentially mitigating the effects of noisy or outlier data and enhancing the exploration of the loss landscape for more dependable convergence. In both image classification and segmentation tasks, StochGradAdam has demonstrated superior performance compared to the traditional Adam optimizer. By judiciously sampling a subset of gradients at each iteration, the optimizer is optimized for managing intricate models. The paper provides a comprehensive exploration of StochGradAdam's methodology, from its mathematical foundations to bias correction strategies, heralding a promising advancement in deep learning training techniques. ## 1 Introduction Deep learning, with its ability to model complex relationships and process vast amounts of data, has revolutionized various fields from computer vision to natural language processing [11; 20]. The heart of deep learning lies in the optimization algorithms that tune model parameters to minimize loss and increase accuracy [28]. The choice of an optimizer can significantly influence a model's convergence speed, final performance, and overall stability [3]. In this rapidly evolving arena, we continuously strive for more efficient and powerful optimization techniques. In our pursuit of advancing optimization methodologies, our focus extends beyond merely the architectural intricacies of models. Instead, we place significant emphasis on the mechanisms governing weight updates during the training phase. Renowned optimization algorithms such as Adam [17], RMSProp [36], and Adagrad [9] have traditionally been the cornerstones that have ushered numerous models to achieve exemplary performance. Yet, the evolving landscape of deep learning raises a pertinent question: Is there potential to further enhance these optimization strategies, especially when applied to expansive and intricate neural architectures? To address this query, we introduce StochGradAdam, our novel optimizer that incorporates gradient sampling as a pivotal technique to enhance the accuracy of models. This gradient sampling method not only assists in optimizing the training process but also contributes significantly to the improvement of the model's generalization on unseen data. In our empirical evaluation, we subject convolutional neural networks (CNNs) such as ResNet [14], VGG [32], and MobileNetV2 [29], as well as the Vision Transformer model (ViT) [8], to a comprehensive array of tests. Our examination extends beyond just test accuracy and loss. We delve deeper, investigating the entropy of the class predictions. Entropy, in this context, measures the uncertainty in the model's predictions across classes [7]. A model with lower entropy exudes confidence in its predictions, while one with higher entropy reflects greater uncertainty. By harnessing the strength of StochGradAdam with gradient sampling, we have observed a decrease in entropy, indicating more confident predictions by the models.
Through studying entropy, we strive to gain a more nuanced understanding of the model's behavior, shedding light on aspects that might remain obscured when solely relying on traditional metrics [38]. ``` 1:Step size (learning rate) \(\alpha\) 2:Decay rates \(\beta_{1},\beta_{2}\in[0,1)\) for the moment estimates 3:Stochastic objective function \(f(\theta)\) with parameters \(\theta\) 4:Initial parameter vector \(\theta_{0}\) 5:Sampling rate \(s\) 6:Initialize \(m\) and \(v\) as zero tensors \(\triangleright\) Moment vectors 7:while\(\theta\) not converged do 8:Get gradient \(g\) with respect to the current parameters \(\theta\) 9:Generate a random mask \(mask\) with values drawn uniformly from \([0,1]\) 10:\(grad\_mask\gets mask<s\)\(\triangleright\) Mask gradient based on sampling rate 11:\(grad\_sampled\leftarrow\) where\((grad\_mask,g,0)\) 12:\(\beta_{1\_t}\leftarrow\beta_{1}\times\text{decay}\) 13:\(m_{t}\leftarrow\beta_{1\_t}m+(1-\beta_{1\_t})grad\_sampled\) 14:\(v_{t}\leftarrow\beta_{2}v+(1-\beta_{2})grad\_sampled^{2}\) 15:\(m_{corr\_t}\leftarrow\frac{m_{t}}{1-\beta_{1\_t}^{t+1}}\) 16:\(v_{corr\_t}\leftarrow\frac{v_{t}}{1-\beta_{2}^{t+1}}\) 17:\(\theta\leftarrow\theta-\alpha\frac{m_{corr\_t}}{\sqrt{v_{corr\_t}}+\epsilon}\) 18:Update \(m\) with \(m_{t}\) and \(v\) with \(v_{t}\) 19:endwhile 20:return\(\theta\)\(\triangleright\) Updated parameters ``` **Algorithm 1** StochGradAdam, a modified version of the Adam optimizer with random gradient sampling. See Section 3 for a detailed explanation of the proposed optimizer's algorithm. Following these findings, we further elucidate the contribution of StochGradAdam. This optimizer results from rigorous research and embodies state-of-the-art algorithmic principles. Preliminary results suggest that StochGradAdam exhibits performance on par with, if not superior to, existing optimization methods. A detailed explanation of the algorithm and its update rule is provided in Section 3. ## 2 Related Works The exploration of gradient-based optimization techniques has been a cornerstone in deep learning research. Over the years, this has given rise to various methodologies that aim to harness the power of gradients in more effective ways. Among them, gradient sampling and techniques with similar goals have piqued the interest of researchers, prompting a more exhaustive dive into the landscape. Stochastic Gradient Descent (SGD) serves as the foundation for many gradient-based methods. It updates the model's parameters using only a subset of the entire dataset. The noise this inherently introduces is balanced by computational efficiency and commendable convergence properties [3]. Delving deeper into gradient behavior, sparse gradient techniques emerged from the observation that many gradient components, despite their existence, might be trivial for model updates. Such techniques therefore prioritize gradients that exceed certain thresholds, aiming for updates that are sparser yet potentially more informative [39]. Adaptive sampling techniques, an evolution in gradient sampling, tailor the subset of gradients under consideration by observing the gradients' historical behavior. These techniques operate under the hypothesis that gradients showing significant fluctuations over time might be more pivotal for efficient optimization [42]. Gradient sampling methods [6] for nonsmooth optimization offer a distinct approach, especially beneficial for tackling nonsmooth problems.
They rely on the concept of randomly sampling gradients, a strategy particularly beneficial when the gradient might be challenging to compute or might not exist at all. The RSO (random search optimization) technique by Tripathi and Singh [37] offers a unique perspective on optimization. Instead of relying on direct gradient computations, RSO introduces perturbations to weights and evaluates their impact on the loss function. This approach becomes particularly beneficial in situations where computing the gradient is either intricate or entirely unfeasible. The essence of RSO underscores the notion that for certain problems, venturing into the vicinity of randomly initialized networks without the need for exact gradient computations can be sufficient for optimization. Contrastingly, the StochGradAdam method amalgamates the principles of gradient sampling with the renowned Adam optimizer. While both RSO and gradient sampling diverge from traditional gradient-based methods, StochGradAdam's approach is distinct. It capitalizes on gradient information, even if it is sampled, to guide the optimization process. This gradient-centric nature of StochGradAdam allows it to potentially provide more precise weight adjustments. The key differentiation between RSO and StochGradAdam lies in their treatment of gradients: while RSO bypasses them in favor of random perturbations, StochGradAdam harnesses sampled gradients to inform its optimization steps. In this rich tapestry of gradient-based methodologies, our StochGradAdam emerges with distinction. Instead of merely adopting the usual random sampling approach, our method pulsates with adaptive intelligence. It aligns with the training phase's nuances and the inherent gradient variances, balancing exploration and exploitation. Furthermore, its standout feature lies in the harmonious melding of gradient sampling with momentum intricacies. This symbiosis magnifies the collective strengths of both strategies. Moreover, our technique adeptly navigates the terrain of sparse gradients, ensuring precision with each update. The vast landscape of gradient manipulation methods might seem saturated at first glance. However, the introduction and success of StochGradAdam reiterate that there is always room for innovative, impactful strategies in gradient-based optimizations. ## 3 Methodology: StochGradAdam Optimizer The StochGradAdam optimizer is an extension of the Adam optimizer [17], incorporating selective gradient sampling to bolster optimization efficacy. Its principal update rule is: \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\frac{m_{\text{corr}_{t}}}{\sqrt{v_{\text{corr}_{t}}}+\epsilon}, \tag{1}\] where \(\alpha\) symbolizes the learning rate, \(m_{\text{corr}_{t}}\) is the bias-corrected moving average of the gradients, and \(v_{\text{corr}_{t}}\) is the bias-corrected moving average of the squared gradients. The following sections elaborate on the inner workings of this formula. ### Preliminaries Let a trainable parameter be denoted as \(\mathbf{w}\in\mathbb{R}^{d}\), where \(\mathbb{R}^{d}\) is the space of all possible parameters.
The optimizer maintains two state variables for each such parameter: \[m(\mathbf{w}) :\text{Moving average of the gradients with respect to }\mathbf{w}.\] \[v(\mathbf{w}) :\text{Moving average of the squared gradients with respect to }\mathbf{w}.\] Furthermore, the hyperparameters are: \[\beta_{1},\beta_{2} \in(0,1):\text{Exponential decay rates}.\] \[\text{decay}\in\mathbb{R}^{+}:\text{Decay multiplier for }\beta_{1}.\] \[s\in(0,1):\text{Probability for gradient sampling}.\] \[\epsilon\in\mathbb{R}^{+}:\text{Constant ensuring numerical stability}.\] ### Gradient Sampling Gradient sampling, in the context of optimization, is a technique where a subset of the gradients is randomly selected during the optimization process. This method not only promotes more robust training by sifting through gradient components, potentially reducing the influence of noisy or outlier data, but also enhances the exploration of the loss landscape, leading to more reliable convergence. Additionally, our approach to gradient sampling is designed to adaptively choose the sampling rate based on training dynamics, ensuring that more relevant gradients are considered during pivotal training phases. #### 3.2.1 Stochastic Mask Generation Given a gradient \(\mathbf{g}\), the objective is to determine whether each component of this gradient should be considered in the update. To this end, a stochastic mask \(\Omega\) is introduced. Each component of \(\Omega\) is independently derived by drawing from the uniform distribution \(\mathcal{U}(0,1)\): \[\Omega_{i}=\begin{cases}1&\text{if }\mathcal{U}(0,1)<s,\\ 0&\text{otherwise},\end{cases} \tag{2}\] for \(i=1,2,\ldots,d\), where \(d\) represents the dimensionality of \(\mathbf{g}\). Here, \(\mathcal{U}(0,1)\) denotes a uniform random variable over the interval \([0,1]\), and \(s\) is a predefined threshold dictating the average portion of gradients to be sampled. #### 3.2.2 Computing the Sampled Gradient With the stochastic mask in hand, the next objective is to compute the sampled gradient, denoted by \(\phi\). This is accomplished by executing an element-wise multiplication between \(\mathbf{g}\) and \(\Omega\): \[\phi_{i}=\Omega_{i}\times g_{i}, \tag{3}\] for \(i=1,2,\ldots,d\). Thus, we get: \[\phi=\Omega\odot\mathbf{g}, \tag{4}\] where \(\odot\) signifies element-wise multiplication, ensuring only the components of the gradient flagged by \(\Omega\) influence the sampled gradient. The underlying idea of gradient sampling is rooted in the belief that not all gradient components are equally informative. By stochastically selecting a subset, one can potentially accelerate the optimization process without sacrificing much in terms of convergence properties. Moreover, this also introduces a form of noise, which can, in some cases, assist in escaping local minima or saddle points in the loss landscape.
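A minimal NumPy sketch of Eqs. (2)-(4), mask generation followed by the element-wise product, reads as follows (the function name is ours, not from the released implementation):

```python
import numpy as np

def sample_gradient(g, s, rng=np.random.default_rng(0)):
    """Eqs. (2)-(4): draw a stochastic mask Omega whose components are
    independently 1 with probability s (else 0), then return the sampled
    gradient phi as the element-wise product of Omega and g."""
    omega = rng.uniform(0.0, 1.0, size=g.shape) < s   # Eq. (2)
    return np.where(omega, g, 0.0)                    # Eqs. (3)-(4)
```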
### State Updates The StochGradAdam optimizer maintains two state variables, \(m\) and \(v\), representing the moving averages of the gradients and their squared values, respectively. Their iterative updates are influenced by the gradient information and specific hyperparameters. #### 3.3.1 Moving Average of Gradients: \(m\) The moving average of the gradients, \(m\), is updated through an exponential decay mechanism. At each iteration, a part of the previous moving average merges with the current sampled gradient: \[m_{t}=\beta_{1}^{t}m+(1-\beta_{1}^{t})\phi, \tag{5}\] Here, \(\beta_{1}\) signifies the exponential decay rate for the moving average of the gradients [17]. The term \(\beta_{1}^{t}\) showcases the adjusted decay rate at the \(t^{th}\) iteration, defined as: \[\beta_{1}^{t}=\beta_{1}\times\text{decay}. \tag{6}\] The function of \(\beta_{1}\) is to balance the memory of past gradients. A value nearing 1 places more emphasis on preceding gradients, yielding a smoother moving average. Conversely, a value nearing 0 focuses on the recent gradients, making the updates more adaptive [28]. #### 3.3.2 Moving Average of Squared Gradients: \(v\) Similarly, \(v\) captures the moving average of the squared gradients. It is updated as: \[v_{t}=\beta_{2}^{t}v+(1-\beta_{2}^{t})\phi\odot\phi, \tag{7}\] Here, \(\beta_{2}\) denotes the exponential decay rate for the moving average of squared gradients [17]. Analogous to \(\beta_{1}\) but for squared values, \(\beta_{2}^{t}\) is the adjusted decay rate at the \(t^{th}\) iteration, defined as: \[\beta_{2}^{t}=\beta_{2}\times\text{decay}. \tag{8}\] The element-wise multiplication \(\odot\) ensures that each gradient component's squared value is computed individually [2]. ### Bias Correction Given the nature of moving averages, especially when initialized with zeros, the early estimates of \(m\) and \(v\) can be significantly biased towards zero. To address this, bias correction is employed to adjust these moving averages [17]. #### 3.4.1 Correcting the Bias in \(m\) The bias-corrected value of \(m\) at the \(t^{th}\) iteration is: \[m_{\text{corr}_{t}}=\frac{m_{t}}{1-\beta_{1}^{t}}, \tag{9}\] Here, the term \(1-\beta_{1}^{t}\) serves as a corrective factor to counteract the initial bias [28]. #### 3.4.2 Correcting the Bias in \(v\) Similarly, for \(v\): \[v_{\text{corr}_{t}}=\frac{v_{t}}{1-\beta_{2}^{t}}, \tag{10}\] This correction ensures that the state variables \(m\) and \(v\) provide unbiased estimates of the first and second moments of the gradients, respectively [17]. ### Parameter Update StochGradAdam optimizes model parameters by adapting to both the historical gradient and the statistical properties of the current gradient [17]. The update rule for model parameter \(\mathbf{w}_{t}\) at iteration \(t\) is: \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\frac{m_{\text{corr}_{t}}}{\sqrt{v_{\text{corr}_{t}}}+\epsilon}, \tag{11}\] The update can be viewed as an adaptive gradient descent step. By normalizing the gradient using its estimated mean and variance, StochGradAdam effectively scales parameter updates based on their historical and current behavior [2]. StochGradAdam synergizes the principles of stochastic gradient sampling with the Adam optimizer's robustness.
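Putting Eqs. (2)-(11) together, the following is an illustrative NumPy sketch of one StochGradAdam update step, not the authors' implementation. Since the bias-correction notation is ambiguous between a constant factor (Eqs. (6) and (9)) and a step-dependent power (Algorithm 1), the sketch follows Algorithm 1; the hyperparameter defaults are assumptions:

```python
import numpy as np

class StochGradAdamSketch:
    """Illustrative NumPy version of Algorithm 1 / Eqs. (5)-(11)."""

    def __init__(self, lr=0.01, beta1=0.9, beta2=0.999, decay=1.0,
                 s=0.8, eps=1e-7, seed=0):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.decay, self.s, self.eps = decay, s, eps
        self.rng = np.random.default_rng(seed)
        self.m = None   # moving average of sampled gradients
        self.v = None   # moving average of squared sampled gradients
        self.t = 0

    def step(self, w, g):
        """Return updated parameters w given the current gradient g."""
        if self.m is None:
            self.m, self.v = np.zeros_like(w), np.zeros_like(w)
        self.t += 1
        # Gradient sampling, Eqs. (2)-(4)
        phi = np.where(self.rng.uniform(size=g.shape) < self.s, g, 0.0)
        b1t = self.beta1 * self.decay                      # Eq. (6)
        b2t = self.beta2 * self.decay                      # Eq. (8)
        self.m = b1t * self.m + (1.0 - b1t) * phi          # Eq. (5)
        self.v = b2t * self.v + (1.0 - b2t) * phi * phi    # Eq. (7)
        m_corr = self.m / (1.0 - b1t ** self.t)            # bias correction, Eq. (9)
        v_corr = self.v / (1.0 - b2t ** self.t)            # bias correction, Eq. (10)
        return w - self.lr * m_corr / (np.sqrt(v_corr) + self.eps)  # Eq. (11)
```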
## 4 Experimental Results In the subsequent section, we delve into the empirical evaluation of the methodologies discussed thus far. The primary objective of our experiments is to validate the theoretical assertions and gauge the performance of our proposed techniques in real-world scenarios. Through a series of meticulously designed experiments, we compare our methods against established benchmarks, thereby offering a comprehensive assessment of their efficiency, robustness, and scalability. Each experiment is crafted to answer specific questions, shedding light on the strengths and potential limitations of our approach. By coupling rigorous experimental design with diverse datasets and tasks, we aspire to provide readers with a holistic understanding of the practical implications of our contributions. ### Image Classification #### 4.1.1 CIFAR-10 In image classification, various deep learning architectures are often deployed and compared to identify the best model for a particular task. This section focuses on the CIFAR-10 dataset [18], a widely recognized benchmark in the field, to compare the performance of several neural network architectures and optimizers. We train each architecture at a learning rate of 0.01 and implement the models using TensorFlow. Figure 1 provides a detailed visualization of the test accuracy for various deep learning architectures when trained using different optimizers. Notably, our newly introduced method is depicted in red, distinguishing its performance from the rest. Figure 1: Comparison of test accuracy over 300 epochs on the CIFAR-10 dataset for various neural network architectures: ResNet-56, ResNet-110, ResNet-156, MobileNetV2, ViT-8, and VGG-16. Three different optimizers - RMSProp (green), Adam (blue), and StochGradAdam (red) - were used to train each model. The graphs showcase how each architecture and optimizer combination performs over the course of training. ResNet-56: Our optimizer manifests a swift uptick in test accuracy during the initial epochs, outpacing other methodologies. Although slight deviations become apparent towards the culmination of training, the overall trend suggests competitive prowess. ResNet-110: With this architecture in play, our optimizer emulates the trajectory observed in ResNet-56. An impressive ascent during the incipient phases is evident, and despite minor oscillations in later epochs, the optimizer's performance remains robust. ResNet-156: This architecture reveals an expeditious surge in test accuracy using our optimizer during early training, subsequently aligning with trends showcased by other optimizer methodologies. MobileNetV2: Our optimizer commences with a formidable performance, either paralleling or slightly eclipsing the Adam optimizer across epochs. Despite some intermittent fluctuations, the trend underscores its commendable adaptability to the MobileNetV2 architecture. ViT-8: Employed on the Vision Transformer model, our optimizer portrays steady advancement in test accuracy, showcasing a consistent and relatively stable performance. The original ViT model was optimized using the Adam method. Notably, when trained with RMSProp at a learning rate of 0.01, the ViT model stagnated, registering a mere 10% test accuracy - reminiscent of random decision-making. VGG-16: Contrasting the trends observed with other architectures, our optimizer, in tandem with other methods, doesn't scale to the pinnacle of test accuracy when employed on VGG-16. While the results closely mirror each other across the optimizer spectrum, our method's performance remains indistinguishable. The rationale for the discernible dip in VGG-16's performance relative to other architectures is hypothesized to stem from gradient diminution. This decrease in gradient magnitude can adversely affect gradient sampling. It's surmised that the unique characteristics of VGG and ViT, particularly in the context of gradient dynamics, might not be optimally suited for our new optimizer. The diminishing gradients observed might be adversely influencing gradient sampling.
A more nuanced exploration of this phenomenon and its implications will be delved into within the "Limitations" section of our research. In summation, our optimizer showcases promising test accuracy metrics across an array of architectures, especially in the nascent training phases. Its rapid convergence coupled with unwavering performance earmarks it as a potent contender for image classification tasks, specifically on the CIFAR-10 dataset. ### Segmentation Following our detailed discussion on classification, we ventured into another pivotal domain of deep learning - segmentation. For our experiments, we adopted the Unet-2 architecture [26] integrated with MobileNetV2 [29], ensuring a balance between computational efficiency and performance. Our optimizer of choice for these experiments was StochGradAdam, running at a learning rate of 0.001. We sourced our dataset from the renowned oxford_iiit_pet dataset [25], which provides a robust set of images for segmentation tasks. The results of our experiments can be observed in Figure 2. A close inspection of the visualizations reveals the strength of our StochGradAdam optimizer. The "Ours Prediction" column demonstrates a more precise and coherent segmentation when compared to other widely-used optimizers like Adam and RMSProp. The boundaries are sharper, and the segmentation masks align more accurately with the true masks, accentuating the prowess of StochGradAdam in driving better model performance. To conclude, StochGradAdam not only showed promise in the realm of classification but also established its potential in segmentation tasks. Our findings are corroborated by the comparative visual analysis, setting a new benchmark for future optimizers. Figure 2: Comparative visualization of segmentation results on the oxford_iiit_pet dataset using the Unet-2 architecture integrated with MobileNetV2 across different optimizers, including StochGradAdam, Adam, and RMSProp. ## 5 Analysis: Uncertainty Reduction in Prediction Probabilities Having observed the performance of various optimizers in our experimental results, one might wonder about the intricacies beyond mere accuracy metrics. The efficacy of an optimizer is not solely gauged by its ability to minimize the loss function but also by its influence on the uncertainty of the model's predictions. In the realm of deep learning, where models make probabilistic predictions across multiple classes, it's crucial to delve into how different optimizers shape these probabilities. In this section, we explore the role of optimizers in determining the entropy of prediction probabilities and discuss its implications for uncertainty in predictions. ### Entropy: A Measure of Uncertainty For a discrete probability distribution \(P=(p_{1},p_{2},...,p_{n})\), entropy, denoted as \(H(P)\), provides a measure of the uncertainty or randomness of the distribution [31]: \[H(P)=-\sum_{i=1}^{n}p_{i}\log(p_{i}) \tag{12}\] A distribution that is entirely certain (one probability is 1 and the others are 0) will have an entropy of 0. On the contrary, a uniform distribution, characterized by maximum uncertainty, will possess the highest entropy [7]. To ensure the interpretability and comparability of entropy values, especially when working with distributions over different numbers of outcomes, we resort to normalized entropy [27]: \[H_{\text{normalized}}(P)=\frac{H(P)}{\log(n)} \tag{13}\] With this normalization, entropy values are confined between 0 (absolute certainty) and 1 (absolute uncertainty).
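In code, Eqs. (12) and (13) amount to a few lines. The sketch below is ours, with a small clamp added to avoid \(\log(0)\):

```python
import numpy as np

def normalized_entropy(p, eps=1e-12):
    """Eqs. (12)-(13): per-row entropy of prediction probabilities,
    normalized by log(n) so that values lie in [0, 1]."""
    p = np.clip(p, eps, 1.0)               # avoid log(0)
    h = -np.sum(p * np.log(p), axis=-1)    # Eq. (12)
    return h / np.log(p.shape[-1])         # Eq. (13)

# Example: a confident and a uniform 10-class prediction.
print(normalized_entropy(np.array([[0.99] + [0.01 / 9] * 9,
                                   [0.1] * 10])))   # ~[0.03, 1.0]
```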
### The Role of Optimizers An optimizer's main objective is to update model parameters to minimize the associated loss [11]. As this optimization unfolds, the model starts achieving a better fit to the data, leading to predictions marked by heightened confidence [2]. From an entropy perspective, this translates to: \[H(P_{\text{initial}})>H(P_{\text{optimized}}) \tag{14}\] Here, \(P_{\text{initial}}\) stands for the prediction probabilities before the optimization and \(P_{\text{optimized}}\) denotes them post-optimization. The entropy reduction is symbolic of the declining uncertainty in predictions [22]. Different optimizers traverse the parameter space distinctively [28]. While some might directly target the global minima, others might explore the space more expansively. These varied approaches can influence both the rate and magnitude of entropy reduction [17]. To quantitatively gauge the prowess of each optimizer: \[\Delta H_{O}=H(P_{\text{initial}})-H(P_{\text{final}}) \tag{15}\] A superior value of \(\Delta H_{O}\) underlines the optimizer's proficiency in diminishing prediction uncertainty [40]. ### Comparative Visualization: Histograms For a more intuitive understanding of the influence of different optimizers on prediction uncertainty, we can employ histograms that depict the distribution of normalized entropies subsequent to optimization. This analysis was conducted utilizing TensorFlow, with experiments performed on the ResNet-56 [14] architecture applied to the CIFAR-10 dataset [18], all while maintaining a learning rate of 0.01. Figure 3 showcases contrasting characteristics between RMSProp, Adam, and StochGradAdam (Ours) in their approach to prediction uncertainty throughout the epochs. Initially, each optimizer displays a wide distribution in normalized entropy, reflecting a mixture of prediction confidences. However, by the 100th epoch, a distinguishing feature of StochGradAdam becomes apparent. Its histogram is markedly skewed towards the lower normalized entropy values, implying that a significant portion of its predictions are made with high confidence. RMSProp and Adam, on the other hand, also depict enhancements, but they do not attain the same degree of prediction certainty as rapidly as StochGradAdam. As training advances to epochs 200 and 300, StochGradAdam persistently upholds its superior performance, ensuring that its predictions remain substantially certain. Meanwhile, while both RMSProp and Adam make strides, their histograms still display a more distributed range of entropy values, suggesting some residual uncertainties in their predictions. StochGradAdam demonstrates exceptional proficiency in swiftly minimizing prediction uncertainty. This allows models trained with it to reach confident predictions at a faster rate compared to those trained with RMSProp and Adam. This effectiveness potentially makes StochGradAdam a more favorable option for situations that require rapid convergence to assured predictions. ### Comparative Visualization: PCA In the pursuit of comprehending the nuances of various optimizers, visual illustrations provide crucial perspectives. This becomes particularly enlightening when evaluating their efficacy on benchmarked datasets and architectures, exemplified by the performance of ResNet-56 on CIFAR-10, trained over 300 epochs with a learning rate of 0.01.
Dimensionality reduction, facilitated through the renowned technique of Principal Component Analysis (PCA), serves as a pivotal tool to unveil underlying patterns within data structures [16]. Figure 4 offers a side-by-side visualization comparison, portraying the uncertainty landscape of neural networks based on the assessment of 10,000 test data points post-training. Figure 3: Comparison of the distribution of normalized entropy across different optimizers (RMSProp, Adam, and StochGradAdam) at various training epochs (10, 100, 200, and 300). The histograms depict the frequency of a specific range of normalized entropy values, illustrating how the uncertainty in predictions evolves as training progresses. At the outset, around the 10th epoch, all methods, including RMSProp, Adam, and Ours, display a scatter of data points spread across the triangle's breadth. This suggests that the optimizers are still in the initial stages, exploring the feature space to determine the best direction for steepest descent. RMSProp, at this stage, has a denser cluster at the vertex of the triangle with sparse data points radiating outward, indicating a certain segment of data points has found an optimization path, but the majority are still in search [33]. Adam's distribution leans towards the left, hinting at early tendencies to converge to specific regions in the feature space. In contrast, our method shows an evident concentration of data points towards the triangle's base, implying that it has already started discerning the optimal path more effectively than the other two. By the 200th epoch, the differences between the optimizers become more prominent. RMSProp maintains a widespread distribution, indicating progression but still some distance to optimal convergence. Adam displays a tighter congregation of data points around the center, reflecting its ability to reduce prediction uncertainties but not uniformly across all data points [41]. Our method, however, presents a dense clustering at the triangle's lower section, denoting higher prediction confidence. This suggests that our optimizer is not only adept at reducing prediction uncertainties but also exhibits superior dense clustering ability, which is indicative of its robustness and consistency [33]. By the 300th epoch, our method underscores its superiority with data points compactly clustered, indicating minimal prediction uncertainty and reinforcing the idea that dense clustering often translates to empirical success in real-world applications [41]. RMSProp and Adam, although showcasing progression, do not match the level of clustering and confidence exhibited by our method, emphasizing our method's superior performance in swiftly navigating the optimization landscape. Figure 4: PCA visualization of data processed with different optimizers at distinct training epochs. Each plot captures the distribution of data points in the reduced dimensional space, with color gradients representing normalized entropy. The visualizations across epochs not only illuminate the progression of each optimizer but also distinctly highlight our method's edge, especially in the latter stages, where it efficiently reduces prediction uncertainty and hints at potentially faster convergence. ### Inference The extent to which an optimizer can curtail prediction uncertainty carries profound implications: * **Robustness:** Diminished uncertainty often signals a model's robustness [12].
A model that consistently yields confident predictions across diverse datasets is likely more resilient against adversarial attacks [35] and noisy data [40]. This robustness is especially crucial in real-world applications where input data can be unpredictable [15]. * **Calibration:** Beyond just producing accurate predictions, a well-calibrated model ensures its predicted probabilities closely mirror the actual likelihoods [13]. This is pivotal in probabilistic forecasting and risk assessment scenarios [10]. When a model's predicted confidence levels align with observed outcomes, users and downstream systems can trust and act upon its predictions with more assurance [19]. * **Decision Making:** In applications from medical diagnostics to financial forecasting, the degree of certainty in a prediction often holds as much weight as the prediction itself [21]. For instance, in medical settings, high certainty in a negative diagnosis can potentially prevent unnecessary treatments, leading to better patient outcomes and cost savings [23]. * **Efficient Resource Allocation:** In large-scale applications, models providing certainty in their predictions allow for better resource allocation [5]. For instance, in automated systems, tasks based on high-certainty predictions can be expedited, while those with low-certainty predictions can be flagged for human review [1]. * **Feedback Loop:** Optimizers reducing prediction uncertainty can also aid in creating a constructive feedback loop during training [4]. As the model becomes more certain of its predictions, the feedback it provides for subsequent training iterations is more reliable, leading to a virtuous cycle of consistent improvement [34]. While the predominant evaluation criterion for optimizers has traditionally been their speed and efficiency in reducing the loss function, their capability to mitigate prediction uncertainties is equally vital [30]. Recognizing this can guide researchers and practitioners in making informed choices, ensuring the models they deploy are not only accurate but also reliably confident in their predictions [24]. ## 6 Discussion In our exploration of the StochGradAdam optimizer, we've delved deep into its intricacies, nuances, and potential advantages in the realm of neural architectures. The results garnered from various architectures illuminate not just the merits of our approach but also the subtleties of how different neural architectures respond to gradient manipulations. As with any methodological advance, while the advantages are manifold, it is crucial to be cognizant of the boundaries and constraints. Before presenting our conclusive thoughts on the methodology, it is pertinent to discuss the limitations observed during our study. The understanding of these constraints not only provides clarity about the method's scope but also lays the groundwork for potential future improvements. ### Limitations Our approach to gradient sampling has demonstrated effectiveness in neural architectures like ResNet and MobileNet. These architectures employ residual connections or other mechanisms that help alleviate the vanishing gradient problem, preserving gradient flow throughout the layers. However, deeper architectures, such as VGG, without these mitigating features, have posed challenges. This limitation is likely rooted in the vanishing gradient problem prevalent in deep architectures without such protective mechanisms.
We explain the underlying reason below.

#### 6.1.1 Deep Gradient Vanishing

In a deep architecture, the error gradient at a given layer \(l\) can be approximated by the recursive relation:

\[\delta^{(l)}=(W^{(l)})^{T}\delta^{(l+1)}\circ f^{\prime(l)}(z^{(l)}) \tag{16}\]

Where \(\delta^{(l)}\) is the gradient error for layer \(l\), \(W^{(l)}\) is the weight matrix for layer \(l\), \(f^{\prime(l)}\) is the derivative of the activation function evaluated at the layer's output \(z^{(l)}\), and \(\circ\) denotes element-wise multiplication. When the network is deep and \(|f^{\prime(l)}(z^{(l)})|<1\) for several layers, the product of these derivatives becomes exponentially small, so the gradient \(\delta^{(l)}\) becomes negligible in the initial layers.

#### 6.1.2 Quantitative Analysis of Gradient Decay

Given \(|f^{\prime(l)}(z^{(l)})|\leq\beta\) where \(0<\beta<1\) for all \(l\), then for \(L\) layers:

\[|\delta^{(1)}|\leq\beta^{L}|\delta^{(L)}| \tag{17}\]

If \(L\) is large and \(\beta\) is only slightly less than 1, the gradient at the first layer \(|\delta^{(1)}|\) can be vanishingly small compared to the gradient at the last layer \(|\delta^{(L)}|\). For instance, with \(\beta=0.9\) and \(L=50\), \(\beta^{L}\approx 5\times 10^{-3}\), so first-layer gradients are suppressed by more than two orders of magnitude.

#### 6.1.3 Consequences for Gradient Sampling

Our gradient sampling strategy is contingent upon capturing and updating using the most informative gradient components. In the face of gradient vanishing, the magnitudes in earlier layers are dwarfed, reducing their informativeness. When we stochastically sample from a distribution in which most gradients have negligible magnitude, the variance of the sampled gradients increases. This increase in variance, in tandem with already minute gradients, hampers the directionality of the optimization, leading to inefficient weight updates. While our gradient sampling technique offers promising results in architectures equipped with mechanisms to counteract the vanishing gradient issue, it might not be universally applicable across all deep learning models. Especially for architectures like VGG, which lack built-in gradient preservation mechanisms, more research and adaptations are required to fully leverage the potential of our approach.

### Future Work

There is a pressing need for further research to address the gradient issues observed in certain deep architectures when using the StochGradAdam optimizer. Exploring solutions to mitigate the vanishing gradient problem, especially in architectures without inherent gradient preservation mechanisms, will be crucial. This will not only broaden the optimizer's applicability across architectures but also ensure consistent and efficient training outcomes.

## 7 Conclusion

In the realm of deep learning optimization, the introduction of the StochGradAdam optimizer marks a significant stride forward. Central to its design is the gradient sampling technique, which not only ensures stable convergence but also potentially mitigates the effects of noisy or outlier data. This approach fosters robust training and enhances the exploration of the loss landscape, leading to more dependable convergence. Throughout the empirical evaluations, StochGradAdam consistently demonstrated superior performance in various tasks, from image classification to segmentation. Especially noteworthy is its ability to reduce prediction uncertainty, a facet that goes beyond mere accuracy metrics. This reduction in uncertainty is indicative of the model's robustness and its potential resilience against adversarial attacks and noisy data.
However, like all methodologies, StochGradAdam has its limitations. While it excels in architectures like ResNet and MobileNet, challenges arise in deeper architectures like VGG, which lack certain mitigating features. This limitation is believed to be rooted in the vanishing gradient problem, prevalent in deep architectures without protective mechanisms. Nevertheless, the successes of StochGradAdam underscore the potential for further innovation in gradient-based optimizations. Its rapid convergence, adaptability across diverse architectures, and ability to reduce prediction uncertainty set a new benchmark for future optimizers in deep learning.
2310.01608
Neural Network Emulation of Spontaneous Fission
Large-scale computations of fission properties are an important ingredient for nuclear reaction network calculations simulating rapid neutron-capture process (the r process) nucleosynthesis. Due to the large number of fissioning nuclei contributing to the r process, a microscopic description of fission based on nuclear density functional theory (DFT) is computationally challenging. We explore the use of neural networks (NNs) to construct DFT emulators capable of predicting potential energy surfaces and collective inertia tensors across the whole nuclear chart. We use constrained Hartree-Fock-Bogoliubov (HFB) calculations to predict the potential energy and collective inertia tensor in the axial quadrupole and octupole collective coordinates, for a set of nuclei in the r-process region. We then employ NNs to emulate the HFB energy and collective inertia tensor across the considered region of the nuclear chart. Least-action pathways characterizing spontaneous fission half-lives and fragment yields are obtained using the nudged elastic band method. The potential energy predicted by NNs agrees with the DFT value to within a root-mean-square error of 500 keV, and the collective inertia components agree to within an order of magnitude. The exit points on the outer turning line are found to be well emulated. For the spontaneous fission half-lives the NN emulation provides values that are found to agree with the DFT predictions within a factor of $10^3$ across more than 70 orders of magnitude. Neural networks are able to emulate the potential energy and collective inertia well enough to reasonably predict physical observables. Future directions of study, such as the inclusion of additional collective degrees of freedom and active learning, will improve the predictive power of microscopic theory and further enable large-scale fission studies.
Daniel Lay, Eric Flynn, Samuel A. Giuliani, Witold Nazarewicz, Léo Neufcourt
2023-10-02T19:59:38Z
http://arxiv.org/abs/2310.01608v2
# Neural Network Emulation of Spontaneous Fission

###### Abstract

**Background:** Large-scale computations of fission properties are an important ingredient for nuclear reaction network calculations simulating rapid neutron-capture process (the \(r\) process) nucleosynthesis. Due to the large number of fissioning nuclei potentially contributing to the \(r\) process, a microscopic description of fission based on nuclear density functional theory (DFT) is computationally challenging. **Purpose:** We explore the use of neural networks (NNs) to construct DFT emulators capable of predicting potential energy surfaces and collective inertia tensors across the whole nuclear chart, starting from a minimal set of DFT calculations. **Methods:** We use constrained Hartree-Fock-Bogoliubov (HFB) calculations to predict the potential energy and collective inertia tensor in the axial quadrupole and octupole collective coordinates, for a set of nuclei in the \(r\)-process region. We then employ NNs to emulate the HFB energy and collective inertia tensor across the considered region of the nuclear chart. Least-action pathways characterizing spontaneous fission half-lives and fragment yields are then obtained by means of the nudged elastic band method. **Results:** The potential energy predicted by NNs agrees with the DFT value to within a root-mean-square error of 500 keV, and the collective inertia components agree to within an order of magnitude. These results are largely independent of the NN architecture. The exit points on the outer turning line are found to be well emulated. For the spontaneous fission half-lives the NN emulation provides values that are found to agree with the DFT predictions within a factor of \(10^{3}\) across more than 70 orders of magnitude. **Conclusions:** Neural networks are able to emulate the potential energy and collective inertia well enough to reasonably predict physical observables. Future directions of study, such as the inclusion of additional collective degrees of freedom and active learning, will improve the predictive power of microscopic theory and further enable large-scale fission studies.

## I Introduction

Large-scale calculations of fission properties are an essential ingredient for the modelling of the rapid neutron-capture process (\(r\) process), responsible for the production of roughly half of the nuclei heavier than iron found in nature [1; 2]. Fission determines the range of the heaviest nuclei that can be synthesized during the \(r\) process, recycles the material during the neutron irradiation phase, and shapes the final abundances [3; 4; 5]. Given the large amount of energy released in this decay, the presence of fissioning nuclei can leave fingerprints in the electromagnetic counterpart produced in neutron star mergers [6; 7]. However, as most of the fissioning nuclei produced during the \(r\) process cannot be measured, theoretical predictions are indispensable to perform accurate nuclear reaction network calculations. During the last decades, several efforts have been devoted to the systematic estimation of fission barriers [8; 9; 10; 11; 12; 13], spontaneous fission half-lives [14; 15; 13], and fragment distributions [16; 17; 18; 19; 20] of \(r\)-process nuclei. However, due to the inherent complexities characterizing the theoretical description of the fission process [21], most of the available calculations resort to phenomenological approaches based on simplified assumptions.
This limitation can be overcome by employing nuclear DFT [22; 23; 24], the quantum many-body method based on effective nucleon-nucleon interactions that is applicable across the whole nuclear landscape. However, given its computational cost, using DFT for fission is a daunting task for large-scale studies of \(r\)-process nuclei [25; 21; 26]. As such, DFT emulators can be an invaluable tool to extend the current reach of microscopic fission calculations.

Machine learning has been used with great success in many areas of nuclear physics (see [27] for a recent review on this topic). In particular, machine learning has been used in many DFT studies to emulate potential energy surfaces (PESs), both in quantum chemistry [28; 29; 30; 31] and in nuclear physics [32; 33]. However, these studies have generally focused on emulating individual potential energy surfaces, rather than many nuclei across a portion of the nuclear chart (or many related chemical systems in the quantum chemistry case). In an important study, Ref. [34] succeeded in emulating PESs and other quantities using committees of multilayer neural networks. In this study, we use fully connected, feedforward NNs to emulate the PES and collective inertia tensor, parameterized by the axial quadrupole and octupole moments \(Q_{20}\) and \(Q_{30}\), for nuclei in the \(r\)-process region of the nuclear chart.

The paper is organized as follows: Section II reviews the theoretical approach to spontaneous fission used in this work. Section III describes the characteristics of the employed NNs. Section IV demonstrates the performance of the NNs on the HFB energy and collective inertia tensor, and Sec. V compares the exit points and spontaneous fission half-lives obtained using the DFT inputs and the emulated NN inputs. Finally, conclusions are summarized in Sec. VI.

## II Spontaneous fission within the nuclear density functional theory

Spontaneous fission (SF) is a dynamical process in which the nucleus evolves from the ground state into a split configuration. In the adiabatic approximation, SF is modeled using a finite set of collective variables \(\{q_{i}\}\), usually describing the nuclear shape. The SF half-life can be computed within this approach as \(t_{1/2}=\ln 2/(nP_{\text{fs}})\), where \(n\) is the number of assaults on the fission barrier, and \(P_{\text{fs}}\) is the fission probability, i.e., the probability that the nucleus tunnels through the fission barrier, which can be estimated using the semiclassical Wentzel-Kramers-Brillouin (WKB) approach [35]:

\[P_{\text{fs}}=\frac{1}{1+\exp{(2S(L))}}, \tag{1}\]

where \(S(L)\) is the collective action computed along the stationary trajectory \(L[s]\) that minimizes \(S\) in the multi-dimensional space defined by the collective coordinates:

\[S(L[s])=\frac{1}{\hbar}\int_{s_{\text{in}}}^{s_{\text{out}}}\sqrt{2\mathcal{M}_{\text{eff}}(s)(V(s)-E_{0})}\;ds\,, \tag{2}\]

with \(V\) and \(\mathcal{M}_{\text{eff}}\) being the potential energy and inertia tensor, respectively, computed along the fission path \(L[s]\). The integration limits \(s_{\text{in}}\) and \(s_{\text{out}}\) correspond to the classical inner and outer turning points, respectively, defined by the condition \(V=E_{0}\), where \(E_{0}\) is the collective ground-state zero-point energy stemming from quantum fluctuations in the collective coordinates. While the latter can be estimated from, e.g., the curvature of \(V\) around the ground state (g.s.)
configuration, in many SF studies \(E_{0}\) is taken as a fixed positive constant ranging between 0.5 and 2.0 MeV above the ground-state energy [15; 13]. For simplicity, we follow the latter approach and fix \(E_{0}=E_{\text{g.s.}}\). Throughout this work, we refer to the collective coordinates at \(s_{\text{out}}\) (here, \((Q_{20},Q_{30})\)) as the exit point [36].

From Eq. (2) it can be deduced that the main ingredients required for the estimation of the SF half-lives are the effective potential energy \(V\) and the collective inertia \(\mathcal{M}_{\text{eff}}\). In this work, we compute these quantities by employing the self-consistent mean-field method [22; 24] summarized in the following. Nuclear configurations are obtained by means of the HFB method, where the many-body wave function \(|\Psi\rangle\), described as a generalized quasiparticle product state, is given by the minimization of the mean value of the Routhian:

\[\widehat{\mathcal{H}}^{\prime}=\widehat{\mathcal{H}}_{\text{HFB}}-\sum_{\tau=n,p}\lambda_{\tau}\widehat{N}_{\tau}-\sum_{\mu=1,2,3}\lambda_{\mu}\widehat{Q}_{\mu 0}\,. \tag{3}\]

In Eq. (3), \(\widehat{\mathcal{H}}_{\text{HFB}}\) is the HFB Hamiltonian, and \(\lambda_{p}\) and \(\lambda_{n}\) are the Lagrange multipliers fixing the average numbers of protons and neutrons, respectively. The shape of the nucleus is enforced by constraining the moment operators \(\widehat{Q}_{\mu\nu}\) with multipolarity \(\mu\) and magnetic quantum number \(\nu\). In this work, we explore the evolution of the total energy and collective inertia tensor as functions of the elongation of the nucleus and its mass asymmetry, which are described by the axial quadrupole \(Q_{20}\) and octupole \(Q_{30}\) moment operators, respectively:

\[\widehat{Q}_{20} =\hat{z}^{2}-\frac{1}{2}(\hat{x}^{2}+\hat{y}^{2})\,; \tag{4a}\]
\[\widehat{Q}_{30} =\hat{z}^{3}-\frac{3}{2}(\hat{y}^{2}+\hat{x}^{2})\hat{z}\,. \tag{4b}\]

In order to reduce the computational cost, axial symmetry is enforced in all the calculations (\(\langle\widehat{Q}_{\mu\nu}\rangle=0\) for all \(\nu\neq 0\)), and the additional constraint \(\langle\widehat{Q}_{10}\rangle=0\) is imposed to remove the spurious center-of-mass motion. Finally, the nuclear HFB Hamiltonian \(\widehat{\mathcal{H}}_{\text{HFB}}\) is given by the finite-range density-dependent Gogny nucleon-nucleon interaction. We employ the D1S parametrization [37], which has been widely used in nuclear structure studies across the whole nuclear chart, including the description of fission properties of heavy and superheavy nuclei [38]. The effective potential is then given by \(V=E-E_{\text{rot}}\), where \(E\) is the energy obtained from the HFB equations for the Routhian (3), and \(E_{\text{rot}}\) is the energy correction related to the restoration of rotational symmetry, computed using the approach of Ref. [39]. Calculations are carried out by employing the HFB solver HFBaxial, which solves the HFB equations by means of a gradient method with an approximate second-order derivative [40]. The quasiparticle wave functions are expanded in an axially symmetric deformed harmonic-oscillator single-particle basis, containing states with \(J_{z}\) quantum number up to 35/2 and up to 26 quanta in the \(z\)-direction. The basis quantum numbers are restricted by the condition \(2n_{\perp}+|m|+n_{z}/q\leq N_{z}^{\text{max}}\), where \(q=1.5\) and \(N_{z}^{\text{max}}=17\).
This choice of the basis parameters allows for a proper description of the elongated prolate shapes characteristic of the fission process [41]. The collective inertia tensor \(\mathcal{M}_{\mu\nu}\) is computed within the adiabatic time-dependent HFB (ATDHFB) approximation using the non-perturbative scheme [42; 43; 44]:

\[\mathcal{M}_{\mu\nu}=\frac{\hbar^{2}}{2\dot{q}_{\mu}\dot{q}_{\nu}}\sum_{\alpha\beta}\frac{F_{\alpha\beta}^{\mu*}F_{\alpha\beta}^{\nu}+F_{\alpha\beta}^{\mu}F_{\alpha\beta}^{\nu*}}{E_{\alpha}+E_{\beta}}\,, \tag{5}\]

where \(q_{\mu}\) are the collective coordinates and

\[\frac{F^{\mu}}{\dot{q}_{\mu}}=A^{\dagger}\frac{\partial\rho}{\partial q_{\mu}}B^{*}+A^{\dagger}\frac{\partial\kappa}{\partial q_{\mu}}A^{*}-B^{\dagger}\frac{\partial\rho^{*}}{\partial q_{\mu}}A^{*}-B^{\dagger}\frac{\partial\kappa^{*}}{\partial q_{\mu}}B^{*} \tag{6}\]

is given in terms of the matrices \(A\) and \(B\) of the Bogoliubov transformation, and the corresponding particle \(\rho\) and pairing \(\kappa\) densities. The effective inertia is then given by

\[\mathcal{M}_{\rm eff}=\sum_{\mu\nu}\mathcal{M}_{\mu\nu}\frac{dq_{\mu}}{ds}\frac{dq_{\nu}}{ds}\,. \tag{7}\]

It is important to remark that the \(\mathcal{M}_{\mu\nu}\) components can suffer from rapid oscillations in the presence of single-particle level crossings near the Fermi surface. Such abrupt changes of occupied single-particle configurations produce variations in the derivatives of the densities in Eq. (6), resulting in pronounced peaks of \(\mathcal{M}_{\rm eff}\) along the fission path [43; 44; 45].

The least-action paths (LAPs) are computed using the nudged elastic band (NEB) method [36]. Due to the large number of paths that must be explored, the NEB parameters cannot be tuned by hand. Instead, multiple NEB runs are started, with initial paths ending at various points along the outer turning line. The NEB algorithm depends on two parameters, \(k\) and \(\kappa\), which adjust the spring and harmonic restoring forces, respectively. Not varying \(k\) and \(\kappa\) will, on occasion, miss some LAPs, akin to skipping over a narrow minimum in an optimization routine. Different runs are started for \(k\) and \(\kappa\) in the range \(0.05-10\), for each initial path. These runs converge to a number of different stationary paths. Typically, there is some component of the path that travels along the outer turning line. To select the final tunneling path, the paths are interpolated using 500 points, and truncated when near the outer turning line and within an energy tolerance of 0.5 MeV. The unique paths are chosen based on the clustering of the exit points using the mean-shift algorithm as implemented in scikit-learn [46], and the path corresponding to a given exit point with the least action is chosen as the LAP.

## III Neural networks

In this work, we use feedforward NNs as our emulators. We train separate NNs on the potential energy \(V\) and the components of \(\mathcal{M}\). Each NN takes as input \((A,Z,Q_{20},Q_{30})\), specifying the nucleus and deformation, and outputs the value (either \(V\) or one of the \(\mathcal{M}_{\mu\nu}\)) at that point. As discussed in Sec. IV.1, to further improve NN performance, we rescale the NN inputs to lie between zero and one. We train NNs with a number of hidden layers varying between 2 and 7, with 200 hidden nodes in the first layer and a decreasing number of nodes in each subsequent layer. We use the ReLU activation function, and train to minimize the root-mean-square error in the desired quantity.
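As a concrete reference, the following is a minimal sketch of the deepest emulator variant described above (7 hidden layers with 200-175-150-125-100-75-50 units, as specified in Sec. IV.1). The choice of PyTorch and of the Adam optimizer are our assumptions for illustration; the text specifies only the architecture, the ReLU activation, and the RMSE training objective.

```python
import torch
import torch.nn as nn

# Hidden-layer widths of the deepest variant, as given in Sec. IV.1.
widths = [200, 175, 150, 125, 100, 75, 50]

layers, n_in = [], 4          # inputs: (A, Z, Q20, Q30), rescaled to [0, 1]
for w in widths:
    layers += [nn.Linear(n_in, w), nn.ReLU()]
    n_in = w
layers.append(nn.Linear(n_in, 1))   # one scalar output: V or one M quantity
emulator = nn.Sequential(*layers)

def rmse(pred, target):
    # Root-mean-square error, the training objective named in the text.
    return torch.sqrt(torch.mean((pred - target) ** 2))

# One illustrative training step on a placeholder batch.
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)  # optimizer assumed
x = torch.rand(64, 4)   # scaled (A, Z, Q20, Q30)
y = torch.rand(64, 1)   # target values (e.g., V in MeV)
opt.zero_grad()
loss = rmse(emulator(x), y)
loss.backward()
opt.step()
```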
For each variant of the NN depth, we train multiple NNs, forming a committee of NNs. We then combine the predictions from each NN in the committee in a weighted average, to further reduce the error of the prediction. To train the NNs, we have computed PESs and the collective inertia for 194 nuclei, each on a regular grid with a spacing of 4 b for \(0\leq Q_{20}\leq 248\) b and 6 b\({}^{3/2}\) for \(0\leq Q_{30}\leq 60\) b\({}^{3/2}\). These nuclei are then labeled as training, combining, or validation nuclei, with the latter two sampled randomly across the chart. These different datasets are indicated in Fig. 1. For each nucleus, the entire grid is used in the training/combining/validation. The nuclei in the training set are used to train individual NNs, the nuclei in the combining dataset are used to combine predictions from the committee members in a weighted average, and the nuclei in the validation set are used for validation of the NN predictions. The weights for each committee member are chosen to minimize the root-mean-square error on the nuclei in the combining dataset. As can be seen, most of the nuclei (about 70%) are used for training, with the remaining 30% split equally between the combining and validation datasets. In general, the NN performance is not sensitive to the distribution of training data, provided the NN does not attempt to extrapolate across the nuclear chart. No detailed optimization of the choice of training nuclei was carried out.

As mentioned in Sec. II, it is known that the collective inertia tensor can develop discontinuities and rapid variations due to level crossings. This makes emulation of the tensor challenging, since the tensor components can span many orders of magnitude as a function of deformation. If the NN is trained on the inertia tensor components by themselves, the network predictions are poor. However, while these problems are features of the approximations used to calculate the inertia, the NN can still learn certain features of the inertia tensor by carrying out the eigenvalue decomposition of the inertia tensor,

\[\mathcal{M}=U\Sigma U^{T} \tag{8}\]

where \(U\) is the \(2\times 2\) matrix of eigenvectors and \(\Sigma\) is the diagonal matrix of eigenvalues. Since \(U\) is an orthogonal matrix, we can represent it as an element of SO(2) parameterized by the Euler angle \(\theta\). In this representation, \(\mathcal{M}\) is completely parameterized by its eigenvalues and the angle \(\theta\). So, the NN is trained on \(\theta\) and the log of the eigenvalues at each point \((Q_{20},Q_{30})\). Training on this representation of the tensor is similar to normalizing the network inputs, as both put NN inputs/outputs on a similar scale. Additionally, this forces the tensor predictions to be positive semi-definite. We also transform \(\theta\) to the range \((-\pi/2,\pi/2)\), so that the angles are mostly clustered near zero (on the interval \((0,\pi)\), there are two clusters, one at 0 and one at \(\pi\), which the NN has difficulty learning).

Once the NNs are trained, PESs and inertias are computed on the same grid of deformations as the original DFT calculations. While the NNs can be evaluated at arbitrary \((Q_{20},Q_{30})\), it is less computationally expensive to use a standard cubic spline interpolator on the grid predicted by the NN. Moreover, the LAPs computed using the NN evaluations and the spline interpolator agree well with each other.
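To make the tensor parameterization above concrete, the sketch below maps a symmetric positive-definite \(2\times 2\) inertia tensor to the \((\theta,\log\lambda_{1},\log\lambda_{2})\) representation the NN is trained on, and back. This is an illustrative NumPy implementation of Eq. (8) under our own conventions (eigenvalue ordering, angle folding); the reconstruction is positive semi-definite by construction, as noted in the text.

```python
import numpy as np

def decompose(M):
    """Map a symmetric positive-definite 2x2 tensor to (theta, log eigenvalues)."""
    eigvals, U = np.linalg.eigh(M)
    theta = np.arctan2(U[1, 0], U[0, 0])
    # Fold the angle into (-pi/2, pi/2], as described in the text; sign
    # flips of the eigenvectors leave U diag(lambda) U^T unchanged.
    if theta > np.pi / 2:
        theta -= np.pi
    elif theta <= -np.pi / 2:
        theta += np.pi
    return theta, np.log(eigvals)

def reconstruct(theta, log_eigvals):
    """Inverse map: NN outputs back to a positive semi-definite tensor."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, -s], [s, c]])
    return U @ np.diag(np.exp(log_eigvals)) @ U.T

M = np.array([[2.0e-2, 1.0e-4], [1.0e-4, 5.0e-3]])  # toy inertia values
theta, logs = decompose(M)
assert np.allclose(reconstruct(theta, logs), M)
```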
Due to the relatively large number of LAP calculations, we report the LAPs computed using the spline-interpolated NN predictions, rather than the raw NN evaluations.

## IV Neural Network Quality

Here, we examine the quality of the NNs on both the PES and the collective inertia. In general, we observe that the NN is able to reproduce both the PES and the collective inertia for most of the nuclei under consideration. Moreover, the quality of the NN is relatively stable across the different architectures considered. Throughout this section, we refer to the PES and collective inertia computed using DFT as the reference data, and to the PES and inertia computed using the NN as the NN reconstruction.

### Potential energy surfaces

For a single nucleus, we define the root-mean-square error (RMSE) \(\Delta V(A,Z)\) in energy over the collective domain considered as

\[\Delta V(A,Z)^{2}=\frac{1}{n}\sum_{Q_{20},Q_{30}}[V^{\rm DFT}(Q_{20},Q_{30},A,Z)-V^{\rm NN}(Q_{20},Q_{30},A,Z)]^{2}, \tag{9}\]

where \(n=693\) is the number of grid points evaluated in the PES. A similar quantity can be defined for the components of \(\mathcal{M}_{\rm eff}\), although there \(n\) varies slightly from nucleus to nucleus. Figure 1 shows \(\Delta V(A,Z)\) across the region of the nuclear chart considered, for the deepest NN (7 hidden layers, with 200-175-150-125-100-75-50 hidden units), with rescaled inputs. As can be seen, for most nuclei \(\Delta V(A,Z)\lesssim 0.5\) MeV. Exceptions occur, with most remaining below 1.5 MeV. For some nuclei, such as \({}^{308}\)Cf, \({}^{314}\)Fm, and \({}^{318}\)No, relatively poor performance may be expected: these nuclei are on the outer edge of the region of the nuclear chart considered, and hence the NN is extrapolating from the training region to reach them. For other nuclei, such as \({}^{232}\)Th and \({}^{280}\)Cm, poor performance is unexpected: these nuclei are surrounded by training nuclei, and so should be emulated fairly well. As such, it seems unlikely that poor performance is due solely to the location of the nucleus on the nuclear chart relative to the training data.

To understand the reduced performance, we examine the nuclei in question. Figure 2 shows both the reference PES and its NN reconstruction for \({}^{280}\)Cm. This nucleus is chosen because it has \(\Delta V=2.15\) MeV, which is the largest of all nuclei in the validation set. While the PESs are not identical, features such as the ground state and outer turning line locations, as well as the general shape of the PES, agree quite well. Large discrepancies tend to be limited to the high-energy region of \(Q_{20}\approx 50\) b, \(Q_{30}\gtrsim 30\) b\({}^{3/2}\), which is hardly relevant to fission. We conclude therefore that even for nuclei with larger RMSE, NNs can provide a very reasonable description of the fission path. This aspect will be examined further in Sec. V.

To assess the sensitivity of our results with respect to the NN architecture, we repeated our calculations employing different NN sizes and rescaling the inputs. Figure 3 shows the RMSE (now averaged across all nuclei in a given dataset) across the different datasets for a variety of NN depths. As is generally expected, the training dataset has a monotonically decreasing RMSE as the NN depth increases; this is simply due to the increasing number of tunable parameters in the NN. On the other hand, the RMSE for the combining and validation sets is fairly stable with respect to the number of hidden layers of the NN.
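For reference, the per-nucleus error metric of Eq. (9) is simply the RMSE over the deformation grid; a minimal NumPy sketch:

```python
import numpy as np

def delta_V(V_dft, V_nn):
    """Per-nucleus RMSE of Eq. (9) over the (Q20, Q30) deformation grid.

    V_dft, V_nn: flat arrays holding the reference and emulated potential
    energy at the n = 693 grid points of one nucleus (values in MeV).
    """
    return np.sqrt(np.mean((np.asarray(V_dft) - np.asarray(V_nn)) ** 2))
```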
A general improvement is observed when normalizing the inputs \((A,Z,Q_{20},Q_{30})\) to lie between 0 and 1. This is due to two factors. First, the network initialization is not scale-invariant. An input much larger than 1 is equivalent to a large initial weight with a normalized input. Because the final NN weights are expected to be small (hence initializations such as the Xavier scheme [47], used in this work), the initial weights are far from the final values, and convergence slows. Second, the optimization method itself is not scale-invariant: non-normalized inputs correspond to an ill-conditioned Hessian matrix, in which case gradient descent (and related methods) converge slowly [48; 49].

Figure 1: \(\Delta V(A,Z)\) (in MeV) for the deepest NN. The different shapes indicate which dataset each nucleus belongs to.

Figure 2: The reference PES (left) and the NN reconstruction (right) for \({}^{280}\)Cm, in MeV. The ground state is marked with a \(\times\) symbol.

We conclude that the NN performance in predicting the PES is relatively stable with respect to the NN architecture; Sec. V will demonstrate that performance on this level is adequate for predicting SF observables.

### Collective Inertia

Since the components of \(\mathcal{M}\) vary across multiple orders of magnitude and the network is trained on the log of the eigenvalue decomposition, a loss function such as the root-mean-square error is not an adequate measure of the performance of the NN. Instead, Fig. 4 shows the reference inertia components plotted against the NN reconstructions, for all nuclei considered. The NN used is the 7-hidden-layer NN with rescaled inputs, with the number of hidden units as described in Sec. IV.1. The diagonal components \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\) are predicted fairly well, as the distributions align roughly along the diagonal. It is worth noting that the distributions are slightly misaligned in all data sets considered, indicating that the NN tends to underpredict relatively large values and overpredict relatively small values. This, in turn, shows that the NN is slightly biased towards the mean value of the inertia. However, the off-diagonal component \(|\mathcal{M}_{23}|\) is not aligned along the diagonal, except for large values. This is because this component varies across almost 10 orders of magnitude (compared to the 4 orders of magnitude for \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\)), so the NN is biased towards predicting the larger values more accurately, resulting in a general overprediction of \(\mathcal{M}_{23}\). In terms of the angle \(\theta\) that is actually learned by the NN, it is difficult to predict both small and large angles, and because \(\theta\) can be negative, a logarithm transform is not possible. Nevertheless, one obtains a reasonable distribution above \(|\mathcal{M}_{23}|\gtrsim 10^{-4}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\), indicating that some learning has indeed taken place. Moreover, the poorly learned values below \(10^{-4}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\) are truncated at values of \(10^{-6}-10^{-2}\,\text{MeV}^{-1}\,\text{b}^{-5/2}\). When changing the depth of the NN, performance is similar. For shallow networks, predictions on the training dataset show a larger bias: the distribution of points on the inertia plot is less aligned with the diagonal for the \(\mathcal{M}_{22}\) and \(\mathcal{M}_{33}\) components. In other words, the larger reference values are underestimated, and the smaller reference values are overestimated.
The validation dataset is aligned similarly to that of the deepest network, shown in Fig. 4. As the depth of the network is increased, the training data points align more closely with the diagonal. This is indicative of the NN tending to overfit the training data as the number of variational parameters increases. The distribution of \(\mathcal{M}_{23}\) values remains approximately the same when increasing the NN depth, with a slight improvement on the truncated \(\mathcal{M}_{23}\) values. In general, the NN performance on the validation dataset is mostly stable when varying the NN depth. The overarching question is whether this performance is sufficient for predicting observable quantities of interest. As with the PES, this question can be directly answered by looking at NN predictions of physical observables.

Figure 3: The RMSE for a variety of different NN sizes. The dashed line shows the same-depth NN, but with input variables normalized to the range \([0,1]\).

## V Impact on Observable Quantities

While encouraging, the results discussed in Sec. IV do not give a complete picture of the performance of the NNs. For instance, the NN reconstruction of the PES for \({}^{280}\)Cm may be adequate for reproducing fission observables, especially SF fragment yields and half-lives, despite the poor RMSE, since the largest deviations occur at deformations that will not be explored by LAPs. Similarly, the NN commonly fails to reproduce the off-diagonal component of the collective inertia, \(\mathcal{M}_{23}\), but primarily for small values of \(\mathcal{M}_{23}\). Here, we examine the performance of the NN on the lifetime-weighted exit point, as a proxy for the fragment yield [20, 50], and on the half-life of the nucleus. For both quantities, we compare three sets of data: the quantity computed using (i) the reconstructed PES and the identity inertia; (ii) the reference PES and the reconstructed inertia; and (iii) the reconstructed PES and inertia. In this way, we can isolate the impact of the PES and inertia emulations separately, and combine them to assess the overall error of the emulator. In this section, we use the 7-hidden-layer NN with rescaled inputs, with the number of hidden units described in Sec. IV.1. Based on the relative insensitivity to the depth of the NN shown in Sec. IV, the overall performance is expected to be similar for different NN depths. As in Sec. IV, exit points and SF half-lives computed using only DFT inputs will be referred to as reference quantities; those with any NN input will be referred to as reconstructed quantities.

### Exit Points

As demonstrated in [20, 50], the location of the exit points is sufficient for roughly estimating the fission fragment yields. For this reason, we can consider exit points as reasonable proxies for the fragment yields. When multiple fission channels exist, the combined fragment yields are obtained by adding the yields of each channel, weighted by the probability of populating that channel. Thus, agreement of the lifetime-weighted exit point indicates strong agreement in the fission fragment yields (and, by necessity, indicates that the dominant fission mode is also in agreement between the reference data and the NN reconstruction). Figure 5 shows the difference in the octupole moment of the lifetime-weighted exit point, for configuration (iii) mentioned above. The octupole moment is chosen because it is critical for explaining multimodality in SF.
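Before examining Fig. 5 in detail, the lifetime-weighted exit point can be assembled from the per-channel tunneling probabilities; a minimal sketch follows, in which the use of the WKB probability of Eq. (1) as the per-channel weight is our reading of "lifetime-weighted".

```python
import numpy as np
from scipy.special import expit  # expit(x) = 1 / (1 + exp(-x))

def lifetime_weighted_exit_point(actions, exit_points):
    """Average the exit points of the unique LAPs, weighted by the WKB
    tunneling probability of Eq. (1), P_fs = 1 / (1 + exp(2 S(L))).

    actions: array (n_paths,) of collective actions S(L), one per LAP.
    exit_points: array (n_paths, 2) of (Q20, Q30) exit-point coordinates.
    """
    weights = expit(-2.0 * np.asarray(actions))  # numerically stable Eq. (1)
    weights = weights / weights.sum()
    return weights @ np.asarray(exit_points)
```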
The \(Q_{30}\) error is similar for the other configurations, and the quadrupole moment is typically within \(\pm 1\,\mathrm{b}\) for all configurations. The agreement between the reference exit point and the NN reconstruction is good: at \(\pm 1\,\mathrm{b}^{3/2}\), we expect the fragment yields to agree well (within the hybrid method of Refs. [20, 50]). This agreement is mainly due to the accurate PES reconstruction, as previous studies have shown that the exit point location is fairly robust with respect to variations in the collective inertia [50, 51, 52, 45]. This agreement holds even for nuclei whose PES reconstruction has a large error, such as \({}^{280}\)Cm, indicating that the qualitative features shown in Fig. 2 are reconstructed well enough to describe multimodality in SF.

Figure 5: The \(Q_{30}\) component of the reconstructed lifetime-weighted exit point, minus the reference \(Q_{30}\) component, in \(\mathrm{b}^{3/2}\). These results were computed using configuration (iii), i.e., the NN was used to reconstruct both the PES and the collective inertia.

Notice, however, that the exit point locations are not reproduced perfectly for some nuclei, especially in the thorium (\(Z=90\)) chain, where the difference can be as much as \(5\,\mathrm{b}^{3/2}\). This is not due to the PES reconstruction: Figure 1 shows that the thorium isotopes have RMSE \(\Delta V(Z=90)\lesssim 100\) keV, and the exit point reconstruction for configuration (i) is within \(1\,\mathrm{b}^{3/2}\) of the reference value. Additionally, a side-by-side comparison of the collective inertia components does not show a systematic deviation between the reference inertia and the NN reconstruction. Nevertheless, the error is due to the inaccurate collective inertia reconstruction; it is simply not a systematic error. Rather, random error is present at every deformation considered, and it is the accumulation of this random error that causes the discrepancy. While the location of any individual exit point is not sensitive to the collective inertia, the probability of tunneling to a particular point is given by Eq. (1). Because this probability is exponentially dependent on the action (and therefore exponentially dependent on the collective inertia reconstruction), comparatively small errors can add up and actually switch the dominant exit point, from asymmetric to symmetric and vice versa. This is especially important for nuclei with a wide fission barrier, as the cumulative error along the path is large. In general, we observe that both the PES and the collective inertia are emulated well enough to predict exit points that agree with the reference data. Moreover, for most nuclei, the dominant mode is also in agreement. Together, this means that the SF fragment yields are in agreement between the reference data and the NN reconstruction for most nuclei under consideration.

### Spontaneous fission half-lives

In this section we examine the performance of the NN when predicting the SF half-life, \(t_{1/2}^{\text{sf}}\). For the sake of simplicity, we do not include triaxiality and pairing correlations as collective degrees of freedom, despite their large impact on the predicted \(t_{1/2}^{\text{sf}}\) [53; 54; 55; 56; 51]. Figure 6 shows \(t_{1/2}^{\text{sf}}\) computed using the reference data vs. \(t_{1/2}^{\text{sf}}\) computed using the NN reconstruction, for configurations (i) and (iii) mentioned above (results for configuration (ii) are similar to those of (iii)).
As can be seen, the \(t_{1/2}^{\text{sf}}\) predictions agree well, typically within 3 orders of magnitude across the approximately 80 orders of magnitude under consideration. Figure 6(a) demonstrates that the PES reconstruction is sufficient to predict \(t_{1/2}^{\text{sf}}\) values that agree well with the reference values. As with the SF fragment yields, this is true even for nuclei with a large \(\Delta V\), e.g., \({}^{280}\)Cm, once again demonstrating that the PES emulation quality is indeed sufficient to reproduce SF observables. Figure 6(b) includes the collective inertia emulation. As can be seen, the reconstructed \(t_{1/2}^{\text{sf}}\) values agree less well with the reference values, although the disagreement is still within 3 orders of magnitude for most nuclei. This is not unexpected: Sec. V.1 shows that the collective inertia emulation, while sufficient for most nuclei, is not accurate enough for all nuclei. Similar to Sec. V.1, the reason for the disagreement in \(t_{1/2}^{\text{sf}}\) is the accumulation of random errors as the fission pathway crosses the fission barrier. Now, rather than changing the dominant fission mode, \(t_{1/2}^{\text{sf}}\) is simply shifted from the reference value in a more-or-less random manner. The effect is most prominent for long-lived nuclei, where errors in the collective inertia add up to a fairly large value as the pathway traverses a wider fission barrier. While it may be desirable in principle to improve the emulation, the nuclei whose \(t_{1/2}^{\text{sf}}\) values are reproduced with a large error are those predicted to be stable against SF within the \((Q_{20},Q_{30})\) collective space. As such, errors in the SF observables have little effect on results that further depend on \(t_{1/2}^{\text{sf}}\), such as \(r\)-process network calculations. The inset panels in Fig. 6 magnify the range \(10^{-5}-10^{10}\) s, to highlight the relevant \(r\)-process range. As can be seen, almost all nuclei within this range are reproduced within three orders of magnitude. Therefore, we conclude that NNs are able to reproduce both the PES and the collective inertia well enough that \(t_{1/2}^{\text{sf}}\) is reproduced within 3 orders of magnitude for nuclei for which SF is relevant in the \(r\)-process region.

## VI Conclusions

In this work, we have shown that fully connected feedforward NNs are able to emulate both the potential energy and the collective inertia across a region of the nuclear chart, in the collective space consisting of the axial quadrupole and octupole moments. In general, the emulation error on the potential energy is about 500 keV, and the largest discrepancies are found in high-energy regions far from the fission path. The inertia tensor is reproduced within roughly an order of magnitude. We find that the NN performance is stable with respect to changes in the architecture, while the rescaling of input variables produces a general overall improvement. Most of the exit points predicted by the NN agree with the DFT predictions within a \((\Delta Q_{20},\Delta Q_{30})=(2\,\text{b},1\,\text{b}^{3/2})\) range. The SF half-lives are usually reproduced within a factor of \(10^{3}\) over a span of more than 70 orders of magnitude. We find that the largest source of discrepancies is the emulation of the collective inertia tensor, due to the rapid changes of the inertia tensor in regions where single-particle level crossings are present.
For some very long-lived nuclei, the associated error accumulates along the wider fission barrier. Conversely, in nuclei where fission can be a major decay mode, the emulations are in good agreement with the DFT reference results.

Figure 6: The half-life predicted using the DFT reference data, \(t_{1/2}^{\text{sf-DFT}}\), plotted against the half-life computed using the NN reconstruction, \(t_{1/2}^{\text{sf-NN}}\). Panel (a) shows configuration (i), in which only the PES is emulated; panel (b) shows configuration (iii), in which both the PES and the collective inertia are emulated. The black line marks the diagonal: \(t_{1/2}^{\text{sf-DFT}}=t_{1/2}^{\text{sf-NN}}\). Gray bars are drawn at \(t_{1/2}^{\text{sf-DFT}}\times 10^{\pm 3}\), i.e., 3 orders of magnitude above and below the diagonal. Insets show the range \(10^{-5}-10^{10}\,\text{s}\), to highlight the relevant \(r\)-process range.
2304.05297
Neural Network Approach to Portfolio Optimization with Leverage Constraints: a Case Study on High Inflation Investment
Motivated by the current global high inflation scenario, we aim to discover a dynamic multi-period allocation strategy to optimally outperform a passive benchmark while adhering to a bounded leverage limit. To this end, we formulate an optimal control problem to outperform a benchmark portfolio throughout the investment horizon. Assuming the asset prices follow the jump-diffusion model during high inflation periods, we first establish a closed-form solution for the optimal strategy that outperforms a passive strategy under the cumulative quadratic tracking difference (CD) objective, assuming continuous trading and no bankruptcy. To obtain strategies under the bounded leverage constraint among other realistic constraints, we then propose a novel leverage-feasible neural network (LFNN) to represent control, which converts the original constrained optimization problem into an unconstrained optimization problem that is computationally feasible with standard optimization methods. We establish mathematically that the LFNN approximation can yield a solution that is arbitrarily close to the solution of the original optimal control problem with bounded leverage. We further apply the LFNN approach to a four-asset investment scenario with bootstrap resampled asset returns from the filtered high inflation regime data. The LFNN strategy is shown to consistently outperform the passive benchmark strategy by about 200 bps (median annualized return), with a greater than 90% probability of outperforming the benchmark at the end of the investment horizon.
Chendi Ni, Yuying Li, Peter A. Forsyth
2023-04-11T15:48:19Z
http://arxiv.org/abs/2304.05297v2
# Neural Network Approach to Portfolio Optimization with Leverage Constraints: a Case Study on High Inflation Investment

###### Abstract

Motivated by the current global high inflation scenario, we aim to discover a dynamic multi-period allocation strategy to optimally outperform a passive benchmark while adhering to a bounded leverage limit. To this end, we formulate an optimal control problem to outperform a benchmark portfolio throughout the investment horizon. Assuming the asset prices follow the jump-diffusion model during high inflation periods, we first establish a closed-form solution for the optimal strategy that outperforms a passive strategy under the cumulative quadratic tracking difference (CD) objective, assuming continuous trading and no bankruptcy. To obtain strategies under the bounded leverage constraint among other realistic constraints, we then propose a novel leverage-feasible neural network (LFNN) to represent the control, which converts the original constrained optimization problem into an unconstrained optimization problem that is computationally feasible with standard optimization methods. We establish mathematically that the LFNN approximation can yield a solution that is arbitrarily close to the solution of the original optimal control problem with bounded leverage. We further apply the LFNN approach to a four-asset investment scenario with bootstrap resampled asset returns from the filtered high inflation regime data. The LFNN strategy is shown to consistently outperform the passive benchmark strategy by about 200 bps (median annualized return), with a greater than 90% probability of outperforming the benchmark at the end of the investment horizon.

**Keywords:** cumulative tracking difference, leveraged portfolio, benchmark outperformance, asset allocation, machine learning

**JEL codes:** G11, G22

**AMS codes:** 91G, 35Q93, 68T07

## 1 Introduction

Since the global outbreak of COVID-19 in March 2020, there has been a significant increase in worldwide inflation. Specifically, from May 2021 to February 2023, the 12-month change in the U.S. CPI index did not drop below 5% (Bureau of Labor Statistics, 2023). Prior to the pandemic, the U.S. economy experienced nearly four decades of low inflation. The abrupt shift from a long-term low-inflation environment to a high-inflation environment has created substantial uncertainty and volatility in the financial markets. In 2022, the technology-heavy NASDAQ stock index recorded a yearly return of -33.10% (NASDAQ, 2023). Equally concerning is the uncertainty around the duration of this round of high inflation. Some believe that geopolitical tensions and the COVID-19 pandemic will overturn the trend of globalization and lead to global supply chain restructuring (Javorcik, 2020), which may result in a higher cost of production in the foreseeable future. Moreover, Ball et al. (2022) suggest that the future inflation rate may remain high if the unemployment rate remains low.

In this article, we aim to answer the following question: with the goal of outperforming a passive benchmark, how should an active investor optimize the portfolio during high inflation? It is important to note that we do not attempt to make predictions about future inflation conditions. Instead, we approach the problem by formulating a multi-period optimal control problem that considers bounded leverage constraints and specific investment criteria.
This optimal control problem requires the specification of an appropriate objective function, realistic constraints, and stochastic models for the returns of traded assets during high inflation regimes. Given the associated complexities and challenges, it is crucial to develop an efficient method capable of computing optimal solutions, accommodating flexible data sources, handling high-dimensional cases, and dealing with complex constraints. In this paper, we propose a framework to address these challenges.

In Section 2, we assume that the real (inflation-adjusted) asset returns during a high-inflation regime follow stochastic processes and treat allocation decisions as the control of a dynamic system. Specifically, we formulate an optimal control problem to outperform a fixed-mix benchmark portfolio consistently throughout the investment horizon by minimizing a cumulative quadratic tracking difference (CD) objective. There is a substantial extant literature on closed-form solutions for beating a stochastic benchmark under synthetic market assumptions (Browne, 1999, 2000; Tepla, 2001; Basak et al., 2006; Davis and Lleo, 2008; Lim and Wong, 2010; Oderda, 2015; Alekseev and Sokolov, 2016; Al-Aradi and Jaimungal, 2018). In these articles, the common objective function involves a log-utility function, e.g., the log wealth ratio. Under the log wealth ratio formulation, it is often hard to accommodate a fixed stream of cash injections, which is a common characteristic of open-ended funds. Forsyth et al. (2022) consider a scenario where a fixed amount of cash injections is allowed and provide a closed-form solution under a cumulative quadratic tracking difference (CD) objective, given the assumption that the stock price follows a double exponential jump-diffusion model and the bond price is deterministic. Since the assumption that the bond index price is stochastic and has jumps is more reasonable under a high-inflation scenario, we develop a closed-form solution for the case in which both the stock index and the bond index follow jump-diffusion models. The closed-form solution is derived, unfortunately, under unrealistic assumptions such as continuous rebalancing, infinite leverage, and continued trading when insolvent.

A discrete-time multi-period asset allocation problem is generally solved using a dynamic programming (DP) based approach, which converts a multi-step optimization problem into multiple single-step optimization problems. However, van Staden et al. (2023) point out that dynamic programming-based approaches require the evaluation of a high-dimensional performance criterion to obtain the optimal control, which is itself comparatively low-dimensional. This means that solving the discrete-time problem numerically using dynamic programming-based techniques (for example, numerical solutions of the corresponding PIDE (Wang and Forsyth, 2010), or reinforcement learning (RL) techniques (Dixon et al., 2020; Park et al., 2020; Lucarelli and Borrotti, 2020; Gao et al., 2020)) is inefficient and computationally prone to known issues such as error amplification over recursions (Wang et al., 2020). Acknowledging these limitations, in Section 2.7 we propose to use a single neural network model to approximate the optimal control and to solve the original optimal control problem directly via a single standard finite-dimensional optimization.
This direct approximation of the control exploits the lower dimensionality of the optimal control and bypasses the problem of solving high-dimensional conditional expectations associated with DP methods. We note that the idea of using a neural network to directly approximate the control process is also used in Han et al. (2016); Buehler et al. (2019); Tsang and Wong (2020); Reppen et al. (2022), in which they propose a stacked neural network approach that includes individual sub-networks for each rebalancing step. In contrast, we propose a single shallow neural network that includes time as an input feature, and thus avoids the need to have multiple sub-networks for each rebalancing step and greatly reduces the computational and modeling complexity. Furthermore, using time as a feature in the neural network approximation function is consistent with the observation that (under assumptions) the optimal control is naturally a continuous function of time, which we discuss in detail in Section 2.4. The idea of using a single neural network to approximate controls has also been explored in previous studies such as Li and Forsyth (2019) and Ni et al. (2022). These studies focus on portfolio optimization problems with long-only constraints. The neural network architecture proposed in these studies transforms the constrained portfolio optimization problems into unconstrained optimization problems, making them computationally easier to solve. However, these existing neural network architectures do not address the bounded leverage constraint, which limits the total long exposure in the portfolio. The limited literature on portfolio optimization with bounded leverage is likely due to the added complexity arising from the combination of long and short positions in the portfolio. A significant contribution of this article is the introduction of a novel leverage-feasible neural network (LFNN) model, which converts the leverage-constrained optimization problem into an unconstrained optimization problem. This model enables the incorporation of the bounded leverage constraint into the portfolio optimization framework. Additionally, in Section 2.8, we provide a mathematical proof that, under reasonable assumptions, the solution of the unconstrained optimization problem obtained using the LFNN model can approximate the optimal control of the original problem arbitrarily well. This mathematical justification validates the effectiveness and validity of the LFNN approach. In Section 3, we present a case study on active portfolio optimization in a high-inflation regime. To identify historical high-inflation periods, we employ a simple filtering method. Subsequently, we use bootstrap resampling to generate training and testing data sets, which consist of price paths for four assets: the equal-weighted and cap-weighted stock indexes, as well as the 30-day and 10-year U.S. treasury indexes. Using the leverage-feasible neural network (LFNN) model and the cumulative quadratic tracking shortfall (CS) objective, we derive a leverage-constrained strategy for portfolio optimization. Our results demonstrate that the LFNN model produces a strategy that consistently outperforms the fixed-mix benchmark. Specifically, the strategy achieves a median (annualized) internal rate of return (IRR) that is more than 2% higher than the benchmark. Moreover, there is a probability of over 90% that the strategy will yield a higher terminal wealth compared to the benchmark. 
These findings highlight the efficacy of the LFNN model in optimizing portfolios under high-inflation conditions. By incorporating the bounded leverage constraint and utilizing the CS objective, our approach enables investors to achieve superior performance and mitigate risks in a high-inflation environment. Our contributions are summarized below:

1. To gain intuition about the behavior of the optimal controls, we derive the closed-form solution under a jump-diffusion asset price model and other typical assumptions (such as continuous rebalancing) for a two-asset case. The closed-form solution provides important insights into the properties of the optimal control, as well as meaningful interpretations of the neural network models that approximate the controls.
2. We propose to represent the control directly by a neural network so that the stochastic optimal control problem can be solved numerically under realistic constraints such as discrete rebalancing and limited leverage. In particular, we propose the novel leverage-feasible neural network (LFNN) model to convert the original complex leverage-constrained optimization problem into an unconstrained optimization problem that can be solved easily by standard optimization methods.
3. We prove that, with a suitable choice of the hyperparameter of the LFNN model, the solution of the parameterized unconstrained optimization problem can approximate the optimal control arbitrarily well. This provides a mathematical justification for the validity of the LFNN approach, which is further supported by the numerical result that the performance of the LFNN model matches the clipped form of the closed-form solution on simulated data.
4. In the case study on active portfolio optimization in high inflation, we apply the neural network method to bootstrap resampled asset returns with four underlying assets: the equal-weighted/cap-weighted stock indexes and the 30-day/10-year treasury bond indexes. The dynamic strategy from the learned LFNN model outperforms the fixed-mix benchmark strategy consistently throughout the investment horizon, with a 2% higher median (annualized) internal rate of return (IRR) and a more than 90% probability of achieving a higher terminal wealth. Furthermore, the learned allocation strategy suggests that the equal-weighted stock index and short-term bonds are preferable investment assets during high-inflation regimes.

## 2 Outperform dynamic benchmark under bounded leverage

### Sovereign wealth funds and benchmark targets

Instead of taking a passive approach, some of the largest sovereign wealth funds often adopt an active management philosophy and use passive portfolios as the benchmark to evaluate the efficiency of active management. For example, the Canadian Pension Plan (CPP) uses a base reference portfolio of 85% global equity and 15% Canadian government bonds (CPP Investments, 2022). Another example is the Government Pension Fund Global of Norway (also known as the oil fund) managed by Norges Bank Investment Management (NBIM), which uses a benchmark index consisting of 70% equity index and 30% bond index.1 The benchmark equity index is constructed based on the market capitalization of equities in the countries included in the benchmark. The benchmark index for bonds specifies a defined allocation between government bonds and corporate bonds, with a weight of 70 percent to government bonds and 30 percent to corporate bonds (Norges Bank, 2022).
Footnote 1: The Ministry of Finance of Norway sets the allocation fraction between the equity index and the bond index. It gradually raised the weight for equities from 60% to 70% from 2015 to 2018.

However, the excess return that these well-known sovereign wealth funds have achieved over their respective passive benchmark portfolios cannot be described as impressive. In the 2022 fiscal year report, CPP claims to have beaten the base reference portfolio by an annualized 80 bps after fees over the past 5 years (CPP Investments, 2022). On the other hand, NBIM reports a mere average of 27 bps of annual excess return over the benchmark over the last decade (see Table 2.1). It is worth noting that these behemoth funds achieve these seemingly meager results by hiring thousands of highly paid investment professionals and spending billions of dollars on day-to-day operations. For example, the CPP 2021 annual report (CPP Investments, 2021) lists personnel costs as CAD 938 million for 1,936 employees, which translates to average costs of about CAD 500,000 per employee-year.

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
\hline Year & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 & 2020 & 2021 & Average \\
\hline Excess return (\%) & 0.21 & 0.99 & -0.77 & 0.45 & 0.15 & 0.70 & -0.30 & 0.23 & 0.27 & 0.74 & 0.27 \\
\hline
\end{tabular}
\end{table}
Table 2.1: Norges Bank Investment Management, relative return to benchmark portfolio

The stark contrast between the enormous spending of sovereign wealth funds and their meager outperformance relative to the passive benchmark portfolios is probably provocative to taxpayers and pensioners who invest their hard-earned money in the funds. Equally concerning is the potential of a long, persistent inflation regime and the funds' ability to consistently beat the benchmark portfolio in such times. After all, both CPP Investments and NBIM were established in the late 1990s, a decade after the last long inflation period ended in the mid-1980s. These concerns prompt us to ask the following question: in a presumed persistent high-inflation environment, can a fund manager find a simple asset allocation strategy that consistently beats the benchmark passive portfolios by a reasonable margin (preferably without spending billions of dollars in personnel costs)?

### 2.2 Mathematical formulation

In this section, we mathematically formulate the problem of outperforming a benchmark. Let \([t_{0}(=0),T]\) denote the investment horizon, and let \(W(t)\) denote the wealth (value) of the portfolio actively managed by the manager at time \(t\in[t_{0},T]\). We refer to the actively managed portfolio as the "active portfolio". Furthermore, let \(\hat{W}(t)\) denote the wealth of the benchmark portfolio at time \(t\in[t_{0},T]\). To ensure a fair assessment of the relative performance of the two portfolios, we assume both portfolios start with an equal initial wealth amount \(w_{0}>0\), i.e., \(W(t_{0})=\hat{W}(t_{0})=w_{0}>0\). Technically, the admissible sets of underlying assets for the active and passive portfolio need not be identical. However, for simplicity, we assume that both the active portfolio and the benchmark portfolio can allocate wealth to the same set of \(N_{a}\) assets. Let vector \(\mathbf{S}(t)=(S_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) denote the asset prices of the \(N_{a}\) underlying assets at time \(t\in[t_{0},T]\). 
In addition, let vectors \(\mathbf{p}(t)=(p_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) and \(\mathbf{\hat{p}}(t)=(\hat{p}_{i}(t):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) denote the fractions of wealth allocated to the \(N_{a}\) underlying assets at time \(t\in[t_{0},T]\) for the active portfolio and the benchmark portfolio, respectively. From a control theory perspective, the allocation vector \(\mathbf{p}\) can be regarded as the control of the system, as it determines how the wealth of the active portfolio evolves over time. We seek the optimal feedback control; in other words, the closed-loop controls (allocation decisions) are assumed to also depend on the values of the state variables (e.g., portfolio wealth). Therefore, we consider the control \(\mathbf{p}\) to be a function of time as well as the relevant state variables. In addition, the benchmark portfolio allocation \(\mathbf{\hat{p}}\) can be regarded as a known function of its state variables and time as well. Mathematically, \(\mathbf{p}(\mathbf{X}(t))=(p_{i}(\mathbf{X}(t)):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\) and \(\mathbf{\hat{p}}(\hat{\mathbf{X}}(t))=(\hat{p}_{i}(\hat{\mathbf{X}}(t)):i=1,\cdots,N_{a})^{\top}\in\mathbb{R}^{N_{a}}\), where \(\mathbf{X}(t)\in\mathcal{X}\subseteq\mathbb{R}^{N_{x}}\) and \(\hat{\mathbf{X}}(t)\in\hat{\mathcal{X}}\subseteq\mathbb{R}^{N_{x}}\) are the state variables taken into account by the active portfolio and the benchmark portfolio respectively. Here we include \(t\) in \(\mathbf{X}(t)\) and \(\hat{\mathbf{X}}(t)\) for notational simplicity. In this article, we consider the particular problem of outperforming a passive portfolio, in which \(\mathbf{X}(t)=\big(t,W(t),\hat{W}(t)\big)^{\top}\).

We assume that the active portfolio and the benchmark portfolio follow the same rebalancing schedule denoted by \(\mathcal{T}\subseteq[t_{0},T]\). In the case of discrete rebalancing, \(\mathcal{T}\subset[t_{0},T]\) is a discrete set. In the case of continuous rebalancing, \(\mathcal{T}=[t_{0},T]\), i.e., rebalancing happens continuously throughout the entire investment horizon. Additionally, we assume both portfolios follow the same deterministic sequence of cash injections, defined by the set \(\mathcal{C}=\{c(t),\;t\in\mathcal{T}_{c}\}\), where \(\mathcal{T}_{c}\subseteq[t_{0},T]\) is the schedule of the cash injections. When \(\mathcal{T}_{c}\) is a discrete injection schedule, \(c(t)\) is the amount of cash injected at \(t\). In the case of continuous cash injections, i.e., \(\mathcal{T}_{c}=[t_{0},T]\), \(c(t)\) is the rate of cash injection at \(t\), i.e., the total cash injection amount during \([t,t+dt]\) is \(c(t)dt\), where \(dt\) is an infinitesimal time interval. For simplicity, we assume that \(\mathcal{T}_{c}=\mathcal{T}\), so that the cash injection schedule coincides with the rebalancing schedule. At \(t\in\mathcal{T}\), \(W(t)\) and \(\hat{W}(t)\) always denote the wealth after the cash injection (assuming a cash injection event happens at \(t\)).

The active and benchmark strategies, respectively, are defined as the sequences of allocation fractions following the rebalancing schedule. Mathematically, the active and benchmark strategies are defined by the sets \[\mathcal{P}=\{\mathbf{p}(\mathbf{X}(t)),\;t\in\mathcal{T}\},\quad\text{and}\quad\hat{\mathcal{P}}=\{\mathbf{\hat{p}}(\hat{\mathbf{X}}(t)),\;t\in\mathcal{T}\}. 
\tag{2.1}\] Denote \(\mathcal{A}\) as the set of admissible strategies, which reflects the investment constraints on the controls. We assume that admissibility can vary with the state, and let \(\{\mathcal{X}_{i}:\,i=1,\cdots,k\}\) be a partition of \(\mathcal{X}\) (the state variable space), i.e.

\[\left\{\begin{aligned} &\bigcup_{i=1}^{k}\mathcal{X}_{i}=\mathcal{X},\\ &\mathcal{X}_{i}\bigcap\mathcal{X}_{j}=\varnothing,\;\forall 1\leq i<j\leq k,\end{aligned}\right. \tag{2.2}\]

and let \(\{\mathcal{Z}_{i}\subseteq\mathbb{R}^{N_{a}}:\,i=1,\cdots,k\}\) be the corresponding value sets of feasible controls, such that any feasible control \(\mathbf{p}\) satisfies

\[\mathbf{p}(\mathbf{x})\in\mathcal{Z}_{i},\;\forall\mathbf{x}\in\mathcal{X}_{i},\;\forall i\in\{1,\cdots,k\}. \tag{2.3}\]

We say that strategy \(\mathcal{P}\) is an admissible strategy, i.e., \(\mathcal{P}\in\mathcal{A}\), if and only if

\[\mathcal{P}=\Big\{\mathbf{p}(\mathbf{X}(t)),\;t\in\mathcal{T}\;\Big|\;\mathbf{p}(\mathbf{X}(t))\in\mathcal{Z}_{i},\;\text{if}\;\mathbf{X}(t)\in\mathcal{X}_{i}\Big\}. \tag{2.4}\]

Consider a discrete rebalancing schedule \(\mathcal{T}=\{t_{j},\ j=0,\cdots,N\}\) with \(N\) rebalancing events, where \(t_{0}<t_{1}<\cdots<t_{N}=T\).2 Then, the wealth evolution of the active portfolio and the benchmark portfolio can be described by the equations

Footnote 2: Technically, at \(t=t_{0}\), the manager makes the initial asset allocation, rather than a “rebalancing” of the portfolio. However, despite the different purposes, a rebalancing of the portfolio is simply a new allocation of the portfolio wealth. Therefore, for notational simplicity, we include \(t_{0}\) in the rebalancing schedule.

\[\left\{\begin{aligned} W(t_{j+1})&=\Big(1+\sum\limits_{i=1}^{N_{a}}p_{i}(\mathbf{X}(t_{j}))\cdot\frac{S_{i}(t_{j+1})-S_{i}(t_{j})}{S_{i}(t_{j})}\Big)W(t_{j})+c(t_{j+1}),\ j=0,\cdots,N-1,\\ \hat{W}(t_{j+1})&=\Big(1+\sum\limits_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\mathbf{X}}(t_{j}))\cdot\frac{S_{i}(t_{j+1})-S_{i}(t_{j})}{S_{i}(t_{j})}\Big)\hat{W}(t_{j})+c(t_{j+1}),\ j=0,\cdots,N-1.\end{aligned}\right. \tag{2.5}\]

In the continuous rebalancing case, \(\mathcal{T}=[t_{0},T]\). Let \(dS_{i}(t)\) denote the instantaneous change in price for asset \(i\), \(i\in\{1,\cdots,N_{a}\}\).3 Then, at \(t\in\mathcal{T}=[t_{0},T]\), the wealth dynamics of the active portfolio and the benchmark portfolio, following their respective strategies \(\mathcal{P}\) and \(\hat{\mathcal{P}}\), can be described by the equations

Footnote 3: For illustration purposes, here we assume \(S_{i}(t),i\in\{1,\cdots,N_{a}\}\) follow standard diffusion processes, i.e., no jumps. We will discuss the case with jumps in detail in Section 2.4.

\[\left\{\begin{aligned} dW(t)&=\Big(\sum\limits_{i=1}^{N_{a}}p_{i}(\mathbf{X}(t))\cdot\frac{dS_{i}(t)}{S_{i}(t)}\Big)W(t)+c(t)dt,\\ d\hat{W}(t)&=\Big(\sum\limits_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\mathbf{X}}(t))\cdot\frac{dS_{i}(t)}{S_{i}(t)}\Big)\hat{W}(t)+c(t)dt.\end{aligned}\right. \tag{2.6}\]

Let the sets \(\mathcal{W}_{\mathcal{P}}=\{W(t),t\in\mathcal{T}\}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}=\{\hat{W}(t),t\in\mathcal{T}\}\) denote the wealth trajectories of the active portfolio and the benchmark portfolio following their respective investment strategies \(\mathcal{P}\) and \(\hat{\mathcal{P}}\). 
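To make the discrete dynamics (2.5) concrete, the following minimal Python sketch rolls both wealth recursions forward. It is an illustration only: the random returns, the 70/30 benchmark, and the deliberately naive active rule are assumptions for demonstration, not data or strategies from this article.

```python
import numpy as np

def simulate_wealth(returns, p_fn, p_hat, w0=100.0, c=1.0):
    """Roll the discrete wealth dynamics (2.5) forward for both portfolios.

    returns : (N, N_a) array of per-period net returns,
              returns[j, i] = (S_i(t_{j+1}) - S_i(t_j)) / S_i(t_j)
    p_fn    : feedback control, callable (j, W, W_hat) -> allocation fractions
    p_hat   : fixed-mix benchmark allocation fractions (sum to 1)
    w0, c   : common initial wealth and per-period cash injection
    """
    N = returns.shape[0]
    W, W_hat = np.empty(N + 1), np.empty(N + 1)
    W[0] = W_hat[0] = w0
    for j in range(N):
        p = p_fn(j, W[j], W_hat[j])                       # p(X(t_j))
        W[j + 1] = (1.0 + returns[j] @ p) * W[j] + c
        W_hat[j + 1] = (1.0 + returns[j] @ p_hat) * W_hat[j] + c
    return W, W_hat

# Illustration: two assets, a 70/30 benchmark, and a naive active rule
# that always holds 80% in the first asset.
rng = np.random.default_rng(0)
rets = rng.normal([0.005, 0.002], [0.04, 0.01], size=(120, 2))
W, W_hat = simulate_wealth(rets, lambda j, w, wh: np.array([0.8, 0.2]),
                           np.array([0.7, 0.3]))
```

The same loop applies to any feedback rule \(\mathbf{p}(\cdot)\), including the neural-network controls introduced later in this section.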
Let \(F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\in\mathbb{R}\) denote an investment metric that measures the performance of the active strategy relative to the benchmark strategy, based on their respective wealth trajectories. In this article, we assume that the asset prices \(\mathbf{S}(t)\in\mathbb{R}^{N_{a}}\) are stochastic. Then, the wealth trajectories \(\mathcal{W}_{\mathcal{P}}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}\) are also stochastic, as is the performance metric \(F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\). Therefore, when investment managers aim to optimize an investment metric, they typically evaluate the expectation of the random metric. Let \(\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}[F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})]\) denote the expectation of the value of the performance metric \(F\), with respect to a given initial wealth \(w_{0}=W(0)=\hat{W}(0)\) at time \(t_{0}=0\), following an admissible investment strategy \(\mathcal{P}\in\mathcal{A}\) and the benchmark investment strategy \(\hat{\mathcal{P}}\). Since the benchmark strategy is often pre-determined and known, we keep the benchmark strategy \(\hat{\mathcal{P}}\) implicit in this notation for simplicity. Subsequently, we use \(\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}[F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})]\), the expectation of a desired performance metric, as the _(investment) objective function_ and solve

\[\text{(Optimization problem):}\quad\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\big[F(\mathcal{W}_{\mathcal{P}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big]. \tag{2.7}\]

### 2.3 Choice of investment objective

The first step in designing a proper outperformance investment objective is to clarify the definition of _beating the benchmark_. In the context of measuring the performance of the portfolio against the benchmark, a common metric is the tracking error, which measures the volatility of the difference in returns over the investment horizon, i.e.,

\[\text{Tracking error}=stdev(R-\hat{R}), \tag{2.8}\]

where \(R\) denotes the return of the active portfolio, and \(\hat{R}\) denotes the return of the benchmark portfolio. Note that the returns of the active portfolio and the benchmark portfolio are determined from their respective wealth trajectories (\(\mathcal{W}_{\mathcal{P}}\) and \(\hat{\mathcal{W}}_{\hat{\mathcal{P}}}\)) evaluated under the same investment horizon and the same market conditions. A criticism of the tracking error is that it only measures the variability of the difference in returns, but does not reflect the magnitude of the return difference itself. For example, an active strategy with a constant negative return difference over the investment horizon would yield a better tracking error than an active strategy with a positive but volatile return difference. For this reason, many prefer the tracking difference (Johnson et al., 2013; Hougan, 2015; Charteris and McCullough, 2020; Boyde, 2021), which is defined as the annualized difference between the active portfolio's cumulative return and the benchmark portfolio's cumulative return over a specific period. 
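For reference, both metrics are straightforward to compute from realized per-period returns. A minimal sketch follows; the monthly frequency and the annualization conventions are illustrative assumptions, not specifications from this article.

```python
import numpy as np

def tracking_error(R, R_hat, periods_per_year=12):
    """Tracking error (2.8): volatility of the per-period return difference."""
    return np.std(R - R_hat, ddof=1) * np.sqrt(periods_per_year)

def tracking_difference(R, R_hat, periods_per_year=12):
    """Annualized cumulative return of the active portfolio minus that
    of the benchmark, over the common evaluation period."""
    years = len(R) / periods_per_year
    ann = np.prod(1 + R) ** (1 / years) - 1
    ann_hat = np.prod(1 + R_hat) ** (1 / years) - 1
    return ann - ann_hat
```

Under these conventions, a strategy with a steady negative return gap has a small tracking error but a negative tracking difference, which is exactly the criticism noted above.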
Note that both the tracking error and the tracking difference measure the return difference of the active portfolio over the benchmark portfolio. In other words, these metrics measure how closely the return of the active portfolio tracks the return of the benchmark portfolio. In practice, if an investment manager aims to achieve a certain annualized relative return target, e.g., \(\beta\), then the tracking difference metric may not be appropriate. To address this, van Staden et al. (2022) suggest the investment objective

\[\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Big[\Big(W(T)-e^{\beta T}\hat{W}(T)\Big)^{2}\Big], \tag{2.9}\]

where \(W(T)\) and \(\hat{W}(T)\) are the respective terminal wealth of the active portfolio and the benchmark portfolio at terminal time \(T\), and \(\beta\) is the annualized relative return target. The optimal control problem (2.9) aims to produce an active strategy that minimizes the quadratic difference between \(W(T)\) and the terminal portfolio value target \(e^{\beta T}\hat{W}(T)\). In other words, the optimal control policy tries to outperform the benchmark portfolio by a total factor of \(e^{\beta T}\) over the time horizon \([0,T]\), which is equivalent to an annualized relative return of \(\beta\). The quadratic term of the difference incentivizes the terminal wealth of the active portfolio \(W(T)\) to closely track the _elevated target_ \(e^{\beta T}\hat{W}(T)\). It is worth noting that the relative return target \(\beta\) can be intuitively interpreted as the manager's willingness to take on more risk. As \(\beta\downarrow 0\), the optimal solution to problem (2.9) is simply to mimic the benchmark strategy. However, as \(\beta\) grows larger, the manager needs to take on more risk (for more return) in order to beat the benchmark portfolio by the relative return target rate.

A criticism of the investment objective (2.9) is that it is symmetrical in terms of the outperformance and underperformance of \(W(T)\) relative to the elevated target \(e^{\beta T}\hat{W}(T)\). This is a common issue for volatility-based measures, such as the Sharpe ratio (Ziemba, 2005). In practice, investors may favor outperformance more than underperformance, while still aiming to track the elevated target closely. Acknowledging this, instead of (2.9), Ni et al. (2022) propose the following asymmetrical objective function,

\[\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg[\Big(\min\big(W(T)-e^{\beta T}\hat{W}(T),0\big)\Big)^{2}+\max\big(W(T)-e^{\beta T}\hat{W}(T),0\big)\Bigg]. \tag{2.10}\]

The investment objective (2.10) penalizes the outperformance (of \(W(T)\) relative to the elevated target \(e^{\beta T}\hat{W}(T)\)) linearly but the underperformance quadratically, thus encouraging the optimal policy to favor outperformance over underperformance when necessary. Note that the use of objective function (2.10) does not permit a closed-form solution, and machine learning techniques are used (Ni et al., 2022) to compute the desired optimal strategy numerically.

Another criticism of the investment objectives (2.9) and (2.10) is that both are only concerned with the relative performance at the terminal time \(T\). In reality, investment managers are often required to report intermediate portfolio performance internally or externally at regular time intervals. 
Instead of achieving the annualized relative return target only when the portfolio performance is reviewed at the end of the investment horizon, managers may want to consistently achieve the relative return target throughout the entire investment horizon. In this case, managers may need an investment objective function that controls the deviation of the wealth of the portfolio from the target along a market scenario within the investment horizon. Consequently, van Staden et al. (2022) propose the following cumulative quadratic tracking difference (CD) objectives

\[(CD(\beta)):\quad\left\{\begin{aligned} &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg[\int_{t_{0}}^{T}\Big(W(t)-e^{\beta t}\hat{W}(t)\Big)^{2}dt\Bigg],&&\text{if }\mathcal{T}=[t_{0},T],&&(2.11)\\ &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg[\sum_{t\in\mathcal{T}\cup\{T\}}\Big(W(t)-e^{\beta t}\hat{W}(t)\Big)^{2}\Bigg],&&\text{if }\mathcal{T}\subseteq[t_{0},T],\;\mathcal{T}\text{ discrete.}&&(2.12)\end{aligned}\right.\]

Here, objective (2.11) is for the continuous rebalancing case, and (2.12) for discrete rebalancing. Both (2.11) and (2.12) measure the cumulative deviation of the wealth of the active portfolio relative to the target, along a market scenario within the entire investment horizon. Therefore, they measure the intermediate performance deviations effectively. However, similar to (2.9), (2.11) and (2.12) penalize outperformance and underperformance symmetrically. Therefore, we also consider the following cumulative quadratic shortfall (CS) objectives that only penalize the shortfall (underperformance with respect to the target)

\[(CS(\beta)):\quad\left\{\begin{aligned} &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg[\int_{t_{0}}^{T}\Big(\min\big(W(t)-e^{\beta t}\hat{W}(t),0\big)\Big)^{2}dt+\epsilon W(T)\Bigg],&&\text{if }\mathcal{T}=[t_{0},T],&&(2.13)\\ &\inf_{\mathcal{P}\in\mathcal{A}}\mathbb{E}_{\mathcal{P}}^{(t_{0},w_{0})}\Bigg[\sum_{t\in\mathcal{T}\cup\{T\}}\Big(\min\big(W(t)-e^{\beta t}\hat{W}(t),0\big)\Big)^{2}+\epsilon W(T)\Bigg],&&\text{if }\mathcal{T}\subseteq[t_{0},T],\;\mathcal{T}\text{ discrete.}&&(2.14)\end{aligned}\right.\]

Here (2.13) and (2.14) are the investment objectives for the continuous rebalancing and discrete rebalancing cases, respectively. \(\epsilon\) is a small regularization parameter that ensures that problems (2.13) and (2.14) are well-posed. A more detailed comparison of the CD and CS objective functions can be found in Appendix H.

### 2.4 Closed-form solution for CD problem

In this section, we present the closed-form solution to the CD problem (2.11) under several assumptions. The closed-form solution not only provides insights for understanding the CD-optimal controls for problem (2.11), but also serves as a baseline for understanding the numerical results derived from the neural network method (discussed in later sections). Specifically, in this section, we consider the case in which all asset prices follow jump-diffusion processes and the portfolios receive cash injections, aspects that are not frequently considered in the benchmark outperformance literature (Browne, 1999, 2000; Tepla, 2001; Basak et al., 2006; Yao et al., 2006; Zhao, 2007; Davis and Lleo, 2008; Lim and Wong, 2010b; Oderda, 2015; Zhang and Gao, 2017; Al-Aradi and Jaimungal, 2018b; Nicolosi et al., 2018; Bo et al., 2021). 
We first summarize the assumptions for obtaining the closed-form solution to the CD problem (2.11).

**Assumption 2.1**.: _(Two assets, no friction, unlimited leverage, trading in insolvency, constant rate of cash injection) The active portfolio and the benchmark portfolio have access to two underlying assets, a stock index and a constant-maturity bond index. Both portfolios are rebalanced continuously, i.e., \(\mathcal{T}=[t_{0},T]\). There are no transaction costs and no leverage limit. Furthermore, we assume that trading continues in the event of insolvency, i.e., when \(W(t)<0\) for some \(t\in[t_{0},T]\). Finally, we assume both portfolios receive constant cash injections with an injection rate of \(c\), which means that during any time interval \([t,t+\Delta t]\subseteq[t_{0},T],\forall\Delta t>0\), both portfolios receive a cash injection amount of \(c\Delta t\)._

**Remark 2.1**.: (Remark on Assumption 2.1) For illustration purposes, we assume only two underlying assets. However, the technique for deriving the closed-form solution can be extended to multiple assets. We remark that unlimited leverage is unrealistic, and is only assumed for deriving the closed-form solution. In Appendix C.1, we discuss the technique for handling the leverage constraint in more detail. We also acknowledge that it is not realistic to assume that the manager can continue to trade and borrow when insolvent. However, this assumption is typically required for obtaining closed-form solutions; see Zhou and Li (2000); Li and Ng (2000) for the case of a multi-period mean-variance asset allocation problem. Appendix C.1 also contains more discussion of the impact of insolvency and its handling in experiments.

**Assumption 2.2**.: _(Fixed-mix benchmark strategy) We assume that the benchmark strategy is a fixed-mix strategy (also known as a constant weight strategy). The benchmark always allocates a constant fraction \(\hat{\varrho}\in\mathbb{R}\) of the portfolio wealth to the stock index, and a constant fraction \(1-\hat{\varrho}\) to the bond index. Let \(\hat{\boldsymbol{\varrho}}=(\hat{\varrho},1-\hat{\varrho})^{\top}\in\mathbb{R}^{2}\) denote the vector of allocation fractions to the stock index and the bond index; the benchmark strategy is then the fixed-mix strategy defined by \(\hat{\mathcal{P}}=\{\hat{\boldsymbol{p}}(\hat{\boldsymbol{X}}(t))=\big(\hat{p}_{1}(\hat{\boldsymbol{X}}(t)),\hat{p}_{2}(\hat{\boldsymbol{X}}(t))\big)^{\top}\equiv\hat{\boldsymbol{\varrho}},\;\forall t\in\mathcal{T}\}\)._

Finally, we assume that the stock index price and the bond index price follow the jump-diffusion processes described below.

**Assumption 2.3**.: _(Jump-diffusion processes) Let \(S_{1}(t)\) and \(S_{2}(t)\) denote the deflated (adjusted by inflation) prices of the stock index and the bond index at time \(t\in[t_{0},T]\). We assume \(S_{i}(t),\;i\in\{1,2\}\), follow the jump-diffusion processes_

\[\frac{dS_{i}(t)}{S_{i}(t^{-})}=(\mu_{i}-\lambda_{i}\kappa_{i}+r_{i}\cdot\boldsymbol{1}_{S_{i}(t^{-})<0})dt+\sigma_{i}dZ_{i}(t)+d\Big(\sum_{k=1}^{\pi_{i}(t)}(\xi_{i}^{(k)}-1)\Big),\;i=1,2. \tag{2.15}\]

_Here \(\mu_{i}\) is the (uncompensated) drift rate, \(\sigma_{i}\) is the diffusive volatility, and \(Z_{1}(t),Z_{2}(t)\) are correlated Brownian motions with \(\mathbb{E}[dZ_{1}(t)\cdot dZ_{2}(t)]=\rho dt\). \(r_{i}\) is the borrowing premium when \(S_{i}(t^{-})\) is negative.4\(\pi_{i}(t)\) is a Poisson process with positive intensity parameter \(\lambda_{i}\). \(\{\xi_{i}^{(k)},\;k=1,\cdots,\pi_{i}(t)\}\) are i.i.d. 
positive random variables that describe the jump multipliers associated with the assets. If a jump occurs for asset \(i\) at time \(t\in(t_{0},T]\), its underlying price jumps from \(S_{i}(t^{-})\) to \(S_{i}(t)=\xi_{i}\cdot S_{i}(t^{-})\).5\(\kappa_{i}=\mathbb{E}[\xi_{i}-1]\). \(\xi_{i}\) and \(\pi_{i}(t)\) are independent of each other. Moreover, \(\pi_{1}(t)\) and \(\pi_{2}(t)\) are assumed to be independent.6_

Footnote 4: Intuitively, there is a premium for shorting an asset. In the closed-form solution derivation, we assume \(r_{i}=0\).

Footnote 5: For any functional \(\psi(t)\), we use the notation \(\psi(t^{-})\) as shorthand for the left-sided limit \(\psi(t^{-})=\lim_{\Delta t\downarrow 0}\psi(t-\Delta t)\).

Footnote 6: See Forsyth (2020) for the discussion on the empirical evidence for stock-bond jump independence. Also note that the assumption of independent jumps can be relaxed without technical difficulty if needed (Kou, 2002), but this would significantly increase the complexity of the notation.

**Remark 2.2**.: (Motivation for jump-diffusion model) The assumption that the stock index price follows a jump-diffusion model is common in the financial mathematics literature (Merton, 1976; Kou, 2002). In addition, we follow the practitioner approach and directly model the returns of the constant-maturity bond index as a stochastic process; see for example Lin et al. (2015); MacMinn et al. (2014). As in MacMinn et al. (2014), we also assume that the constant-maturity bond index follows a jump-diffusion process. During high-inflation regimes, central banks often make rate hikes to curb inflation, which causes sudden jumps in bond prices (Lahaye et al., 2011). We believe this is an appropriate assumption for bonds in high-inflation regimes.

Under the jump-diffusion model (2.15), the wealth processes of the active portfolio and the benchmark portfolio are

\[\left\{\begin{aligned} dW(t)&=\Big(\sum\limits_{i=1}^{N_{a}}p_{i}(\boldsymbol{X}(t^{-}))\cdot\frac{dS_{i}(t)}{S_{i}(t^{-})}\Big)W(t^{-})+cdt,\\ d\hat{W}(t)&=\Big(\sum\limits_{i=1}^{N_{a}}\hat{p}_{i}(\hat{\boldsymbol{X}}(t^{-}))\cdot\frac{dS_{i}(t)}{S_{i}(t^{-})}\Big)\hat{W}(t^{-})+cdt,\end{aligned}\right. \tag{2.16}\]

where \(t\in(t_{0},T]\), \(W(t_{0})=\hat{W}(t_{0})=w_{0}\), and \(\boldsymbol{X}(t^{-})=(t,W(t^{-}),\hat{W}(t^{-}))^{\top}\in\mathbb{R}^{3}\) is the state variable vector. We now derive the closed-form solution of the CD problem (2.11) under Assumptions 2.1, 2.2 and 2.3. We first present the verification theorem for the Hamilton-Jacobi-Bellman (HJB) partial integro-differential equation (PIDE) satisfied by the value function and the optimal control of the CD problem (2.11).

**Theorem 2.1**.: _(Verification theorem for CD problem (2.11)) For a fixed \(\beta>0\), assume that for all \((t,w,\hat{w},\hat{\varrho})\in[t_{0},T]\times\mathbb{R}^{3}\), there exist functions \(V(t,w,\hat{w},\hat{\varrho}):[t_{0},T]\times\mathbb{R}^{3}\mapsto\mathbb{R}\) and \(\boldsymbol{p}^{\star}(t,w,\hat{w},\hat{\varrho}):[t_{0},T]\times\mathbb{R}^{3}\mapsto\mathbb{R}^{2}\) that satisfy the following two properties: (i) \(V\) and \(\boldsymbol{p}^{\star}\) are sufficiently smooth and solve the HJB PIDE (2.17), and (ii) the function \(\boldsymbol{p}^{\star}(t,w,\hat{w},\hat{\varrho})\) attains the pointwise infimum in (2.17) below_

\[\left\{\begin{aligned} &\frac{\partial V}{\partial t}+(w-e^{\beta t}\hat{w})^{2}+\inf_{\boldsymbol{p}\in\mathbb{R}^{2}}H(\boldsymbol{p};t,w,\hat{w},\hat{\boldsymbol{\varrho}})=0,\\ &V(T,w,\hat{w},\hat{\varrho})=0,\end{aligned}\right. 
\tag{2.17}\]

_where_

\[\begin{aligned} H(\mathbf{p};t,w,\hat{w},\hat{\boldsymbol{\varrho}})=&\ \big(w\cdot\boldsymbol{\alpha}^{\top}\mathbf{p}+c\big)\cdot\frac{\partial V}{\partial w}+\big(\hat{w}\cdot\boldsymbol{\alpha}^{\top}\hat{\boldsymbol{\varrho}}+c\big)\cdot\frac{\partial V}{\partial\hat{w}}-\Big(\sum_{i}\lambda_{i}\Big)\cdot V(t,w,\hat{w},\hat{\varrho})\\ &+\frac{w^{2}}{2}\cdot\big(\mathbf{p}^{\top}\boldsymbol{\Sigma}\mathbf{p}\big)\cdot\frac{\partial^{2}V}{\partial w^{2}}+\frac{\hat{w}^{2}}{2}\cdot\big(\hat{\boldsymbol{\varrho}}^{\top}\boldsymbol{\Sigma}\hat{\boldsymbol{\varrho}}\big)\cdot\frac{\partial^{2}V}{\partial\hat{w}^{2}}+w\hat{w}\cdot\big(\mathbf{p}^{\top}\boldsymbol{\Sigma}\hat{\boldsymbol{\varrho}}\big)\cdot\frac{\partial^{2}V}{\partial w\partial\hat{w}}\\ &+\sum_{i}\lambda_{i}\int_{0}^{\infty}V\big(t,w+p_{i}w(\xi-1),\hat{w}+\hat{p}_{i}\hat{w}(\xi-1),\hat{\varrho}\big)f_{\xi_{i}}(\xi)d\xi.\end{aligned} \tag{2.18}\]

_Here \(\boldsymbol{\alpha}=(\mu_{1}-\lambda_{1}\kappa_{1},\mu_{2}-\lambda_{2}\kappa_{2})^{\top}\) is the vector of (compensated) drift rates, \(\boldsymbol{\Sigma}=\begin{bmatrix}\sigma_{1}^{2}&\rho\sigma_{1}\sigma_{2}\\ \rho\sigma_{1}\sigma_{2}&\sigma_{2}^{2}\end{bmatrix}\) is the covariance matrix, and \(f_{\xi_{i}}\) is the density function of \(\xi_{i}\)._

_Then, under Assumptions 2.1, 2.2 and 2.3, \(V\) is the value function and \(\boldsymbol{p}^{*}\) is the optimal control for the CD problem (2.11)._

Proof.: See Appendix A.1.

Define the auxiliary variables

\[\left\{\begin{aligned} &\kappa_{i}^{(2)}=\mathbb{E}\big[(\xi_{i}-1)^{2}\big],\quad(\sigma_{i}^{(2)})^{2}=(\sigma_{i})^{2}+\lambda_{i}\kappa_{i}^{(2)},\;i\in\{1,2\},\\ &\vartheta=\sigma_{1}\sigma_{2}\rho-(\sigma_{2}^{(2)})^{2},\quad\gamma=(\sigma_{1}^{(2)})^{2}+(\sigma_{2}^{(2)})^{2}-2\sigma_{1}\sigma_{2}\rho,\\ &\phi=\frac{(\mu_{1}-\mu_{2})(\mu_{1}-\mu_{2}+\vartheta)}{\gamma},\quad\eta=\frac{(\mu_{1}-\mu_{2}+\vartheta)^{2}}{\gamma}-(\sigma_{2}^{(2)})^{2},\end{aligned}\right. \tag{2.19}\]

then we have the following proposition regarding the optimal control of problem (2.11).

**Proposition 2.1**.: _(CD-optimal control) Suppose Assumptions 2.1, 2.2 and 2.3 hold; then the optimal fraction of the wealth of the active portfolio invested in the stock index for the \(CD(\beta)\) problem (2.11) is given by \(p^{*}(t,w,\hat{w},\hat{\varrho})\in\mathbb{R}\), where_

\[p^{*}(t,w,\hat{w},\hat{\varrho})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2})}{\gamma}h(t;\beta,c)+\frac{(\mu_{1}-\mu_{2}+\vartheta)}{\gamma}\Big(g(t;\beta)\hat{W}(t)-W^{*}(t)\Big)+g(t;\beta)\hat{W}(t)\cdot\hat{\varrho}\Bigg]. \tag{2.20}\]

_Here \(W^{*}(t)\) denotes the wealth process of the active portfolio from (2.6) following the control \(\boldsymbol{p}^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho})=\Big(p^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho}),1-p^{*}(t,W^{*}(t),\hat{W}(t),\hat{\varrho})\Big)^{\top}\), where \(p^{*}\) is the optimal stock allocation described in (2.20), and \(\hat{W}(t)\) is the wealth process of the benchmark portfolio following the fixed-mix strategy described in Assumption 2.2._ 
_Here, \(h\) and \(g\) are deterministic functions of time,_

\[g(t;\beta)=-\frac{D(t;\beta)}{2A(t)},\qquad h(t;\beta,c)=-\frac{B(t;\beta,c)}{2A(t)}, \tag{2.21}\]

_where \(A,D\) and \(B\) are deterministic functions defined as_

\[A(t)=\frac{e^{(2\mu_{2}-\eta)(T-t)}-1}{(2\mu_{2}-\eta)},\qquad D(t;\beta)=2e^{\beta T}\Big(\frac{e^{-\beta(T-t)}-e^{(2\mu_{2}-\eta)(T-t)}}{2\mu_{2}-\eta+\beta}\Big), \tag{2.22}\]

_and_

\[\begin{aligned} B(t;\beta,c)&=\frac{2c}{2\mu_{2}-\eta}\Big(\frac{e^{(2\mu_{2}-\eta)(T-t)}-e^{(\mu_{2}-\phi)(T-t)}}{\mu_{2}+\phi-\eta}-\frac{e^{(\mu_{2}-\phi)(T-t)}-1}{\mu_{2}-\phi}\Big)\\ &+\frac{2ce^{\beta T}}{2\mu_{2}-\eta+\beta}\Big(\frac{e^{(\mu_{2}-\phi)(T-t)}-e^{-\beta(T-t)}}{\mu_{2}-\phi+\beta}-\frac{e^{(2\mu_{2}-\eta)(T-t)}-e^{(\mu_{2}-\phi)(T-t)}}{\mu_{2}+\phi-\eta}\Big).\end{aligned} \tag{2.23}\]

Proof.: See Appendix A.2.

#### 2.4.1 Insights from CD-optimal control

The CD-optimal control (2.20) provides insights into the behaviour of the optimal allocation policy. For ease of exposition, we first establish the following properties of \(g(t;\beta)\) and \(h(t;\beta,c)\).

**Corollary 2.1**.: _(Properties of \(g(t;\beta)\)) The function \(g(t;\beta)\) defined in (2.21) has the following properties for \(t\in[t_{0},T]\) and \(\beta>0\):_

1. _For fixed_ \(t\in[t_{0},T]\)_,_ \(g(t;\beta)\) _is strictly increasing in_ \(\beta\in(0,\infty)\)_._

2. _For fixed_ \(\beta>0\)_,_ \(g(t;\beta)\) _is strictly increasing in_ \(t\in[t_{0},T]\)_._

3. \(g(t;\beta)\) _admits the following bounds:_
\[e^{\beta t}\leq g(t;\beta)\leq e^{\beta T}. \tag{2.24}\]

Proof.: See Appendix A.3.

**Corollary 2.2**.: _(Properties of \(h(t;\beta,c)\)) The function \(h(t;\beta,c)\) defined in (2.21) has the following properties for \(t\in[t_{0},T]\), \(\beta>0\) and \(c\geq 0\):_

1. _For fixed_ \(t\in[t_{0},T]\) _and_ \(c>0\)_,_ \(h(t;\beta,c)\) _is strictly increasing in_ \(\beta\in(0,\infty)\)_._

2. \(h(t;\beta,c)\geq 0\)_,_ \(\forall(t,\beta,c)\in[t_{0},T]\times(0,\infty)\times[0,\infty)\)_._

3. _For fixed_ \(t\in[t_{0},T]\) _and_ \(\beta>0\)_,_ \(h(t;\beta,c)\) _is strictly increasing in_ \(c\in[0,\infty)\)_, with_ \(h(t;\beta,0)\equiv 0\)_. Moreover,_ \(h(t;\beta,c)\propto c\)_, i.e.,_ \(h(t;\beta,c)\) _is proportional to_ \(c\)_._

Proof.: See Appendix A.3.

In order to analyze the closed-form solution, we make the following assumption.

**Assumption 2.4**.: _(Drift rates of the two assets) We assume that the drift rates of the stock index and the bond index, \(\mu_{1}\) and \(\mu_{2}\), satisfy_

\[\mu_{1}-\mu_{2}>0,\quad\mu_{1}-\mu_{2}+\vartheta>0, \tag{2.25}\]

_where \(\vartheta\) is defined in (2.19)._

**Remark 2.3**.: (Remark on drift rate assumptions) The first inequality \(\mu_{1}-\mu_{2}>0\) indicates that the stock index has a higher drift rate than the bond index, which is a standard assumption.7 The second inequality \(\mu_{1}-\mu_{2}+\vartheta>0\) is also practically reasonable: \(\vartheta\) is a variance term that is usually on a smaller scale than the drift rates, so in reality it is unlikely that \(\mu_{1}-\mu_{2}>0\) but \(\mu_{1}-\mu_{2}+\vartheta\leq 0\).8

Footnote 7: In fact, in this two-asset case, this assumption does not cause loss of generality.

Footnote 8: For reference, based on the jump-diffusion model (2.15) calibrated to historical high-inflation regimes, \(\mu_{1}=0.051,\mu_{2}=-0.014,\vartheta=-0.00024\), and thus both inequalities are satisfied.

Now we proceed to summarize the insights from the CD-optimal control (2.20). 
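Before doing so, we note that the control (2.20) is straightforward to evaluate numerically once the constants in (2.19) and the horizon parameters are specified. The following minimal Python sketch of (2.20)-(2.23) is an illustration only; it is not code from this article, all parameter values must be supplied by the user, and the denominators are assumed to be nonzero.

```python
import numpy as np

def cd_optimal_control(t, W, W_hat, varrho_hat,
                       mu1, mu2, gamma, vartheta, eta, phi, beta, c, T):
    """CD-optimal stock allocation p*(t, w, w_hat, varrho_hat) from (2.20).

    gamma, vartheta, eta, phi are the auxiliary constants in (2.19);
    beta is the relative return target, c the cash injection rate.
    """
    a = 2 * mu2 - eta
    A = (np.exp(a * (T - t)) - 1) / a                                  # (2.22)
    D = 2 * np.exp(beta * T) * (np.exp(-beta * (T - t))
                                - np.exp(a * (T - t))) / (a + beta)    # (2.22)
    B = (2 * c / a) * ((np.exp(a * (T - t)) - np.exp((mu2 - phi) * (T - t)))
                       / (mu2 + phi - eta)
                       - (np.exp((mu2 - phi) * (T - t)) - 1) / (mu2 - phi)) \
        + (2 * c * np.exp(beta * T) / (a + beta)) \
        * ((np.exp((mu2 - phi) * (T - t)) - np.exp(-beta * (T - t)))
           / (mu2 - phi + beta)
           - (np.exp(a * (T - t)) - np.exp((mu2 - phi) * (T - t)))
           / (mu2 + phi - eta))                                        # (2.23)
    g = -D / (2 * A)                                                   # (2.21)
    h = -B / (2 * A)                                                   # (2.21)
    return ((mu1 - mu2) / gamma * h
            + (mu1 - mu2 + vartheta) / gamma * (g * W_hat - W)
            + g * W_hat * varrho_hat) / W                              # (2.20)
```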
The first obvious observation is that the CD-optimal control is a contrarian strategy. This can be seen from the fact that, fixing time and the wealth of the benchmark portfolio \(\hat{W}(t)\), the allocation to the riskier stock index decreases when the wealth of the active portfolio \(W^{*}(t)\) increases. If we take a deeper look at (2.20), we can see that the CD-optimal control consists of two components: a cash injection component \(p^{*}_{cash}\) and a tracking component \(p^{*}_{track}\). Mathematically,

\[p^{*}(t,w,\hat{w},\hat{\varrho})=p^{*}_{cash}(t,w,\hat{w})+p^{*}_{track}(t,w,\hat{w},\hat{\varrho}), \tag{2.26}\]

where

\[\left\{\begin{aligned} &p_{cash}^{*}(t,w,\hat{w})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2})}{\gamma}h(t;\beta,c)\Bigg],\\ &p_{track}^{*}(t,w,\hat{w},\hat{\varrho})=\frac{1}{W^{*}(t)}\Bigg[\frac{(\mu_{1}-\mu_{2}+\vartheta)}{\gamma}\Big(g(t;\beta)\hat{W}(t)-W^{*}(t)\Big)+g(t;\beta)\hat{W}(t)\cdot\hat{\varrho}\Bigg].\end{aligned}\right. \tag{2.27}\]

Based on Assumption 2.4 and Corollary 2.2, the cash injection component \(p^{*}_{cash}\) is always non-negative. Furthermore, from Corollary 2.2, we know that the stock allocation from the cash injection component is proportional to the cash injection rate \(c\). In addition, as \(t\uparrow T\), \(h(t;\beta,c)\) increases, and thus the stock allocation from the cash injection component also increases with time.

On the other hand, the tracking component \(p^{*}_{track}\) does not depend on the cash injection rate \(c\), but only concerns the tracking performance of the active portfolio. One key finding is that

\[\left\{\begin{aligned} p_{track}^{*}(t,w,\hat{w},\hat{\varrho})&\geq\hat{\varrho},&&\text{if }W^{*}(t)\leq g(t;\beta)\hat{W}(t),\\ p_{track}^{*}(t,w,\hat{w},\hat{\varrho})&<\hat{\varrho},&&\text{if }W^{*}(t)>g(t;\beta)\hat{W}(t).\end{aligned}\right. \tag{2.28}\]

This means that the CD-optimal control uses \(g(t;\beta)\hat{W}(t)\) as the true target for deciding whether the active portfolio should take more or less risk than the benchmark portfolio. This is a key observation, since the CD objective function (2.11) measures the difference between \(W(t)\) and \(e^{\beta t}\hat{W}(t)\); one would naively think that the optimal strategy would be based on the deviation from \(e^{\beta t}\hat{W}(t)\). In contrast, from Corollary 2.1, we know that the true target \(g(t;\beta)\hat{W}(t)\) used for decision making is greater than \(e^{\beta t}\hat{W}(t)\). The insight from this observation is that if the manager wants to track an elevated target \(e^{\beta t}\hat{W}(t)\), she should aim higher than the target itself.

### 2.5 Leverage constraints

In practice, large pension funds such as the Canadian Pension Plan often have exposure to alternative assets, such as private equity (CPP Investments, 2022). Unfortunately, due to practical limitations, we only have access to long-term historical returns of publicly traded stock indexes and treasury bond indexes. Although controversial, some literature suggests that returns on private equity can be replicated using a leveraged small-cap stock index (Phalippou, 2014; L'Her et al., 2016). Following this line of argument, we allow managers to take leverage to invest in public stock index funds, to roughly mimic pension fund portfolios with some exposure to private equity. Essentially, taking leverage to invest in stocks requires borrowing additional capital, which incurs borrowing costs. 
For simplicity, we assume the borrowing activity is represented by shorting some bond assets within the portfolio, so that the manager is required to pay the cost of shorting these shortable assets. We assume that the cost consists of two parts: the returns of the shorted assets, and an additional borrowing premium (the rate depends on the specific investment scenario), so that the total borrowing cost reflects the interest rate environment (the return of the shorted bond assets) and is reasonably estimated (via the added borrowing premium).

Following the notation from Section 2.2, we assume that the total of \(N_{a}\) underlying assets is divided into two groups. The first group of \(N_{l}\) assets are long-only assets, which we index by the set \(\{1,\cdots,N_{l}\}\). The second group of \(N_{a}-N_{l}\) assets are shortable assets that can be shorted to create leverage and are indexed by the set \(\{N_{l}+1,\cdots,N_{a}\}\). Recall the notation \(p_{i}(\mathbf{X}(t))\) for the allocation fraction for asset \(i\) at time \(t\). For long-only assets, the wealth fraction needs to be non-negative; hence we have

\[\text{(Long-only constraint):}\quad p_{i}(\mathbf{X}(t))\geq 0,\;i\in\{1,\cdots,N_{l}\},\;t\in\mathcal{T}. \tag{2.29}\]

Furthermore, the total allocation fraction across all assets should be one. Therefore, the following summation constraint needs to be satisfied

\[\text{(Summation constraint):}\quad\sum_{i=1}^{N_{a}}p_{i}(\mathbf{X}(t))=1,\;t\in\mathcal{T}. \tag{2.30}\]

In practice, due to borrowing costs (from taking leverage) and risk management mandates, the use of leverage is often constrained. For this reason, we cap the maximum leverage by introducing a constant \(p_{max}\), which represents the maximum total allocation fraction for long-only assets. Therefore,

\[\text{(Maximum leverage constraint):}\quad\sum_{i=1}^{N_{l}}p_{i}(\mathbf{X}(t))\leq p_{max},\;t\in\mathcal{T}. \tag{2.31}\]

Note that no leverage is permitted if \(p_{max}=1\). Finally, we make the following assumption on the scenario of shorting multiple shortable assets.

**Assumption 2.5**.: _(Simultaneous shorting) If one shortable asset has a negative weight, the other shortable assets must have nonpositive weights. Mathematically, this assumption can be expressed as_

\[\text{(Simultaneous shorting constraint):}\quad\left\{\begin{aligned} &p_{i}(\mathbf{X}(t))\leq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}p_{i}(\mathbf{X}(t))>1,\;t\in\mathcal{T},\\ &p_{i}(\mathbf{X}(t))\geq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}p_{i}(\mathbf{X}(t))\leq 1,\;t\in\mathcal{T}.\end{aligned}\right. \tag{2.32}\]

**Remark 2.4**.: (Remark on Assumption 2.5) This assumption avoids ambiguity between the long-only assets and shortable assets in scenarios that involve leverage. When leveraging occurs, all shortable assets are treated as one group that provides the liquidity needed to achieve the desired leverage level.

The above constraints consider scenarios with non-negative portfolio wealth. 
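Taken together, the constraints (2.29)-(2.32) are easy to check mechanically for a candidate allocation vector. A minimal sketch follows; the tolerance and the example values are illustrative assumptions.

```python
import numpy as np

def is_feasible(p, N_l, p_max, tol=1e-9):
    """Check the constraints (2.29)-(2.32) for an allocation vector p.

    p[:N_l] are long-only assets; p[N_l:] are shortable assets.
    """
    long_sum = p[:N_l].sum()
    if np.any(p[:N_l] < -tol):                 # long-only constraint (2.29)
        return False
    if abs(p.sum() - 1.0) > tol:               # summation constraint (2.30)
        return False
    if long_sum > p_max + tol:                 # maximum leverage (2.31)
        return False
    if long_sum > 1.0 + tol:                   # simultaneous shorting (2.32)
        return bool(np.all(p[N_l:] <= tol))
    return bool(np.all(p[N_l:] >= -tol))

# e.g., two long-only assets levered to 1.3, funded by shorting a bond asset:
print(is_feasible(np.array([0.8, 0.5, -0.3]), N_l=2, p_max=1.5))  # True
```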
Before we proceed to the handling of the negative portfolio wealth scenarios, we first define the following partition of the state space \(\mathcal{X}\).

**Definition 2.1**.: _(Partition of state space) We define \(\{\mathcal{X}_{1},\mathcal{X}_{2}\}\) to be a partition of the state space \(\mathcal{X}\), such that_

\[\left\{\begin{aligned} &\mathcal{X}_{1}=\big\{x=(t,W,\hat{W})^{\top}\in\mathcal{X}\;\big|\;W\geq 0\big\},\\ &\mathcal{X}_{2}=\big\{x=(t,W,\hat{W})^{\top}\in\mathcal{X}\;\big|\;W<0\big\}.\end{aligned}\right. \tag{2.33}\]

Intuitively, we separate the state space \(\mathcal{X}\) into two regions by the wealth of the active portfolio, one with non-negative wealth and the other with negative wealth. Then, we present the following assumption concerning the negative wealth (insolvency) scenarios.

**Assumption 2.6**.: _(No trading in insolvency) If the wealth of the active portfolio is negative, then all long-only asset positions are liquidated, and all the debt (i.e., the negative wealth) is allocated to the least risky shortable asset (in terms of volatility). Without loss of generality, we assume all debt is allocated to asset \(N_{l}+1\). Let \(\mathbf{e}_{i}=(0,\cdots,0,1,0,\cdots,0)^{\top}\in\mathbb{R}^{N_{a}}\) denote the standard basis vector whose \(i\)-th entry is 1 and all other entries are 0. Then, we can formulate this assumption as follows._

\[\text{(No trading in insolvency):}\quad\boldsymbol{p}(\mathbf{X}(t))=\mathbf{e}_{N_{l}+1},\quad\text{if}\;\mathbf{X}(t)\in\mathcal{X}_{2}. \tag{2.34}\]

**Remark 2.5**.: (Remark on Assumption 2.6) Essentially, when the portfolio wealth is negative, we assume the debt is allocated to a short-term bond asset and accumulates over time.

Summarizing the constraints, we can define the two sets \(\mathcal{Z}_{1},\mathcal{Z}_{2}\):

\[\mathcal{Z}_{1}=\left\{\mathbf{z}\in\mathbb{R}^{N_{a}}\;\middle|\;\begin{aligned} &z_{i}\geq 0,\;\forall i\in\{1,\cdots,N_{l}\},\\ &\sum_{i=1}^{N_{a}}z_{i}=1,\quad\sum_{i=1}^{N_{l}}z_{i}\leq p_{max},\\ &z_{i}\leq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}z_{i}>1,\\ &z_{i}\geq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\},\;\text{if}\;\sum_{i=1}^{N_{l}}z_{i}\leq 1\end{aligned}\right\}, \tag{2.35}\]

\[\mathcal{Z}_{2}=\left\{\mathbf{e}_{N_{l}+1}\right\}. \tag{2.36}\]

Then, the corresponding space of feasible control vector values \(\mathcal{Z}\) and the admissible strategy set \(\mathcal{A}\) are

\[\text{(Feasible control values):}\quad\mathcal{Z}=\mathcal{Z}_{1}\cup\mathcal{Z}_{2}, \tag{2.37}\]

\[\text{(Admissible set):}\quad\mathcal{A}=\left\{\mathcal{P}=\{\boldsymbol{p}(\boldsymbol{X}(t)),\;t\in\mathcal{T}\}\;\middle|\;\begin{aligned} &\boldsymbol{p}(\boldsymbol{X}(t))\in\mathcal{Z}_{1},\;\text{if}\;\boldsymbol{X}(t)\in\mathcal{X}_{1},\\ &\boldsymbol{p}(\boldsymbol{X}(t))\in\mathcal{Z}_{2},\;\text{if}\;\boldsymbol{X}(t)\in\mathcal{X}_{2}\end{aligned}\right\}. \tag{2.38}\]

It is not obvious how the conditional constraints in (2.37) and (2.38) can be formulated into a standard constrained optimization problem.

### 2.6 Neural network method

In Section 2.4, we derive the closed-form solution under the jump-diffusion model, which requires several unrealistic assumptions such as continuous rebalancing, unlimited leverage, and trading in insolvency. Furthermore, the closed-form solution is specific to the investment objective defined in the CD problem (2.11). 
To discover optimal strategies for high-inflation regimes, the capability to solve the general investment problem (2.7) for different objectives and under realistic constraints, such as discrete rebalancing and limited leverage (i.e., the leverage constraints discussed in Section 2.5), is critical. Therefore, we need computationally efficient methods to solve these problems numerically, particularly in high-dimensional cases.

Solving a discrete-time multi-period optimal asset allocation problem often utilizes dynamic programming (DP). For example, Dixon et al. (2020); Park et al. (2020); Lucarelli and Borrotti (2020); Gao et al. (2020) use Q-learning algorithms to solve the discrete-time multi-period optimal allocation problem. In general, if there are \(N_{a}\) assets to invest in, then the use of Q-learning involves the approximation of an action-value function (the "Q" function), which is a \((2N_{a}+1)\)-dimensional function (van Staden et al., 2023) representing the conditional expectation of the cumulative rewards at an intermediate state.9 Meanwhile, the optimal control is a mapping from the state space to the allocation fractions of the assets. If the state space is relatively low-dimensional,10 then the DP-based approaches are potentially unnecessarily high-dimensional.

Footnote 9: Intuitively, the dimensionality comes from tracking the allocation in the \(N_{a}\) assets for both the active portfolio and benchmark portfolio when evaluating the changes in wealth of both portfolios over one period in the action-value function.

Footnote 10: For example, the state space of problem (2.11) with the assumption of a fixed-mix benchmark strategy is a vector in \(\mathbb{R}^{3}\).

Instead of using dynamic programming methods, Han et al. (2016); Buehler et al. (2019); Tsang and Wong (2020); Reppen et al. (2022) propose to approximate the optimal control function by neural network functions directly. In particular, they propose a stacked neural network approach that essentially uses a sub-network to approximate the control at every rebalancing step. Therefore, the number of neural networks required grows linearly with the number of rebalancing periods. Note that, in the taxonomy of Powell (2023), this method is termed Policy Function Approximation (PFA). In this article, we follow the lines of Li and Forsyth (2019); Ni et al. (2022) and propose a single neural network to approximate the optimal control function. The direct representation of the control function avoids the high-dimensional approximation required in DP-based methods. In addition, we consider time \(t\) as an input feature (along with the wealth of the active portfolio and benchmark portfolio), therefore avoiding the need for multiple sub-networks as in the stacked neural network approach.

The numerical solution to the general problem (2.7) requires solving for the feedback control \(\boldsymbol{p}\). We approximate the control function \(\boldsymbol{p}\) by a neural network function \(f(\boldsymbol{X}(t);\boldsymbol{\theta}):\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\), where \(\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\) represents the parameters of the neural network (i.e., weights and biases). In other words,

\[\boldsymbol{p}(\boldsymbol{X}(t))\simeq f(\boldsymbol{X}(t);\boldsymbol{\theta})\equiv f(\cdot;\boldsymbol{\theta}). \tag{2.39}\]

Then, the optimization problem (2.7) can be converted to solving the following optimization problem. 
\[(\text{Parameterized optimization problem}):\quad\inf_{\boldsymbol{\theta}\in\mathcal{Z}_{\boldsymbol{\theta}}}\mathbb{E}_{f(\cdot;\boldsymbol{\theta})}^{(t_{0},w_{0})}\big[F(\mathcal{W}_{\boldsymbol{\theta}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big]. \tag{2.40}\]

Here \(\mathcal{W}_{\boldsymbol{\theta}}\) is the wealth trajectory of the active portfolio following the neural network approximation function parameterized by \(\boldsymbol{\theta}\). \(\mathcal{Z}_{\boldsymbol{\theta}}\subseteq\mathbb{R}^{N_{\boldsymbol{\theta}}}\) is the feasibility domain of the parameter \(\boldsymbol{\theta}\), which is translated from the constraints of the original problem, e.g., (2.37) and (2.38). Mathematically,

\[\mathcal{Z}_{\boldsymbol{\theta}}=\left\{\boldsymbol{\theta}:\begin{aligned} &f(\boldsymbol{X};\boldsymbol{\theta})\in\mathcal{Z}_{1},\;\text{if}\;\boldsymbol{X}\in\mathcal{X}_{1},\\ &f(\boldsymbol{X};\boldsymbol{\theta})\in\mathcal{Z}_{2},\;\text{if}\;\boldsymbol{X}\in\mathcal{X}_{2}\end{aligned}\right\}. \tag{2.41}\]

Here \(\mathcal{Z}_{1},\mathcal{Z}_{2}\) are defined in (2.35), (2.36), and \(\mathcal{X}_{1},\mathcal{X}_{2}\) are the partitions of the state space \(\mathcal{X}\) defined in Definition 2.1. Note here that \(\mathcal{Z}_{\boldsymbol{\theta}}\) depends on the structure of the neural network function \(f(\cdot;\boldsymbol{\theta})\). Intuitively, \(\mathcal{Z}_{\boldsymbol{\theta}}\) is the preimage of \(\mathcal{Z}\): for any \(\boldsymbol{\theta}\in\mathcal{Z}_{\boldsymbol{\theta}}\), \(f(\cdot;\boldsymbol{\theta})\) takes values in \(\mathcal{Z}\). A specific neural network model design may result in \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\), which means (2.40) becomes an unconstrained optimization problem.

For long-only investment problems, the only constraints are the long-only constraint (2.29) and the summation constraint (2.30). Previous work has proposed a neural network architecture with a softmax activation function at the last layer so that the output (the vector of allocation fractions) automatically satisfies the two constraints; thus \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\), and problem (2.40) becomes an unconstrained optimization problem (see, e.g., Li and Forsyth (2019); Ni et al. (2022)). However, as discussed in Section 2.5, we consider the more complicated case where leverage and shorting are allowed. The problem thus involves more constraints than the long-only case, and therefore we design a new model architecture to convert the constrained optimization problem into an unconstrained problem. We discuss the design of the _leverage-feasible neural network_ (LFNN) model, and how it achieves this goal, in the next section.

It is worth noting that for the particular CD problem (2.12) and CS problem (2.14), our technique may be formulated to appear similar, at a high level, to policy gradient methods in the RL literature (Silver et al., 2014). Examples of policy gradient methods applied to financial problems include Coache and Jaimungal (2021), in which the authors develop an actor-critic algorithm for portfolio optimization problems with convex risk measures. However, there are two main differences between our proposed methodology and policy gradient algorithms. 
Firstly, we assume that the randomness of the environment (i.e., asset returns) over the entire investment horizon is readily available upfront (e.g., through calibration of parametric models or resampling of historical data), which is a common assumption adopted by practitioners when backtesting investment strategies. In contrast, the RL literature often considers an unknown environment, and the algorithms focus on the exploration of the agent to learn from the unknown environment, and thus may be unnecessarily complicated for our use case. Secondly, our proposed methodology is not limited to the cumulative reward framework of RL and thus is more universal and suitable for problems in which the investment objective cannot be easily expressed in the form of a cumulative reward.

### 2.7 Leverage-feasible neural network (LFNN)

In this section, we propose the leverage-feasible neural network (LFNN) model, which yields \(\mathcal{Z}_{\boldsymbol{\theta}}=\mathbb{R}^{N_{\boldsymbol{\theta}}}\) for the leverage constraints encoded in the feasible control set (2.37), and thus converts the constrained optimization problem (2.40) into an unconstrained problem. Let the vector \(\mathbf{x}=(t,W(t),\hat{W}(t))^{\top}\in\mathcal{X}\) be the feature (input) vector. We first define a standard fully-connected feedforward neural network (FNN) function \(\tilde{f}:\mathcal{X}\mapsto\mathbb{R}^{N_{a}+1}\) as follows:

\[\text{(FNN)}:\ \ \left\{\begin{aligned} h_{j}^{(1)}&=\text{Sigmoid}\Big(\sum_{i=1}^{N_{x}}x_{i}\theta_{ij}^{(1)}+b_{j}^{(1)}\Big),\;j=1,\cdots,N_{h}^{(1)},\\ h_{j}^{(k)}&=\text{Sigmoid}\Big(\sum_{i=1}^{N_{h}^{(k-1)}}h_{i}^{(k-1)}\theta_{ij}^{(k)}+b_{j}^{(k)}\Big),\;j=1,\cdots,N_{h}^{(k)},\;\forall k\in\{2,\cdots,K\},\\ o_{j}&=\sum_{i=1}^{N_{h}^{(K)}}h_{i}^{(K)}\theta_{ij}^{(K+1)},\;j=1,\cdots,N_{a}+1,\\ \tilde{f}(\mathbf{x};\boldsymbol{\theta})&:=(o_{1},\cdots,o_{N_{a}+1})^{\top}.\end{aligned}\right. \tag{2.42}\]

Here \(\text{Sigmoid}(\cdot)\) denotes the sigmoid activation function, \(K\) denotes the number of hidden layers, \(h_{j}^{(k)}\) denotes the value of the \(j\)-th node in the \(k\)-th hidden layer, and \(N_{h}^{(k)}\) is the number of nodes in the \(k\)-th hidden layer. Additionally, \(\boldsymbol{\theta}^{(k)}=(\theta_{ij}^{(k)})\in\mathbb{R}^{N_{h}^{(k)}\times N_{h}^{(k-1)}}\) and \(\mathbf{b}^{(k)}=(b_{j}^{(k)})\in\mathbb{R}^{N_{h}^{(k)}}\) are the (vectorized) weight matrix and bias vector for the \(k\)-th layer,11 and the parameter vector of the entire neural network is \((\boldsymbol{\theta}^{(1)},\mathbf{b}^{(1)},\cdots,\boldsymbol{\theta}^{(K)},\mathbf{b}^{(K)},\boldsymbol{\theta}^{(K+1)})^{\top}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\), where \(N_{\boldsymbol{\theta}}=\sum_{k=1}^{K+1}N_{h}^{(k)}\cdot N_{h}^{(k-1)}+\sum_{k=1}^{K}N_{h}^{(k)}\) (with the conventions \(N_{h}^{(0)}=N_{x}\) and \(N_{h}^{(K+1)}=N_{a}+1\)). Building on \(\tilde{f}\), we propose the following _leverage-feasible neural network_ (LFNN) model \(f:\mathcal{X}\mapsto\mathcal{Z}\):

\[(\text{LFNN}):\quad f(\mathbf{x};\boldsymbol{\theta}):=\psi\Big(\tilde{f}(\mathbf{x};\boldsymbol{\theta}),\mathbf{x}\Big)\in\mathcal{Z}. \tag{2.43}\]

Here, \(\psi(\cdot)\) is the _leverage-feasible activation function_. 
For \(\mathbf{o}=(o_{1},\cdots,o_{N_{a}+1})^{\top}\in\mathbb{R}^{N_{a}+1}\) and \(\mathbf{p}=\psi(\mathbf{o},\mathbf{x})\), the map \(\psi:(\mathbf{o},\mathbf{x})\in\mathbb{R}^{N_{a}+1}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined by

\[\mathbf{p}=\psi(\mathbf{o},\mathbf{x})=\left\{\begin{aligned} &\begin{aligned} &l=p_{max}\cdot\text{Sigmoid}(o_{N_{a}+1}),\\ &p_{i}=l\cdot\frac{e^{o_{i}}}{\sum_{k=1}^{N_{l}}e^{o_{k}}},\;i\in\{1,\cdots,N_{l}\},\\ &p_{i}=(1-l)\cdot\frac{e^{o_{i}}}{\sum_{k=N_{l}+1}^{N_{a}}e^{o_{k}}},\;i\in\{N_{l}+1,\cdots,N_{a}\},\end{aligned}&&\text{if }\mathbf{x}\in\mathcal{X}_{1},\\ &\;\mathbf{p}=\mathbf{e}_{N_{l}+1},&&\text{if }\mathbf{x}\in\mathcal{X}_{2}.\end{aligned}\right. \tag{2.44}\]

Recall that \(N_{l}\) is the number of long-only assets and \(p_{max}\) is the maximum leverage allowed. We show that the leverage-feasible activation function \(\psi\) has the following property.

**Lemma 2.1**.: _(Decomposition of \(\psi\)) The leverage-feasible activation function \(\psi\) defined in (2.44) admits the decomposition_

\[\psi(\mathbf{o},\mathbf{x})=\varphi(\zeta(\mathbf{o}),\mathbf{x}), \tag{2.45}\]

_where_

\[\left\{\begin{aligned} &\zeta:\mathbb{R}^{N_{a}+1}\mapsto\tilde{\mathcal{Z}},\quad\zeta(\mathbf{o})=\Bigg(\text{Softmax}\Big((o_{1},\cdots,o_{N_{l}})\Big),\text{Softmax}\Big((o_{N_{l}+1},\cdots,o_{N_{a}})\Big),p_{max}\cdot\text{Sigmoid}(o_{N_{a}+1})\Bigg)^{\top},\\ &\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z},\quad\varphi(z,\mathbf{x})=\Big(z_{N_{a}+1}\cdot(z_{1},\cdots,z_{N_{l}}),(1-z_{N_{a}+1})\cdot(z_{N_{l}+1},\cdots,z_{N_{a}})\Big)^{\top}\cdot\mathbf{1}_{\mathbf{x}\in\mathcal{X}_{1}}+\mathbf{e}_{N_{l}+1}\cdot\mathbf{1}_{\mathbf{x}\in\mathcal{X}_{2}},\end{aligned}\right. \tag{2.46}\]

_and_

\[\tilde{\mathcal{Z}}=\Bigg\{z\in\mathbb{R}^{N_{a}+1}:\;\sum_{i=1}^{N_{l}}z_{i}=1,\;\sum_{i=N_{l}+1}^{N_{a}}z_{i}=1,\;z_{N_{a}+1}\leq p_{max},\;z_{i}\geq 0,\;\forall i\Bigg\}. \tag{2.47}\]

Proof.: This is easily verified from the definition of \(\psi\) in (2.44).

**Remark 2.6**.: (Remark on Lemma 2.1) The leverage-feasible activation function \(\psi\) corresponds to a two-step decision process described by \(\zeta\) and \(\varphi\). Intuitively, \(\zeta\) first determines the internal allocations within the long-only assets and the shortable assets, as well as the total leverage. Then, \(\varphi\) converts the internal allocations and total leverage into the final allocation fractions, which depend on the wealth of the active portfolio.

With the LFNN model outlined above, the parameterized optimization problem (2.40) becomes an unconstrained optimization problem. Specifically, we present the following theorem regarding the feasibility domain \(\mathcal{Z}_{\boldsymbol{\theta}}\) associated with the LFNN model (2.43).

**Theorem 2.2**.: _(Unconstrained feasibility domain) The feasibility domain \(\mathcal{Z}_{\boldsymbol{\theta}}\) defined in (2.41) associated with the LFNN model (2.43) is \(\mathbb{R}^{N_{\boldsymbol{\theta}}}\)._

Proof.: See Appendix B.1.

Following Theorem 2.2, the constrained optimization problem (2.7) can be transformed into the following unconstrained optimization problem

\[(\text{Unconstrained parameterized problem}):\quad\inf_{\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}}\mathbb{E}_{f(\cdot;\boldsymbol{\theta})}^{(t_{0},w_{0})}\big[F(\mathcal{W}_{\boldsymbol{\theta}},\hat{\mathcal{W}}_{\hat{\mathcal{P}}})\big]. 
### Mathematical justification for LFNN approach

By approximating the feasible control with a parameterized LFNN model, we have shown that the original constrained optimization problem is transformed into an unconstrained optimization problem, which is computationally more implementable. However, an important question remains: is the solution to the parameterized unconstrained optimization problem (2.48) capable of yielding the optimal control of the original problem (2.7)? In other words, if \(\mathbf{\theta}^{*}\) is the solution to (2.48), can \(f(\cdot;\mathbf{\theta}^{*})\) approximate the solution to (2.7) with the desired accuracy? In this section, we prove that under benign assumptions and appropriate choices of the hyperparameters of the LFNN model (2.43), solving the unconstrained problem (2.48) provides an arbitrarily close approximation to the solution of the original problem (2.7). We start by establishing the following lemma.

**Lemma 2.2**.: _(Structure of feasible control) Any feasible control function \(p:\mathcal{X}\mapsto\mathcal{Z}\), where \(\mathcal{Z}\) is defined in (2.38), has the function decomposition_

\[p(x)=\varphi(\omega(x),x), \tag{2.49}\]

_where \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined in (2.46) and \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\)._

Proof.: See Appendix B.2.

Next, we propose the following benign assumptions on the state space and the optimal control.

**Assumption 2.7**.: _(Assumption on state space and optimal control)_

* _The space_ \(\mathcal{X}\) _of state variables is a compact set._
* _Following Lemma_ 2.2_, the optimal control_ \(p^{*}:\mathcal{X}\mapsto\mathcal{Z}\) _has the decomposition_ \(p^{*}(x)=\varphi(\omega^{*}(x),x)\) _for some_ \(\omega^{*}:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\)_. We assume_ \(\omega^{*}\in C(\mathcal{X},\tilde{\mathcal{Z}})\)_, where_ \(C(\mathcal{X},\tilde{\mathcal{Z}})\) _denotes the set of continuous mappings from_ \(\mathcal{X}\) _to_ \(\tilde{\mathcal{Z}}\)_._

**Remark 2.7**.: (Remark on Assumption 2.7) In our particular problem of outperforming a benchmark portfolio, the state variable vector is \(X(t)=(t,W(t),\hat{W}(t))^{\top}\in\mathcal{X}\), where \(t\in[0,T]\). In this case, assumption (i) is equivalent to the assumption that the wealth of the active portfolio and benchmark portfolio is bounded, i.e. \(\mathcal{X}=[0,T]\times[w_{min},w_{max}]\times[\hat{w}_{min},\hat{w}_{max}]\), where \(w_{min},w_{max}\) and \(\hat{w}_{min},\hat{w}_{max}\) are the respective wealth bounds for the two portfolios. Intuitively, assumption (ii) states that the decision process for the optimal control to obtain the internal allocation fractions within the long-only assets, the shortable assets, and the total leverage is a continuous function. This is a natural extension of the long-only case, in which it is commonly assumed that the allocation within long-only assets is a continuous function of the state variables.

Finally, we present the following theorem.

**Theorem 2.3**.: _(Approximation of optimal control) Following Assumption 2.7, \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\) such that the corresponding LFNN model \(f(\cdot;\mathbf{\theta})\) described in (2.43) satisfies the following:_

\[\sup_{x\in\mathcal{X}}\|f(x;\mathbf{\theta})-p^{*}(x)\|<\epsilon. \tag{2.50}\]

Proof.: See Appendix B.2.
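To make the two-step decision process of Lemma 2.1 concrete, the following is a minimal NumPy sketch of the leverage-feasible activation \(\psi\) of (2.44). It is an illustration under our own naming conventions, not a reference implementation.

```
import numpy as np

def softmax(v):
    """Numerically stable softmax."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def leverage_feasible_activation(o, x_in_X1, N_l, p_max):
    """Map raw FNN outputs o = (o_1, ..., o_{N_a+1}) to allocation
    fractions p = (p_1, ..., p_{N_a}) as in (2.44).

    x_in_X1 : True if the state lies in the partition X_1; otherwise the
              entire wealth is assigned to the (N_l+1)-th asset (e_{N_l+1}).
    N_l     : number of long-only assets
    p_max   : maximum total leverage allowed
    """
    N_a = len(o) - 1
    p = np.zeros(N_a)
    if not x_in_X1:
        p[N_l] = 1.0                      # e_{N_l+1} in 0-based indexing
        return p
    l = p_max * sigmoid(o[-1])            # total long exposure, l <= p_max
    p[:N_l] = l * softmax(o[:N_l])        # long-only block sums to l
    p[N_l:] = (1.0 - l) * softmax(o[N_l:N_a])   # shortable block sums to 1 - l
    return p
```

For instance, with \(N_{l}=2\), \(N_{a}=3\) and \(p_{max}=1.3\), a zero output vector maps to the allocation \((0.325,0.325,0.35)\): the long-only block sums to \(l=0.65\) and the shortable block to \(1-l=0.35\), so the fractions always sum to one, and the shortable entries turn negative exactly when \(l>1\).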
Theorem 2.3 shows that, given any arbitrarily small tolerance \(\epsilon>0\), there exists a suitable choice of the hyperparameters of the LFNN model (e.g. the number of hidden layers and nodes) and a parameter vector \(\mathbf{\theta}\), such that the corresponding parameterized LFNN function is within this tolerance of the optimal control function.12 In other words, with a large enough LFNN model (in terms of the number of hidden nodes), solving the unconstrained parameterized problem (2.48) approximately solves the original optimization problem (2.7) with any required precision.

**Remark 2.8**.: (Empirical evidence of approximation) In practice, we find that a small neural network structure with one single hidden layer and only 10 hidden nodes achieves excellent approximation performance. In particular, in a numerical experiment with simulated data, we compare the LFNN model with the approximate form of the closed-form solution derived in Section 2.4, and find that the LFNN model mimics the closed-form solution very well. This provides further empirical evidence that supports Theorem 2.3. Additional details can be found in Appendix C.

### Training LFNN

Since the numerical experiments involve the solution and evaluation of the optimal parameters \(\boldsymbol{\theta}^{*}\) of the LFNN model (2.43) in problem (2.48), we briefly review how these parameters are computed in the experiments. In the numerical experiments, the expectation in (2.48) is approximated by an average over a finite set of samples \(\boldsymbol{Y}=\{Y^{(j)}:j=1,\cdots,N_{d}\}\), where \(N_{d}\) is the number of samples, and \(Y^{(j)}\) represents a time series sample of _joint_ asset return observations \(R_{i}(t),\ i\in\{1,\cdots,N_{a}\}\), observed at \(t\in\mathcal{T}\).13

Footnote 13: Note that the corresponding set of asset prices can be easily inferred from the set of asset returns, or vice versa.

Mathematically, problem (2.48) is approximated by

\[\inf_{\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}}\Bigg\{\frac{1}{N_{d}}\sum_{j=1}^{N_{d}}F\left(\mathcal{W}^{(j)}_{\boldsymbol{\theta}},\hat{\mathcal{W}}^{(j)}_{\hat{\mathcal{P}}}\right)\Bigg\}. \tag{2.51}\]

Here \(\mathcal{W}^{(j)}_{\boldsymbol{\theta}}\) is the wealth trajectory of the active portfolio following the LFNN model parameterized by \(\boldsymbol{\theta}\), and \(\hat{\mathcal{W}}^{(j)}_{\hat{\mathcal{P}}}\) is the wealth trajectory of the benchmark portfolio following the benchmark strategy \(\hat{\mathcal{P}}\), both evaluated on \(Y^{(j)}\), the \(j\)-th time series sample. We use a shallow neural network model, specifically, an LFNN model with one single hidden layer with 10 hidden nodes, i.e., \(K=1\) and \(N_{h}^{(1)}=10\). We use the 3-tuple vector \((t,W_{\boldsymbol{\theta}}(t),\hat{W}(t))^{\top}\) as the input (feature) to the LFNN network. At \(t\in[t_{0},T]\), \(W_{\boldsymbol{\theta}}(t)\) is the wealth of the active portfolio of the strategy that follows the LFNN model parameterized by \(\boldsymbol{\theta}\), and \(\hat{W}(t)\) is the wealth of the benchmark portfolio. Then, the optimal parameter vector \(\boldsymbol{\theta}^{*}\) can be numerically obtained by solving problem (2.51) using standard optimization algorithms such as ADAM (Kingma and Ba, 2014). This process is commonly referred to as "training" of the neural network model, and \(\boldsymbol{Y}\) is often referred to as the training data set (Goodfellow et al., 2016).
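For concreteness, the sketch below assembles an LFNN with the sizes used here (one hidden layer, 10 sigmoid nodes) and trains it by minimizing a sample-average objective of the form (2.51) with ADAM. The return data, benchmark weights, and the CD-style running penalty are simplified placeholders rather than the actual experimental setup, and the insolvency branch (\(\mathbf{x}\in\mathcal{X}_{2}\)) of \(\psi\) is omitted for brevity.

```
import math
import torch

torch.manual_seed(0)

# problem sizes: N_l long-only assets plus one shortable asset
N_l, N_a, p_max, beta = 3, 4, 1.3, 0.02
T, dt = 10.0, 1.0 / 12.0
n_steps = int(T / dt)

class LFNN(torch.nn.Module):
    """Shallow LFNN: K = 1 hidden layer with 10 sigmoid nodes.
    The insolvency branch (x in X_2) of psi is omitted for brevity."""
    def __init__(self, n_hidden=10):
        super().__init__()
        self.hidden = torch.nn.Linear(3, n_hidden)   # feature (t, W, W_hat)
        self.out = torch.nn.Linear(n_hidden, N_a + 1, bias=False)

    def forward(self, x):
        o = self.out(torch.sigmoid(self.hidden(x)))
        l = p_max * torch.sigmoid(o[:, -1:])          # total long exposure
        p_long = l * torch.softmax(o[:, :N_l], dim=1)            # sums to l
        p_short = (1.0 - l) * torch.softmax(o[:, N_l:N_a], dim=1)  # sums to 1 - l
        return torch.cat([p_long, p_short], dim=1)

model = LFNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# placeholder data: i.i.d. normal monthly returns, shape (paths, steps, assets);
# in the actual experiments these are bootstrap-resampled historical returns
R = 0.004 + 0.04 * torch.randn(256, n_steps, N_a)
rho_hat = torch.tensor([0.7, 0.0, 0.0, 0.3])  # benchmark fixed mix (assumed asset order)

for epoch in range(50):
    W = torch.full((R.shape[0], 1), 100.0)        # active portfolio wealth
    W_hat = torch.full((R.shape[0], 1), 100.0)    # benchmark portfolio wealth
    loss = torch.zeros(())
    for k in range(n_steps):
        t = torch.full_like(W, k * dt)
        p = model(torch.cat([t, W, W_hat], dim=1))
        W = (W + 10.0 * dt) * (1.0 + (p * R[:, k]).sum(1, keepdim=True))
        W_hat = (W_hat + 10.0 * dt) * (1.0 + (rho_hat * R[:, k]).sum(1, keepdim=True))
        # CD-style running penalty against the elevated target (cf. (2.12));
        # the experiments in Section 3 instead use the CS objective (H.1)
        loss = loss + ((W - math.exp(beta * k * dt) * W_hat) ** 2).mean() * dt
    opt.zero_grad(); loss.backward(); opt.step()
```

The wealth recursion is differentiable in \(\boldsymbol{\theta}\), so the gradient of the sample-average objective flows through every rebalancing date; this is what makes the direct control parameterization trainable with standard stochastic gradient methods.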
Once \(\boldsymbol{\theta}^{*}\) is numerically obtained, the resulting optimal strategy \(f(\cdot;\boldsymbol{\theta}^{*})\) is evaluated on a separate "testing" data set \(\boldsymbol{Y}^{test}\), which contains a different set of samples generated from either the same distribution as the training data or a different process (depending on the purpose of the experiment), so that the "out-of-sample" performance of \(f(\cdot;\boldsymbol{\theta}^{*})\) is assessed.

## 3 Numerical experiments

In this section, we present a case study that explores optimal asset allocation during high-inflation periods using the LFNN model through numerical experiments. To conduct our analysis, we need data specifically from high-inflation periods. Such data can be acquired using parametric modeling or non-parametric sample generation methods. It is important to note that our LFNN approach is agnostic to the choice of data modeling methods. While there is no universally accepted method for identifying or modeling high-inflation regimes, for the purpose of this demonstration, we employ a simple filtering technique to identify inflation regime data and generate the required samples for training the LFNN.

### Filtering historical inflation regimes

We use the U.S. CPI index and monthly data from the Center for Research in Security Prices (CRSP) over the 1926:1-2022:1 period.14,15 We select high-inflation periods as determined by the CPI index using the following filtering procedure. Using a moving window of \(k\) months, we determine the cumulative CPI index log return (annualized) in this window. If the cumulative annualized CPI index log return is greater than a cutoff, then all the months in the window are flagged as part of a high-inflation regime. Note that some months may appear in more than one moving window. Any months which do not meet this criterion are considered to be in low-inflation regimes. See Algorithm D.1 in Appendix D.1 for the pseudo-code.

Footnote 14: The date convention is that, for example, 1926:1 refers to January 1, 1926.

Since the average annual inflation over the period 1926:1-2022:1 was 2.9%, and Federal Reserve policy-makers have been targeting an inflation rate of 2% over the long run to achieve maximum employment and price stability (The Federal Reserve, 2011), we use a cutoff of 5% as the threshold for high inflation. In addition, we use a moving window size of 5 years (see Appendix D.2 for more discussion). This uncovers two inflation regimes: 1940:8-1951:7 and 1968:9-1985:10, which correspond to well-known market shocks (i.e. the second world war, and price controls; the oil price shocks and stagflation of the seventies). Table 3.1 shows the average annual inflation over the two regimes identified from our filter.

For possible investment assets, we consider the 30-day U.S. T-bill index (CRSP designation "t30ind"), a constant maturity 10-year U.S. treasury index,16 and the cap-weighted stock index (CapWt) and the equal-weighted stock index (EqWt), also from CRSP.17 All of these various indexes are adjusted for inflation by using the U.S. CPI index.

Footnote 15: More specifically, results presented here were calculated based on data from Historical Indexes, ©2022 Center for Research in Security Prices (CRSP), The University of Chicago Booth School of Business. Wharton Research Data Services (WRDS) was used in preparing this article. This service and the data available thereon constitute valuable intellectual property and trade secrets of WRDS and/or its third-party suppliers.
Footnote 16: The 10-year treasury index was generated from monthly returns from CRSP back to 1941 (CRSP designation "b10ind"). The data for 1926-1941 are interpolated from annual returns in Homer and Sylla (1996). The 10-year treasury index is constructed by (a) buying a 10-year treasury at the start of each month, (b) collecting interest during the month, and then (c) selling the treasury at the end of the month. We repeat the process at the start of the next month. The gains in the index then reflect both interest and capital gains and losses.

Footnote 17: The capitalization-weighted total returns have the CRSP designation "wretd", and the equal-weighted total returns have the CRSP designation "ewretd".

We find that the equal-weighted stock index has a higher average return and higher volatility than the cap-weighted stock index. In addition, we find that the 30-day T-bill index has a similar average return as the 10-year T-bond index, but much lower volatility; see Appendix D.3 for more details. This indicates that the T-bill index is the better choice of a defensive asset during high inflation. Subsequently, we consider the equal-weighted stock index, the cap-weighted stock index, and the 30-day T-bill index.

\begin{table} \begin{tabular}{c c} \hline \hline Time Period & Average Annualized Inflation \\ \hline 1940:8-1951:7 & .0564 \\ 1968:9-1985:10 & .0661 \\ \hline \hline \end{tabular} \end{table} Table 3.1: Inflation regimes determined using a five-year moving window with a cutoff inflation rate of 0.05.

### Bootstrap resampling

Once we have obtained the filtered historical high-inflation data series from Section 3.1, it becomes necessary to generate training and testing data sets from the original time series data. While one common approach is to assume and fit a parametric model to the underlying data, it is important to acknowledge the limitations associated with this choice. Parametric models have several drawbacks, including the difficulty of accurately estimating their parameters (Black, 1993). Even for a simple geometric Brownian motion (GBM) model, accurately estimating the drift rate can be challenging and prone to errors, requiring a long historical period of data coverage (Brigo et al., 2008). More complex models, such as the jump-diffusion model (2.15), introduce additional components to the stochastic model, which necessitates the estimation of extra parameters. Furthermore, parametric models inherently make assumptions about the true stochastic model for asset prices, which can be subject to debate.

Acknowledging the above limitations of parametric market data models, we turn to the alternative nonparametric method of bootstrap resampling as a data-generating process for numerical experiments. Unlike parametric models, non-parametric methods such as bootstrap resampling do not make assumptions about the parametric form of the asset price dynamics. Intuitively speaking, the bootstrap resampling method randomly chooses data points from the historical time series data and reassembles them into new paths of time series data. The bootstrap was initially proposed as a statistical method for estimating the sampling distribution of statistics (Efron, 1992). We use it as a data-generating procedure, as the philosophy behind bootstrap resampling is consistent with the idea that _"history does not repeat, but it rhymes."_
The bootstrap resampling provides an empirical distribution, which seems to be the least prejudiced estimate possible of the underlying distribution of the data-generating process. We also note that bootstrap resampling is widely adopted by practitioners (Alizadeh and Nomikos, 2007; Cogneau and Zakamouline, 2013; Dichtl et al., 2016; Scott and Cavaglia, 2017; Shahzad et al., 2019; Cavaglia et al., 2022; Simonian and Martirosyan, 2022) as well as academics (Anarkulova et al., 2022). Specifically, we choose to use the stationary block bootstrap resampling method (Politis and Romano, 1994). See Appendix E.1 for detailed pseudo-code for bootstrap resampling. Compared to the traditional bootstrap method, the block bootstrap technique preserves the local dependency of data within blocks. Furthermore, the stationary block bootstrap uses random blocksizes, which preserves the stationarity of the original time series data. An important parameter is the expected blocksize, which, informally, is a measure of the serial correlation in the return data. A challenge in using block bootstrap resampling is the need to choose a single blocksize for multiple underlying time series so that the bootstrapped data entries for different assets are synchronized in time. Subsequently, we use an expected blocksize of 6 months for all time series data. However, we have compared different numerical experiments using a range of blocksizes, including i.i.d. assumptions (i.e. expected blocksize equal to one month), and find that the results are relatively insensitive to the blocksize, as discussed in more detail in Appendix E.2.

Typically, the bootstrap technique resamples from data sourced from one contiguous segment of historical periods. However, the moving-window filtering algorithm has identified two non-contiguous historical inflation regimes. To apply the bootstrap method, there are two intuitive possibilities: 1) concatenate the two historical inflation regimes first, then bootstrap from the concatenated combined series, or 2) bootstrap within each regime (i.e., using circular block bootstrap resampling within each regime), then combine the bootstrapped resampled data points. We have experimented with both methods and find that the difference is minimal (see Appendix E.3). In this article, we adopt the first method, i.e., we concatenate the historical regimes first, then bootstrap from the combined series. This method is also adopted by Anarkulova et al. (2022), where stock returns from different countries are concatenated and the bootstrap is applied to the combined data.

### A case study on high inflation investment: a 4-asset scenario

#### 3.3.1 Experiment setup

In this section, we conduct a case study on optimal asset allocation during a consistent high-inflation regime. The details of the investment specification are given in Table 3.2. Briefly, the active portfolio and benchmark portfolio begin with the same initial wealth of 100 at \(t_{0}=0\). Both portfolios are rebalanced monthly. The investment horizon is 10 years, and there is an annual cash injection of 10 for both portfolios, evenly divided over 12 months. We consider an empirical case in which we allow the manager to allocate between four investment assets: the equal-weighted stock index, the cap-weighted stock index, the 30-day U.S. T-bill index, and the 10-year U.S. T-bond index. We assume that the stock indexes and the 10-year T-bond index are long-only assets.
The manager can short the T-bill index to take leverage and invest in the long-only assets (with maximum total leverage of 1.3). In this experiment, we assume the borrowing premium rate is zero. Essentially, we assume that the manager can borrow short-term funding to take leverage at the same cost as the treasury bill. This may be a reasonable assumption for sovereign wealth funds, as they are state-owned and enjoy a high credit rating. We remark that the borrowing premium does not affect the results significantly.18 The annual outperformance target \(\beta\) is set to be 2% (i.e. 200 bps per year).

Footnote 18: See Appendix I for a more detailed discussion.

It is worth noting that we choose the benchmark portfolio to be a fixed-mix portfolio that maintains a 70% weight in the equal-weighted stock index and 30% in the 30-day U.S. T-bill index. We select this fixed-mix portfolio as the benchmark based on our observation that the equal-weighted stock index shows superior performance compared to the cap-weighted stock index during high-inflation environments. Surprisingly, when analyzing bootstrap resampled data from the historical inflation regimes, we find that the fixed-mix portfolio consisting of 70% in the equal-weighted stock index and 30% in the 30-day U.S. T-bill index partially stochastically dominates the fixed-mix portfolio consisting of 70% in the cap-weighted stock index and 30% in the 30-day U.S. T-bill index. For more detailed information, interested readers can refer to Appendix F.

As discussed in the previous section, we use the stationary bootstrap resampling algorithm (see Appendix E.1) to generate a training data set \(\mathbf{Y}\) and a testing data set \(\mathbf{Y}^{test}\) (both with 10,000 resampled paths) from the concatenated index samples from the two historical inflation regimes, 1940:8-1951:7 and 1968:9-1985:10, using an expected blocksize of 6 months. The testing data set \(\mathbf{Y}^{test}\) is generated using a different random seed than the training data set \(\mathbf{Y}\), and thus the probability of seeing the same sample in \(\mathbf{Y}\) and \(\mathbf{Y}^{test}\) is near zero (see Ni et al. (2022) for proof). We remark that in this experiment, we train the LFNN model (2.43) on \(\mathbf{Y}\) under the discrete-time CS objective (H.1), instead of the CD objective (2.12). As discussed in Section 2.3, the CS objective function only penalizes underperformance relative to the elevated target. Numerical comparisons of the two objective functions suggest that the CS objective function indeed yields more favorable investment results than the CD objective (see Appendix H). In this section, unless stated otherwise, all the results presented are testing results.

\begin{table} \begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Equity market indexes & CRSP cap-weighted/equal-weighted index (real) \\ Bond index & CRSP 30-day/10-year U.S. treasury index (real) \\ Index samples for bootstrap & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth/annual cash injection & 100/10 \\ Rebalancing frequency & Monthly \\ Maximum leverage & 1.3 \\ Outperformance target rate \(\beta\) & 2\% \\ \hline \hline \end{tabular} \end{table} Table 3.2: Investment scenario.

### Experiment results

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Strategy & Median\([W_{T}]\) & E\([W_{T}]\) & std\([W_{T}]\) & 5th Percentile & Median IRR (annual) \\ \hline Neural network & 364.2 & 403.4 & 211.8 & 136.3 & 0.078 \\ Benchmark & 308.5 & 342.9 & 165.0 & 149.0 & 0.056 \\ \hline \hline \end{tabular} \end{table} Table 3.3: Statistics of strategies. Results are based on evaluation on the testing data set.
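Table 3.3 reports a median annualized internal rate of return (IRR). Since both portfolios receive monthly cash injections, the IRR is the rate \(r\) that compounds the initial wealth and each injection forward to match the terminal wealth, i.e. solves \(w_{0}(1+r)^{T}+\sum_{k}c\,(1+r)^{T-t_{k}}=W_{T}\). The exact cash-flow convention is not spelled out in this section, so the following is one plausible sketch rather than a description of our implementation.

```
import numpy as np
from scipy.optimize import brentq

def irr_annual(w0, monthly_inject, w_T, years):
    """Annual IRR r solving: w0*(1+r)^T + sum_k c*(1+r)^(T - t_k) = w_T,
    with injection c at the start of each month t_k (in years)."""
    months = 12 * years
    t = np.arange(months) / 12.0
    def f(r):
        return (w0 * (1 + r) ** years
                + np.sum(monthly_inject * (1 + r) ** (years - t)) - w_T)
    return brentq(f, -0.99, 1.0)

# e.g. initial wealth 100, injections of 10/12 per month, terminal wealth 364.2
print(irr_annual(100.0, 10.0 / 12.0, 364.2, 10))   # approximately 0.078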
The analysis of Figure 3.1(a) reveals that the neural network strategy (the strategy following the trained LFNN model) consistently outperforms the benchmark strategy in terms of the wealth ratio \(W(t)/\hat{W}(t)\). Over time, both the mean and median wealth ratios demonstrate a smooth and consistent increase. Regarding tail performance (20th percentile), the neural network strategy initially falls behind the benchmark but gradually recovers and ultimately achieves 10% greater wealth at the terminal time. This observation indicates that the neural network strategy effectively manages tail risk.

An additional metric that holds significant interest for managers is the distribution of the terminal wealth ratio \(\frac{W(T)}{\hat{W}(T)}\). This metric examines the relative performance of the strategies at the end of the investment period. Figure 3.1(b) illustrates that there is a greater than 90% chance that the neural network strategy outperforms the benchmark strategy in terms of terminal wealth. This outcome is particularly noteworthy as the objective function (H.1) does not directly target the terminal wealth ratio.

Given the constant cash injections into the portfolios, it is appropriate to employ the internal rate of return (IRR) as a measure of the portfolio's annualized performance. Figure 3.1(c) demonstrates that the neural network strategy has a more than 90% chance of producing a higher IRR. Furthermore, the median IRR of the neural network strategy exceeds that of the benchmark strategy by slightly over 2%, aligning with the chosen target outperformance rate of \(\beta=0.02\). This indicates that the neural network model consistently achieves the desired target performance across most outcomes.

The results from Table 3.3 indicate that the 5th percentile of the terminal wealth for the neural network strategy is lower than that of the benchmark strategy. This suggests that in some scenarios, particularly during persistent bear markets when stocks perform poorly, the neural network strategy may experience lower terminal wealth compared to the benchmark strategy. The neural network strategy takes on more risk by allocating a higher fraction of wealth to the equal-weighted stock index, which is considered a riskier asset, in comparison to the benchmark portfolio. It is important to note, however, that these scenarios occur with low probability. As depicted in Figure 3.1(b), the neural network strategy exhibits a significantly high probability of outperforming the benchmark in terms of terminal wealth, exceeding 90%. This implies that while there might be instances where the neural network strategy suffers relative to the benchmark, the overall performance is consistently strong, resulting in a high likelihood of achieving a superior terminal value.

To gain insight into the strong performance of the neural network strategy, we further examine its allocation profile. We begin by examining the mean allocation fraction for the four assets over time, as depicted in Figure 3.2. The first noteworthy observation from Figure 3.2 is that, on average, the neural network strategy does not allocate wealth to the cap-weighted stock index.
Figure 3.1: Percentiles of wealth ratio \(\frac{W(t)}{\hat{W}(t)}\), CDF of terminal wealth ratio \(\frac{W(T)}{\hat{W}(T)}\), and internal rate of return (IRR). Results are based on the evaluation of the learned neural network model on \(\boldsymbol{Y}^{test}\).

Initially, this might appear surprising; however, it aligns with historical data indicating significantly higher real returns for the equal-weighted stock index during periods of high inflation (refer to Appendix D.3). Given that the objective is to outperform a benchmark heavily invested in the equal-weighted index (70%), it is logical to avoid allocating wealth to a comparatively weaker index in the active portfolio.

The second observation derived from Figure 3.2 pertains to the evolution of the mean bond allocation fractions. Initially, the neural network strategy shorts the 30-day T-bill index and assumes some leverage while heavily investing in the equal-weighted stock index during the first two years. This indicates a deliberate risk-taking approach early on to establish an advantage over the benchmark strategy. Subsequently, the allocation to the 10-year T-bond decreases, coinciding with the reduction in the allocation to the equal-weighted index. This suggests that the initial allocation to the T-bond was primarily for leveraging purposes, with the 10-year bond being the only defensive asset available. As leverage is no longer used in later years, the neural network strategy favors the T-bill over the 10-year bond.

Overall, despite the gradual decrease in stock allocation over time, the neural network strategy maintains an average allocation of more than 80% to the equal-weighted stock index. This is expected, as outperforming an aggressive benchmark with a 70% allocation to the equal-weighted stock index necessitates assuming higher levels of risk. Despite the higher allocation to riskier assets, the neural network strategy consistently delivers strong results compared to the benchmark strategy, as illustrated in Figure 3.1.

Lastly, it is worth noting that the neural network strategy, trained under high-inflation regimes, exhibits remarkable performance on low-inflation testing datasets. This unexpected outcome highlights the robustness of the strategy. For further discussion on this topic, interested readers can refer to Appendix J.

Figure 3.2: Mean allocation fraction over time, evaluated on \(\boldsymbol{Y}^{test}\).

## 4 Conclusion

In this paper, our primary objective is to propose a framework that generates optimal dynamic allocation strategies under leverage constraints in order to outperform a benchmark during high-inflation regimes. Imposing leverage constraints in multi-period asset allocation is consistent with the practice in large sovereign wealth funds, which often have exposures to alternative assets. Our proposed framework efficiently solves high-dimensional optimal control problems, accommodating diverse objective functions, constraints, and data sources.

We begin by assuming that both asset prices follow jump-diffusion models. Under this assumption, we derive a closed-form solution for a two-asset case using the cumulative tracking difference (CD) objective function. However, to obtain this closed-form solution, we need to make additional unrealistic assumptions such as continuous rebalancing, unlimited leverage, and continued trading in insolvency. Despite these assumptions, the closed-form solution provides valuable insights into the optimal control behavior.
Notably, to track the elevated target, the optimal control needs to aim higher than the target when making allocation decisions. To overcome the limitations of unrealistic assumptions and derive a more practical solution, we introduce a novel leverage-feasible neural network (LFNN) model. The LFNN model approximates the optimal control directly, eliminating the need for high-dimensional approximations of conditional expectations required in dynamic programming approaches. Additionally, the LFNN model converts the leverage-constrained optimization problem into an unconstrained optimization problem. Importantly, we justify the validity of the LFNN approach by mathematically proving that the solution to the parameterized unconstrained optimization problem can approximate the solution to the original constrained optimization problem with arbitrary precision.

To illustrate the effectiveness of our proposed approach, we conduct a case study on optimal asset allocation during high-inflation regimes. We apply the LFNN model to bootstrap resampled data from filtered historical high-inflation data. In our numerical experiment, we consider an investment case with four assets in high-inflation regimes. The results consistently demonstrate that the neural network strategy outperforms the benchmark strategy throughout the investment period. Specifically, the neural network strategy achieves a 2% higher median internal rate of return (IRR) compared to the benchmark strategy and yields a higher terminal wealth with more than a 90% probability. The allocation strategy derived from the LFNN model suggests that managers should favor the equal-weighted stock index over the cap-weighted stock index and short-term bonds over long-term bonds during high-inflation periods.

## 5 Acknowledgements

Forsyth's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2017-03760. Li's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2020-04331.

## Appendix A Technical details of closed-form solution

### Proof of Theorem (2.17)

At any state \((t,w,\hat{w})\in[t_{0},T]\times\mathbb{R}^{2}\), define the value function \(V(t,w,\hat{w},\hat{\varrho})\) of the CD problem (2.11) as

\[V(t,w,\hat{w},\hat{\varrho})=\inf_{\boldsymbol{p}}\Big\{\mathbb{E}_{\boldsymbol{p}}\Big[\int_{t}^{T}\big(W(s)-e^{\beta s}\hat{W}(s)\big)^{2}ds\,\Big|\,W(t)=w,\hat{W}(t)=\hat{w}\Big]\Big\}.\] (A.1)

By the dynamic programming principle, we have

\[V(t,w,\hat{w},\hat{\varrho})=\inf_{\boldsymbol{p}}\Big\{\mathbb{E}_{\boldsymbol{p}}\Big[\Big(V(t+\Delta t,W(t+\Delta t),\hat{W}(t+\Delta t),\hat{\varrho})+\int_{t}^{t+\Delta t}\big(W(s)-e^{\beta s}\hat{W}(s)\big)^{2}ds\Big)\Big|W(t)=w,\hat{W}(t)=\hat{w}\Big]\Big\}.\] (A.2)

Rearranging equation (A.2), we obtain

\[\inf_{\boldsymbol{p}}\Big\{\mathbb{E}_{\boldsymbol{p}}\Big[\Big(dV(t,w,\hat{w},\hat{\varrho})+\int_{t}^{t+\Delta t}\big(W(s)-e^{\beta s}\hat{W}(s)\big)^{2}ds\Big)\Big|W(t)=w,\hat{W}(t)=\hat{w}\Big]\Big\}=0.\] (A.3)

Then, applying Itô's lemma with jumps (Cont et al., 2011), substituting the \(dW\) and \(d\hat{W}\) terms with (2.16), and taking limits as \(\Delta t\downarrow 0\), we obtain (2.17). The above results merely serve as an intuitive guide to obtain (2.17). The formal proof of (2.17) proceeds by using a suitably smooth test function, see for example (Oksendal and Sulem, 2007).
### Proof of results for CD-optimal control

In Section 2.4, we emphasized the dependence of \(B\) and \(D\) (defined in (2.23) and (2.22)) on parameters \(\beta\) and \(c\) for understanding the optimal control function. As \(\beta\) and \(c\) are fixed parameters, in this proof, we omit the dependence of \(B\) and \(D\) on them for notational simplicity.

The quadratic source term \(\left(w-e^{\beta t}\hat{w}\right)^{2}\) in (2.17) suggests the following _ansatz_ for the value function \(V\) of the form

\[V(t,w,\hat{w})=A(t)w^{2}+B(t)w+C(t)+\hat{A}(t)\hat{w}^{2}+\hat{B}(t)\hat{w}+D(t)w\hat{w},\] (A.4)

where \(A,B,C,\hat{A},\hat{B},D\) are unknown deterministic functions of time \(t\). If (A.4) is correct, then the pointwise infimum in (2.17) is attained by \(p^{*}\) satisfying the relationship

\[\left(w\cdot\frac{\partial^{2}V}{\partial w^{2}}\right)\cdot p^{*}=-\frac{1}{\gamma}\Bigg(\big(\mu_{1}-\mu_{2}\big)\cdot\frac{\partial V}{\partial w}+\big(\hat{\varrho}\gamma+\theta\big)\cdot\hat{w}\cdot\frac{\partial^{2}V}{\partial w\,\partial\hat{w}}+\theta\cdot w\cdot\frac{\partial^{2}V}{\partial w^{2}}\Bigg),\] (A.5)

assuming \(A(t)>0\). Here \(\gamma\) and \(\theta\) are defined in (2.19). (A.4) implies that the relevant partial derivatives of \(V\) are of the form

\[\frac{\partial^{2}V}{\partial w^{2}}=2A(t),\quad\frac{\partial V}{\partial w}=2A(t)w+B(t)+D(t)\hat{w},\quad\frac{\partial^{2}V}{\partial w\,\partial\hat{w}}=D(t).\] (A.6)

Substituting (A.6) into (A.5), the optimal control \(p^{*}\) obtained is in the form of (2.20), where \(h\) and \(g\) are given by (2.21). Then, it only remains to determine the functions \(A,B,D\). Substituting (A.5) into PIDE (2.17), we can obtain the following ordinary differential equations (ODEs) for \(A,B,D\):

\[\left\{\begin{array}{l}\frac{dA(t)}{dt}=-\Big(2\mu_{2}-\eta\Big)A(t)-1,\qquad A(T)=0,\\ \frac{dD(t)}{dt}=-\Big(2\mu_{2}-\eta\Big)D(t)+2e^{\beta t},\qquad D(T)=0,\\ \frac{dB(t)}{dt}=-(\mu_{2}-\phi)B(t)-2cA(t)-cD(t),\qquad B(T)=0.\end{array}\right.\] (A.7)

Solving the ODE system gives us the \(A,B,D\) defined in (2.22) and (2.23). We also note that \(A(t)>0\), thus completing the proof.

### Proof of Corollary (2.1) and (2.2)

van Staden et al. (2022) derive the CD-optimal control under the assumption that the stock price follows the double-exponential jump-diffusion model and the bond is risk-free with the bond price \(B(t)\) following

\[\frac{dB(t)}{B(t)}=r.\] (A.8)

Under such assumptions, van Staden et al. (2022) show that the CD-optimal control can be expressed in a similar form as in (2.20) with \(g\) and \(h\) functions. The \(g\) and \(h\) functions satisfy the same properties as in Corollary (2.1) and (2.2). Despite the fact that we assume the bond price follows a jump-diffusion model, the proof of Corollary (2.1) and (2.2) follows similar steps as the proof in van Staden et al. (2022).

## Appendix B Technical details of LFNN model

### Proof of Theorem 2.2

**Theorem 2.2**.: (Unconstrained feasibility domain) The feasibility domain \(\mathcal{Z}_{\mathbf{\theta}}\) defined in (2.41) associated with the LFNN model (2.43) is \(\mathbb{R}^{N_{\mathbf{\theta}}}\).

Proof.: First, it is obvious that \(\mathcal{Z}_{\mathbf{\theta}}\subseteq\mathbb{R}^{N_{\mathbf{\theta}}}\) by definition of (2.41). Next, we show that \(\mathbb{R}^{N_{\mathbf{\theta}}}\subseteq\mathcal{Z}_{\mathbf{\theta}}\).
To prove this, we need to show that for any \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\),

\[f(x;\mathbf{\theta})=p\in\left\{\begin{aligned} &\mathcal{Z}_{1},\;\text{if}\;x\in\mathcal{X}_{1},\\ &\mathcal{Z}_{2},\;\text{if}\;x\in\mathcal{X}_{2},\end{aligned}\right.\quad\forall x\in\mathcal{X}.\] (B.1)

Here \(f\) is the LFNN function defined in (2.43), \(p=(p_{1},\cdots,p_{N_{a}})^{\top}\in\mathbb{R}^{N_{a}}\) is the output of the LFNN model that represents the wealth allocation to the assets, \(\mathcal{Z}\) is the feasibility domain defined in (2.37), and \(x=\big(t,W(t),\hat{W}(t)\big)^{\top}\in\mathcal{X}\) is a feature vector. To prove (B.1), we verify the two scenarios (\(x\in\mathcal{X}_{1}\) and \(x\in\mathcal{X}_{2}\)) separately.

When \(x\in\mathcal{X}_{2}\), it is easily verifiable that \(p=\mathbf{e}_{N_{l}+1}\) via the definition of the leverage-feasible activation function (2.44). Next, we verify that when \(x\in\mathcal{X}_{1}\), \(p\in\mathcal{Z}_{1}\). To prove this, we need to show that the constraints (2.29)-(2.32) are satisfied when \(x\in\mathcal{X}_{1}\). By definition of (2.44), it is obvious that the long-only constraint (2.29) holds for long-only assets. It is also easy to verify that the summation constraint (2.30) is satisfied. This follows from the fact that

\[\sum_{i=1}^{N_{l}}p_{i}=l,\quad\text{and}\quad\sum_{i=N_{l}+1}^{N_{a}}p_{i}=1-l.\] (B.2)

The maximum leverage constraint (2.31) is also satisfied, as

\[\sum_{i=1}^{N_{l}}p_{i}=l=p_{max}\cdot\text{Sigmoid}(o_{N_{a}+1})\leq p_{max}.\] (B.3)

Finally, the simultaneous shorting constraint (2.5) is satisfied. To see this, we examine the scenario when leverage occurs, i.e., \(\sum_{i=1}^{N_{l}}p_{i}=l>1\). Then, by definition from (2.44), we know

\[p_{i}=(1-l)\cdot\frac{e^{o_{i}}}{\sum_{k=N_{l}+1}^{N_{a}}e^{o_{k}}}\leq 0,\;\forall i\in\{N_{l}+1,\cdots,N_{a}\}.\] (B.4)

From the same expression, it is also clear that if \(l\leq 1\), then \(p_{i}\geq 0,\forall i\). Therefore, for any \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\), (B.1) is satisfied. This implies \(\mathbb{R}^{N_{\mathbf{\theta}}}\subseteq\mathcal{Z}_{\mathbf{\theta}}\).

### Proof of Lemma 2.2 and Theorem 2.3

**Lemma 2.2**.: (Structure of feasible control) Any feasible control function \(p:\mathcal{X}\mapsto\mathcal{Z}\), where \(\mathcal{Z}\) is defined in (2.38), has the function decomposition

\[p(x)=\varphi(\omega(x),x),\] (B.5)

where \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathcal{Z}\) is defined in (2.46), i.e.

\[\varphi(z,x)=\Big(z_{N_{a}+1}\cdot(z_{1},\cdots,z_{N_{l}}),(1-z_{N_{a}+1})\cdot(z_{N_{l}+1},\cdots,z_{N_{a}})\Big)^{\top}\cdot\mathbf{1}_{x\in\mathcal{X}_{1}}+\mathbf{e}_{N_{l}+1}\cdot\mathbf{1}_{x\in\mathcal{X}_{2}},\] (B.6)

and \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\). Here

\[\tilde{\mathcal{Z}}=\bigg\{z\in\mathbb{R}^{N_{a}+1}:\sum_{i=1}^{N_{l}}z_{i}=1,\;\sum_{i=N_{l}+1}^{N_{a}}z_{i}=1,\;z_{N_{a}+1}\leq p_{max},\;z_{i}\geq 0,\;\forall i\bigg\}.\] (B.7)

Proof.: We prove the lemma by existence.
Define \(\omega\) as

\[\omega(x)=\left\{\begin{array}{ll}\phi\big(p(x)\big),&\text{if }x\in\mathcal{X}_{1},\\ \bar{z},&\text{if }x\in\mathcal{X}_{2},\end{array}\right.\] (B.8)

where \(\bar{z}\) is an arbitrary fixed element of \(\tilde{\mathcal{Z}}\) (recall from (B.6) that \(\varphi(\cdot,x)\) is constant on \(\mathcal{X}_{2}\)), and where, for \(\forall z=(z_{1},\cdots,z_{N_{a}})^{\top}\in\mathcal{Z}_{1}\), \(y=\phi(z)\in\mathbb{R}^{N_{a}+1}\) is as defined below

\[\phi(z)\equiv y=\left\{\begin{array}{ll}y_{i}=\frac{z_{i}}{\sum_{j=1}^{N_{l}}z_{j}},\;i\in\{1,\cdots,N_{l}\},\\ y_{i}=\frac{z_{i}}{1-\sum_{j=1}^{N_{l}}z_{j}},\;i\in\{N_{l}+1,\cdots,N_{a}\},&\text{if }\sum_{i=1}^{N_{l}}z_{i}\in(0,1)\cup(1,p_{max}],\\ y_{N_{a}+1}=\sum_{j=1}^{N_{l}}z_{j},\\ \\ y_{i}=z_{i},\;i\in\{1,\cdots,N_{l}\},\\ y_{i}=1/(N_{a}-N_{l}),\;i\in\{N_{l}+1,\cdots,N_{a}\},&\text{if }\sum_{i=1}^{N_{l}}z_{i}=1,\\ y_{N_{a}+1}=1,\\ \\ y_{i}=1/N_{l},\;i\in\{1,\cdots,N_{l}\},\\ y_{i}=z_{i},\;i\in\{N_{l}+1,\cdots,N_{a}\},&\text{if }\sum_{i=1}^{N_{l}}z_{i}=0,\\ y_{N_{a}+1}=0.\end{array}\right.\] (B.9)

It can then be easily verified that \(\omega:\mathcal{X}\mapsto\tilde{\mathcal{Z}}\), and that \(p(x)=\varphi(\omega(x),x)\).

**Lemma B.1**.: (Approximation of controls with a specific structure) _Assume a control function \(p:\mathcal{X}\mapsto\mathcal{Z}\) has the structure_

\[p(x)=\Phi(\Omega(x),x),\;x\in\mathcal{X},\] (B.10)

_where \(\mathcal{X}\) is compact, \(\Omega\in C(\mathcal{X},\mathcal{Y})\), i.e. \(\Omega\) is a continuous mapping from \(\mathcal{X}\) to \(\mathcal{Y}\), and \(\Phi:\mathcal{Y}\times\mathcal{X}\mapsto\mathcal{Z}\) is Lipschitz continuous on \(\mathcal{Y}\times\mathcal{X}_{i}\), \(\forall i=1,\cdots,n\), where \(\{\mathcal{X}_{i},i=1,\cdots,n\}\) is a partition of \(\mathcal{X}\), i.e._

\[\left\{\begin{array}{ll}\bigcup_{i=1}^{n}\mathcal{X}_{i}=\mathcal{X},\\ \mathcal{X}_{i}\bigcap\mathcal{X}_{j}=\varnothing,\;\forall 1\leq i<j\leq n.\end{array}\right.\] (B.11)

_Suppose \(\exists m\in\mathbb{N}\) and \(\Upsilon:\mathbb{R}^{m}\mapsto\mathcal{Y}\) such that_

1. _\(\Upsilon\) has a continuous right inverse on \(Im(\Upsilon)\);_
2. _\(Im(\Upsilon)\) is dense in \(\mathcal{Y}\);_
3. _\(\partial Im(\Upsilon)\) is collared._

_Then, \(\forall\epsilon>0\), there exists a choice of \(N_{h}\) and \(\boldsymbol{\theta}\) such that the fully connected feedforward neural network function \(\tilde{f}(\cdot;\boldsymbol{\theta})\) defined in (2.42) satisfies_

\[\sup_{x\in\mathcal{X}}\|\Phi\Big(\Upsilon\big(\tilde{f}(x;\boldsymbol{\theta})\big),x\Big)-p(x)\|<\epsilon.\] (B.12)

Proof.: Let

\[L_{\Phi}=\max_{1\leq i\leq n}L_{i},\] (B.13)

where \(L_{i}\) is the Lipschitz constant for \(\Phi\) on \(\mathcal{Y}\times\mathcal{X}_{i}\).
Since \(\Omega\in C(\mathcal{X},\mathcal{Y})\) and \(\mathcal{X}\) is compact, following Kratsios and Bilokopytov (2020), we know that \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\) such that the corresponding FNN \(\tilde{f}(\cdot;\mathbf{\theta}):\mathcal{X}\mapsto\mathbb{R}^{m}\) defined in (2.42) satisfies

\[\sup_{x\in\mathcal{X}}\|\Upsilon\big(\tilde{f}(x;\mathbf{\theta})\big)-\Omega(x)\|<\epsilon/L_{\Phi}.\] (B.14)

Then

\[\sup_{x\in\mathcal{X}}\|\Phi\Big(\Upsilon\big(\tilde{f}(x;\mathbf{\theta})\big),x\Big)-p(x)\| =\sup_{1\leq i\leq n}\sup_{x\in\mathcal{X}_{i}}\|\Phi\Big(\Upsilon\big(\tilde{f}(x;\mathbf{\theta})\big),x\Big)-\Phi\Big(\Omega(x),x\Big)\|\] (B.15)
\[\leq\sup_{1\leq i\leq n}\sup_{x\in\mathcal{X}_{i}}L_{i}\cdot\Big(\|\Upsilon\big(\tilde{f}(x;\mathbf{\theta})\big)-\Omega(x)\|\Big)\] (B.16)
\[<\sup_{1\leq i\leq n}\frac{L_{i}}{L_{\Phi}}\epsilon\] (B.17)
\[\leq\epsilon.\] (B.18)

**Remark B.1**.: (Remark on Lemma B.1) Normally, the universal approximation theorem only applies to the approximation of continuous functions defined on a compact set (Hornik, 1991). Lemma B.1 extends the universal approximation theorem to a broader class of functions that have the structure of (B.10). Furthermore, Lemma B.1 provides guidance on constructing neural network functions that handle stochastic constraints on controls, which are usually difficult to address in stochastic optimal control problems. Consider the following example: the control \(p:\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\) has stochastic constraints such that \(p(\mathbf{x})\in[a(\mathbf{x}),b(\mathbf{x})]\), where \(a,b:\mathcal{X}\mapsto\mathbb{R}^{N_{a}}\) are deterministic functions. This is a common setting in portfolio optimization problems in which allocation fractions to specific assets are subject to thresholds tied to the performance of the portfolio. With Lemma B.1, with a bit of engineering, one can easily construct a \(\Phi\) so that the corresponding neural network satisfies the constraints naturally, and be guaranteed that such a neural network can approximate the control well.

We then proceed to prove Theorem 2.3.

**Theorem 2.3**.: (Approximation of optimal control) Following Assumption 2.7, \(\forall\epsilon>0\), there exist \(N_{h}\in\mathbb{N}\) and \(\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}\) such that the corresponding LFNN model \(f(\cdot;\mathbf{\theta})\) described in (2.43) satisfies the following:

\[\sup_{x\in\mathcal{X}}\|f(x;\mathbf{\theta})-p^{*}(x)\|<\epsilon.\] (B.19)

Proof.: From (2.43) and Lemma 2.1, we know that

\[f(x;\mathbf{\theta})=\psi\big(\tilde{f}(x;\mathbf{\theta}),x\big)=\varphi\Big(\zeta\big(\tilde{f}(x;\mathbf{\theta})\big),x\Big),\] (B.20)

where \(\tilde{f}\) is the FNN defined in (2.42) and \(\varphi:\tilde{\mathcal{Z}}\times\mathcal{X}\mapsto\mathbb{R}^{N_{a}},\zeta:\mathbb{R}^{N_{a}+1}\mapsto\tilde{\mathcal{Z}}\) are defined in (2.46). It can be easily verified that \(\zeta\) satisfies the following:

1. \(\zeta\) has a continuous right inverse, e.g.

\[\zeta^{-1}(z):Im(\zeta)\mapsto\mathbb{R}^{N_{a}+1},\quad\zeta^{-1}(z)=\Bigg(\log(z_{1}),\cdots,\log(z_{N_{a}}),\sigma^{-1}(z_{N_{a}+1}/p_{max})\Bigg)^{\top},\] (B.21)

where \(\sigma^{-1}\) is the inverse function of the sigmoid function.

2. \(Im(\zeta)\) is dense in \(\tilde{\mathcal{Z}}\). This is because \(\overline{Im(\zeta)}\), the closure of \(Im(\zeta)\), is \(\tilde{\mathcal{Z}}\).

3. \(\partial Im(\zeta)\) is collared (Brown, 1962; Connelly, 1971; Baillif, 2022).
Furthermore, consider the partition of \(\mathcal{X}\), \(\big\{\mathcal{X}_{1},\mathcal{X}_{2}\big\}\), which is defined in Definition 2.1. It is easily verifiable that \(\varphi\) is Lipschitz continuous on \(\tilde{\mathcal{Z}}\times\mathcal{X}_{1}\) and \(\tilde{\mathcal{Z}}\times\mathcal{X}_{2}\) respectively. Finally, according to Assumption 2.7, \(p^{*}(x)=\varphi\big(\omega^{*}(x),x\big)\), where \(\omega^{*}\in C(\mathcal{X},\tilde{\mathcal{Z}})\). Applying Lemma B.1 with \(\mathcal{Y}=\tilde{\mathcal{Z}}\), \(\Omega(\cdot)=\omega^{*}(\cdot)\), \(\Upsilon(\cdot)=\zeta(\cdot)\), and \(\Phi(\cdot,\cdot)=\varphi(\cdot,\cdot)\), we know that there exist \(N_{h}\in\mathbb{N}\) and \(\boldsymbol{\theta}\in\mathbb{R}^{N_{\boldsymbol{\theta}}}\) such that the corresponding LFNN model \(f(x;\boldsymbol{\theta})=\varphi\Big(\zeta\big(\tilde{f}(x;\boldsymbol{\theta})\big),x\Big)\) satisfies the following:

\[\sup_{x\in\mathcal{X}}\|f(x;\boldsymbol{\theta})-p^{*}(x)\|<\epsilon.\] (B.22)

## Appendix C Comparing LFNN with closed-form solution

In this section, we compare the performance of the strategy following the learned shallow LFNN model (which we refer to as the "neural network strategy" from now on) with the closed-form solution (2.20), and provide empirical validation of the LFNN approach.

### Approximate form under realistic assumptions

We first note that the closed-form solution \(p^{*}\) defined in (2.20) is obtained under several unrealistic assumptions, namely continuous rebalancing, unlimited leverage, and continued trading in insolvency.19 In practice, investors face constraints such as discrete rebalancing, limited leverage, and no trading when insolvent. For a meaningful comparison, instead of comparing the neural network strategy with the closed-form solution \(p^{*}\) directly, we compare the neural network strategy with an easily obtainable approximation to the closed-form solution which satisfies realistic constraints.

Footnote 19: Note that we consider a two-asset scenario here, thus the scalar \(p^{*}\in\mathbb{R}\) (allocation fraction for the stock index) fully describes the allocation strategy \(\boldsymbol{p}^{*}\), since \(\boldsymbol{p}^{*}=(p^{*},1-p^{*})^{\top}\).

In particular, we consider an equally-spaced discrete rebalancing schedule \(\mathcal{T}_{\Delta t}\) defined as

\[\mathcal{T}_{\Delta t}=\Big\{t_{i}:\;i=0,\cdots,N\Big\},\] (C.1)

where \(t_{i}=i\Delta t\), and \(\Delta t=T/N\). Then, the _clipped form_ \(\bar{p}_{\Delta t}:\mathcal{T}_{\Delta t}\times\mathbb{R}^{3}\mapsto\mathbb{R}\) is defined as

\[\text{(Clipped form)}:\quad\bar{p}_{\Delta t}(t_{i},\bar{W}_{\Delta t}(t_{i}),\hat{W}_{\Delta t}(t_{i}),\hat{\varrho})=\min\Bigg(\max\Big(p^{*}(t_{i},\bar{W}_{\Delta t}(t_{i}),\hat{W}_{\Delta t}(t_{i}),\hat{\varrho}),p_{min}\Big),p_{max}\Bigg).\] (C.2)

Here \([p_{min},p_{max}]\), where \(p_{min}=0\) and \(p_{max}\geq 1\), is the allowed allocation range, \(\bar{W}_{\Delta t}(t_{i})\) is the wealth of the active portfolio at \(t_{i}\) following \(\bar{p}_{\Delta t}\) from \(t_{0}\) to \(t_{i}\), and \(\hat{W}_{\Delta t}(t_{i})\) is the wealth of the benchmark portfolio at \(t_{i}\) following the fixed-mix strategy described by the constant allocation fraction \(\hat{\varrho}\), but only rebalanced discretely according to \(\mathcal{T}_{\Delta t}\).
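In implementation terms, (C.2) is simply a projection of the closed-form control onto the interval \([p_{min},p_{max}]\) at each rebalancing date. A minimal sketch, assuming some callable `p_star` implementing (2.20):

```
import numpy as np

def clipped_control(p_star, t_i, W, W_hat, rho_hat, p_min=0.0, p_max=1.3):
    """Clipped form (C.2): evaluate the closed-form control p* at the
    discrete rebalancing time t_i, then clip it into [p_min, p_max]."""
    return np.clip(p_star(t_i, W, W_hat, rho_hat), p_min, p_max)
```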
Clearly, the allocation strategy from \(\bar{p}_{\Delta t}\) follows the discrete schedule of \(\mathcal{T}_{\Delta t}\), and satisfies the leverage constraint that \(\bar{p}_{\Delta t}\in[p_{min},p_{max}]\). \(\bar{p}_{\Delta t}\) approaches the closed-form solution \(p^{*}\) as \(\Delta t\downarrow 0\), \(p_{min}\downarrow-\infty\) and \(p_{max}\uparrow\infty\). We note that a similar clipping idea is explored in Vigna (2014) in the context of closed-form solutions for multi-period mean-variance asset allocation. However, it should be emphasized that the clipped form \(\bar{p}_{\Delta t}\) with finite \((p_{min},p_{max})\) is a feasible, but in general sub-optimal, control of the leverage-constrained CD problem (2.12).

We then address the assumption that trading continues when insolvent, i.e., when the wealth of the portfolio reaches zero. While necessary for the mathematical derivation of the closed-form solution, we acknowledge that this is by no means reasonable for practitioners. In the continuous rebalancing case (no jumps), if the control (allocation) is bounded, it is shown that the wealth of the portfolio can never become negative (Wang and Forsyth, 2012). However, with discrete rebalancing, even with a bounded control, as long as the upper bound \(p_{max}>1\), it is theoretically possible that the portfolio value becomes negative. We address this assumption by applying an overlay on strategies so that in the case of insolvency, we assume the manager liquidates the long-only positions and allocates the debt (negative wealth) to a shortable (bond) asset (consistent with Assumption 2.6) to allow the outstanding debt to accumulate until the end of the investment horizon. Going forward, when we refer to any strategy (e.g. the neural network strategy, the clipped form), we mean the strategy with this overlay applied. We remark that in practice, this overlay has little effect. In numerical experiments with 10,000 samples of observed wealth trajectories (based on calibrated jump-diffusion models or bootstrap resampled data paths), we do not observe a single wealth trajectory that ever hits negative wealth for any strategy (e.g., the neural network strategy, the clipped form, etc.).

In summary, the clipped form satisfies the realistic constraints and is a comparable benchmark for the neural network strategy. In the following section, we numerically compare the performance of the clipped form, the neural network strategy, and the closed-form solution.

### Comparison: LFNN strategy vs clipped-form solution

To assess and compare the performance of the neural network strategy and the clipped form, we assume the investment scenario described in Table C.1. We assume the stock index and the bond index prices follow the double exponential jump model (2.15), see e.g., (Kou, 2002; Kou and Wang, 2004), i.e., for the jump variable \(\xi_{i}\), \(y_{i}=\log(\xi_{i})\) follows the double exponential distribution with density function \(g_{i}(y_{i})\) defined as follows:

\[g_{i}(y_{i})=\nu_{i}\iota_{i}e^{-\iota_{i}y_{i}}\mathbf{1}_{y_{i}\geq 0}+(1-\nu_{i})\varsigma_{i}e^{\varsigma_{i}y_{i}}\mathbf{1}_{y_{i}<0},\;i=1,2,\] (C.3)

where \(\nu_{i}\) is the probability of an upward jump, and \(\iota_{i}\) and \(\varsigma_{i}\) are parameters that describe the upward and downward jumps respectively.
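Sampling from (C.3) is straightforward because the density is a two-sided exponential mixture: with probability \(\nu_{i}\), \(y_{i}\) is exponential with rate \(\iota_{i}\) (upward jump); otherwise \(-y_{i}\) is exponential with rate \(\varsigma_{i}\) (downward jump). A minimal sketch with illustrative parameter values (not the calibrated values of Appendix G):

```
import numpy as np

rng = np.random.default_rng(0)

def sample_log_jumps(n, nu, iota, varsigma):
    """Draw n samples of y = log(xi) from the double exponential
    density (C.3)."""
    up = rng.random(n) < nu                            # upward jump with prob nu
    return np.where(up,
                    rng.exponential(1.0 / iota, n),    # y >= 0, rate iota
                    -rng.exponential(1.0 / varsigma, n))  # y < 0, rate varsigma

y = sample_log_jumps(100_000, nu=0.3, iota=8.0, varsigma=6.0)
print(y.mean())   # close to nu/iota - (1 - nu)/varsigma
```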
The double exponential jump-diffusion model allows the flexibility of modeling asymmetric upward and downward jumps in asset prices, which seems an appropriate assumption for inflation regimes.20

Footnote 20: We remind the reader that the closed-form solution is derived under the jump-diffusion model.

Using the threshold technique (Mancini, 2009; Cont et al., 2011; Dang and Forsyth, 2016), we calibrate the double exponential jump-diffusion models to the historical high-inflation periods described in Section 3.1. The calibrated parameters can be found in Appendix G. Then, we construct a training data set \(\mathbf{Y}\) and a testing data set \(\mathbf{Y}^{test}\) by sampling the calibrated model, each with 10,000 samples. The neural network strategy follows the LFNN model learned from \(\mathbf{Y}\). We then evaluate the performance of the neural network strategy and the clipped form (C.2) on the testing data set \(\mathbf{Y}^{test}\). Specifically, we compare the value of the CD objective function (2.12) for the neural network strategy and the clipped form on \(\mathbf{Y}^{test}\). In particular, this training/testing process is repeated for various rebalancing frequencies from monthly to annually, as described in Table C.1.

\begin{table} \begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Assets & CRSP cap-weighted index (real)/30-day T-bill (U.S.) (real) \\ Index samples & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth/annual cash injection & 100/10 \\ Rebalancing frequency & Monthly, quarterly, semi-annually, annually \\ Maximum leverage & 1.3 \\ Benchmark equity percentage & 0.7 \\ Outperformance target rate \(\beta\) & 1\% (100 bps) \\ \hline \hline \end{tabular} \end{table} Table C.1: Investment scenario.

In Table C.2, we can see that the neural network strategy consistently outperforms the clipped form in terms of the objective function value for all rebalancing frequencies. From Table C.2 we can also see that the objective function values of both the neural network strategy and the clipped form converge at roughly a first-order rate as \(\Delta t\downarrow 0\). Assuming this to be true, we extrapolate the solution to \(\Delta t=0\) using Richardson extrapolation. These extrapolated values are estimates of the exact value of the continuous-time CD objective function (2.11) for the clipped form and the neural network strategy. We can see that the neural network strategy still outperforms the clipped form in terms of the extrapolated objective function value. We can also see that the extrapolated neural network objective function value is lower than the (suboptimal) clipped-form extrapolated value, but, of course, larger than that of the unconstrained closed-form solution.

Finally, we compare the neural network allocation strategy with the clipped-form strategy. Specifically, in Figure C.1, we consider the case of monthly rebalancing and present the scatter plots of the allocation fraction in the stock index with respect to time \(t\) and the ratio between the wealth of the active portfolio \(W(t)\) and the elevated target \(e^{\beta t}\hat{W}(t)\). For simplicity, we call this ratio the "tracking ratio".
We plot the 3-tuple \(\left(\frac{W(t)}{e^{\beta t}\hat{W}(t)},t,p_{1}(W(t),\hat{W}(t),t)\right)\) (obtained from the evaluation of the strategies on samples from \(\mathbf{Y}^{test}\)) by using time \(t\) as the x-axis, the tracking ratio \(\frac{W(t)}{e^{\beta t}\hat{W}(t)}\) as the y-axis, and the values of the corresponding allocation fraction to the cap-weighted index \(p_{1}(W(t),\hat{W}(t),t)\) to color the scattered dots on the plot. A darker shade indicates a higher allocation fraction. As we can see from Figure C.1, the stock allocation fraction of the neural network strategy behaves similarly to the stock allocation fraction from the clipped form. Both strategies invest more wealth in the stock when the tracking ratio is lower, which is consistent with the insights we obtained in Section 2.4.1. In addition, the transition patterns of the allocation fractions of the two strategies are also highly similar. One can almost draw an imaginary horizontal dividing line around \(\frac{W(t)}{e^{\beta t}\hat{W}(t)}=0.9\) that separates high stock allocation and low stock allocation for both strategies.

\begin{table} \begin{tabular}{l c c c c c} \hline \multicolumn{6}{c}{Closed-form solution objective function value: 418} \\ \hline Strategy & \(\Delta t=1\) & \(\Delta t=1/2\) & \(\Delta t=1/4\) & \(\Delta t=1/12\) & \(\Delta t=0\) \\ \hline Clipped form & 545 & 504 & 479 & 467 & 461 (extrapolated) \\ \hline Neural network & 537 & 498 & 476 & 464 & 458 (extrapolated) \\ \hline \end{tabular} \end{table} Table C.2: CD objective function values. Results shown are evaluated on \(\mathbf{Y}^{test}\); the lower the better.

We remark that a common criticism towards the use of neural networks concerns their lack of interpretability compared to more interpretable counterparts such as regression models (Rudin, 2019). In this section, we see that the neural network strategy closely resembles the closed-form solution for the CD objective. The closed-form solution, in turn, complements the neural network model and offers an alternative way of interpreting results obtained from the neural network.

## Appendix D Moving-window inflation filter

### Filtering algorithm

Algorithm D.1 presents the pseudocode for the moving-window filtering algorithm.

```
Data:   CPI[i]; i = 1, ..., N   /* CPI index                                */
        Cutoff                  /* high-inflation cutoff (annualized)       */
        Delta_t                 /* CPI index time interval                  */
        K                       /* smoothing window size                    */
Result: Flag[i]; i = 1, ..., N  /* = 1 high-inflation month; = 0 otherwise  */

/* initialization */
Flag[i] = 0; i = 1, ..., N;
for ( i = 1, ..., N - K ) {
    if log( CPI[i+K] / CPI[i] ) / (K * Delta_t) > Cutoff then
        for ( j = 0, ..., K ) {
            Flag[i+j] = 1;
        }
    end if
}
```
**Algorithm D.1** Pseudocode for the moving-window inflation filter.

### Effect of moving window size

Figure D.1 shows the filtering results for windows of size 12, 60, and 120 months. We can see that the five-year window produces two obvious inflation regimes: 1940:8-1951:7 and 1968:9-1985:10, which correspond to well-known market shocks (i.e. the second world war, and price controls; the oil price shocks and stagflation of the seventies). Increasing the window size to 10 years results in similar-looking plots as the five-year window size, but the number of months in each window increases, and the average inflation rate is lower.
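For completeness, a runnable Python counterpart of Algorithm D.1 (our own sketch; the array names are illustrative):

```
import numpy as np

def flag_high_inflation(cpi, cutoff=0.05, window=60, dt=1.0 / 12.0):
    """Moving-window inflation filter (Algorithm D.1): flag every month
    inside any window whose annualized cumulative CPI log return exceeds
    `cutoff`. `window` is the smoothing window size in months."""
    n = len(cpi)
    flag = np.zeros(n, dtype=int)
    for i in range(n - window):
        ann_log_ret = np.log(cpi[i + window] / cpi[i]) / (window * dt)
        if ann_log_ret > cutoff:
            flag[i:i + window + 1] = 1      # flag all months in the window
    return flag
```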
### Asset performance during high inflation

To gain some intuition about the behavior of asset returns during the inflation periods, we assume that each real (adjusted by the CPI index) index follows geometric Brownian motion (GBM). For example, given an index with value \(S\), then

\[dS = \mu S\ dt+\sigma S\ dZ\] (D.1)

where \(dZ\) is the increment of a Wiener process. We use maximum likelihood estimation to fit the drift rate \(\mu\) (expected arithmetic return) and volatility \(\sigma\) in each regime, for each index, as shown in Table D.1. We also show a series constructed by converting the indexes in each regime to returns, concatenating the two return series, and converting the concatenated return series back to an index. This concatenated index does not, of course, correspond to an actual historical index, but is a pseudo-index constructed from high-inflation regimes. It amounts to a worst-case sequence of returns, in terms of the duration of historical inflation periods, that could plausibly be expected during a long period of high inflation.

\begin{table} \begin{tabular}{l r r r} \hline Index & \(\mu\) & \(\sigma\) & \(\mu-\sigma^{2}/2\) \\ \hline \multicolumn{4}{c}{1940:8-1951:7} \\ \hline CapWt & 0.079 & 0.140 &.069 \\ EqWt & 0.145 & 0.190 &.127 \\ 10 Year Treasury & -0.035 & 0.036 & -.036 \\ 30-day T-bill & -0.050 & 0.029 & -.050 \\ \hline \multicolumn{4}{c}{1968:9-1985:10} \\ \hline CapWt & 0.026 & 0.164 &.013 \\ EqWt & 0.065 & 0.220 &.041 \\ 10 Year Treasury & 0.011 & 0.093 &.007 \\ 30-day T-bill & 0.009 & 0.012 &.009 \\ \hline \multicolumn{4}{c}{Concatenated: 1940:8-1951:7 and 1968:9-1985:10} \\ \hline CapWt & 0.049 & 0.156 &.038 \\ EqWt & 0.098 & 0.209 &.076 \\ 10 Year Treasury & -0.008 & 0.076 & -.011 \\ 30-day T-bill & -0.014 & 0.022 & -.014 \\ \hline \end{tabular} \end{table} Table D.1: GBM parameters for the indexes shown. All indexes are real (deflated). \(\mu\) is the expected annualized arithmetic return. \(\sigma\) is the annualized volatility. (\(\mu-\sigma^{2}/2\)) is the annualized mean geometric return.

It is striking that in each historical inflation regime (i.e., 1940:8-1951:7 and 1968:9-1985:10) in Table D.1, the drift rate \(\mu\) for the equal-weighted index is much larger than the drift rate for the cap-weighted index. We can observe that the mean geometric return for the cap-weighted index, in the period 1968:9-1985:10, was only about one percent per year. It is also noticeable that bonds performed very poorly in the period 1940:8-1951:7. As well, during the period 1968:9-1985:10, there was essentially no term premium for 10-year treasuries, compared with 30-day T-bills. In addition, the 10-year treasury index had much higher volatility compared to the 30-day T-bill index. Looking at the concatenated series, it appears that 30-day T-bills are arguably the better defensive asset here, since the volatility of this index is quite low (but with a negative (real) drift rate).
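For GBM, the MLE reduces to sample moments of log returns: with log returns \(r_{i}=\log(S_{i+1}/S_{i})\) observed at interval \(\Delta t\), \(\hat{\sigma}^{2}=\widehat{\text{Var}}(r)/\Delta t\) and \(\hat{\mu}=\bar{r}/\Delta t+\hat{\sigma}^{2}/2\), so that \(\hat{\mu}-\hat{\sigma}^{2}/2\) is the annualized mean geometric return reported in Table D.1. A sketch (the function name is ours):

```python
import numpy as np

def fit_gbm(index_levels: np.ndarray, dt: float) -> tuple[float, float]:
    """MLE of GBM drift (arithmetic) and volatility from index levels."""
    r = np.diff(np.log(index_levels))       # log returns over interval dt
    sigma = r.std(ddof=0) / np.sqrt(dt)     # annualized volatility
    mu = r.mean() / dt + 0.5 * sigma**2     # annualized arithmetic drift
    return mu, sigma

# e.g., monthly real index levels: mu, sigma = fit_gbm(levels, dt=1.0 / 12.0)
```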
## Appendix E Bootstrap resampling

### Stationary block bootstrap algorithm

Algorithm E.1 presents the pseudocode for the stationary block bootstrap. See Ni et al. (2022) for more discussion.

```
/* initialization */
bootstrap_samples = [ ];
/* loop until the total number of required samples is reached */
while True do
    /* choose a random starting index in [1,...,N]; N is the index of the last historical sample */
    index = UniformRandom( 1, N );
    /* the actual blocksize follows a shifted geometric distribution with expected value exp_block_size */
    blocksize = GeometricRandom( 1/exp_block_size );
    for( i = 0; i < blocksize; i = i+1 ) {
        /* if the chosen block exceeds the range of the historical data array, do a circular bootstrap */
        if index + i > N then
            bootstrap_samples.append( historical_data[ index + i - N ] );
        else
            bootstrap_samples.append( historical_data[ index + i ] );
        end if
        if bootstrap_samples.len() == number_required then
            return bootstrap_samples;
        end if
    }
end while
```
**Algorithm E.1** Pseudocode for the stationary block bootstrap

### Effect of blocksize

As discussed, we will use bootstrap resampling (Politis and Romano, 1994; Politis and White, 2004; Patton et al., 2009; Dichtl et al., 2016; Anarkulova et al., 2022) to analyze the performance of using the equal-weighted index compared to the cap-weighted index during periods of high inflation (our concatenated series: 1940:8-1951:7, 1968:9-1985:10). First, we examine the effect of the expected blocksize parameter in the bootstrap resampling algorithm. We will use a paired sampling approach, where we simultaneously draw returns from the bond and stock indexes.21 The algorithm in Politis and White (2004) was developed for single-asset time series. It is therefore important to assess the effect of the blocksize on numerical results. In Table E.1, we examine the effect of different blocksizes on the statistics of stationary block bootstrap resampling.

Footnote 21: This preserves correlation effects.

Perhaps a more visual way of analyzing the effect of the expected blocksize is shown in Figure E.1, where we show the cumulative distribution function (CDF) of the final wealth after 10 years, for different blocksizes. We show the CDF since this gives us a visualization of the entire final wealth distribution, not just a few summary statistics. Since the data frequency is at one-month intervals, specifying an expected blocksize (the mean of the geometric distribution) of one month means that the blocksize is always a constant one month. This effectively means that we are assuming that the data is i.i.d. However, the one-month results are an outlier compared to the other choices of expected blocksize. There is hardly any difference between the CDFs for any choice of expected blocksize in the range of 3-24 months. In this article, we use an expected blocksize of 6 months.

### Bootstrapping from non-contiguous data segments

As discussed in Section 3.2, we have identified two historical inflation regimes: 1940:8-1951:7 and 1968:9-1985:10. As traditional bootstrap methods assume one contiguous segment of underlying data, the question naturally arises of how to bootstrap appropriately from two non-contiguous segments of data. In the main sections of the article, we first concatenate the two data segments, then treat the concatenated data samples as a complete segment and apply bootstrap methods to them.
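A minimal NumPy sketch of this concatenate-then-resample procedure (Algorithm E.1 with circular wrap-around; names are ours, and for paired sampling the same indices would be used for the stock and bond series):

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_block_bootstrap(returns: np.ndarray, n_required: int,
                               exp_block_size: float) -> np.ndarray:
    """Stationary block bootstrap with circular wrap-around.

    `returns` is the concatenated historical (monthly) return series.
    """
    n = len(returns)
    out: list[float] = []
    while len(out) < n_required:
        start = rng.integers(n)                     # random starting index
        size = rng.geometric(1.0 / exp_block_size)  # shifted geometric blocksize
        for i in range(size):
            out.append(returns[(start + i) % n])    # circular indexing
            if len(out) == n_required:
                break
    return np.asarray(out)

# One 10-year monthly path, expected blocksize 6 months:
# path = stationary_block_bootstrap(monthly_returns, n_required=120, exp_block_size=6)
```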
This method is in line with the work of Anarkulova et al. (2022), in which the authors concatenate stock returns from different countries and bootstrap from the concatenated series.

\begin{table} \begin{tabular}{l c c c c} \hline Expected blocksize (months) & Median\([W_{T}]\) & E\([W_{T}]\) & std\([W_{T}]\) & 5th Percentile \\ \hline 1 & 170.9 & 191.6 & 97.6 & 78.6 \\ 3 & 174.6 & 202.9 & 120.4 & 69.3 \\ 6 & 174.2 & 204.2 & 125.9 & 66.8 \\ 12 & 175.5 & 204.4 & 124.2 & 67.9 \\ 24 & 179.2 & 205.1 & 118.4 & 68.7 \\ \hline \end{tabular} \end{table} Table E.1: Effect of expected blocksize on the statistics of the final wealth \(W(T)\) at \(T=10\) years. Constant weight, scenario in Table F.1. Equity weight: 0.7, rebalanced monthly. Bond index: 30-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth 100. Bootstrap resampling, \(10,000\) resamples.

Figure E.1: Cumulative distribution function (CDF), final wealth \(W(T)\) at \(T=10\) years: the effect of expected blocksize. Constant weight, scenario in Table F.1. Equity weight: 0.7, rebalanced monthly. Bond index: 30-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth 100. Bootstrap resampling, \(10,000\) resamples.

A second intuitive bootstrap method would be to bootstrap randomly from each of the two segments. Briefly, each bootstrap resample consists of (i) selecting a random segment (with probability proportional to the length of the segment), (ii) selecting a random starting date in the selected segment, (iii) then selecting a block (of random size) of consecutive returns from this start date, (iv) in the event that the end of the data set in a segment is reached, using circular block bootstrap resampling within that segment, and (v) repeating this process until a sample of the total desired length is obtained; a sketch follows below.
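A sketch of this separate-segment variant (steps (i)-(v); names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_from_segments(segments: list[np.ndarray], n_required: int,
                            exp_block_size: float) -> np.ndarray:
    """Block-bootstrap from non-contiguous segments: pick a segment with
    probability proportional to its length, then resample a circular
    block within that segment (steps (i)-(v))."""
    lengths = np.array([len(seg) for seg in segments], dtype=float)
    out: list[float] = []
    while len(out) < n_required:
        seg = segments[rng.choice(len(segments), p=lengths / lengths.sum())]
        start = rng.integers(len(seg))
        size = rng.geometric(1.0 / exp_block_size)
        for i in range(size):
            out.append(seg[(start + i) % len(seg)])  # circular within segment
            if len(out) == n_required:
                break
    return np.asarray(out)
```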
We compare the bootstrapped data from concatenated segments and from separate segments by evaluating the performance of the \(70\%/30\%\) equal-weighted index/T-bill fixed-mix portfolio, using the investment scenario described in Table F.1. We can observe from Table E.2 that the strategy performance on the bootstrap resampled data from the two methods varies only slightly. This indicates that the two methods do not yield much difference for practical purposes. This is indeed expected: after all, the difference between the two methods only occurs when a random block crosses the edge of one of the segments, which happens with very low probability. Except for this low-probability situation, the two bootstrap methods are identical.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & Median[\(W_{T}\)] & E[\(W_{T}\)] & std[\(W_{T}\)] & 5th Percentile \\ \hline Bootstrap from concatenated segments & 174.2 & 204.2 & 125.9 & 66.8 \\ Bootstrap from separate segments & 176.9 & 208.0 & 132.4 & 65.4 \\ \hline \hline \end{tabular} \end{table} Table E.2: Effect of bootstrap method (bootstrap from concatenated segments vs. bootstrap from separate segments) on the statistics of the final wealth \(W(T)\) at \(T=10\) years. Constant weight, scenario in Table F.1. Equity weight: \(0.7\), rebalanced monthly. Bond index: \(30\)-day T-bill. Equity index: equal-weighted. Concatenated series: 1940:8-1951:7, 1968:9-1985:10 (high-inflation regimes). All quantities are real (inflation-adjusted). Initial wealth \(100\). Bootstrap resampling, \(10,000\) resamples.

## Appendix F Comparing passive strategies in high inflation regimes

In this section, we compare the performances of two fixed-mix strategies. The first strategy, the "EqWt" strategy, maintains a \(70\%\) allocation to the equal-weighted index and a \(30\%\) allocation to the \(30\)-day T-bill index. The second strategy, the "CapWt" strategy, maintains a \(70\%\) allocation to the cap-weighted index and a \(30\%\) allocation to the \(30\)-day T-bill index.

\begin{table} \begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 10 \\ Equity market indexes & CRSP cap-weighted/equal-weighted index (real) \\ Bond index & 30-day T-bill (U.S.) (real) \\ Index Samples & Concatenated 1940:8-1951:7, 1968:9-1985:10 \\ Initial portfolio wealth & 100 \\ Rebalancing frequency & Monthly \\ \hline \hline \end{tabular} \end{table} Table F.1: Investment scenario.

Figure F.1 compares the CDFs (cumulative distribution functions) of the terminal wealth of the EqWt strategy and the CapWt strategy based on \(10,000\) block bootstrap resampled data samples (Politis and Romano, 1994; Dichtl et al., 2016; Anarkulova et al., 2022) from the concatenated CRSP combined time series from 1940:8-1951:7 and 1968:9-1985:10, with an expected blocksize of six months. Both strategies assume an initial wealth of 100 with no further cash injections or withdrawals; the investment horizon is 10 years, with monthly rebalancing to maintain the constant weights in the portfolio (see also Table F.1).

We first recall the concept of _partial stochastic dominance_. Suppose two investment strategies \(A\) and \(B\) are evaluated on a set of data samples under the same investment scenario. We consider the CDFs of terminal wealth \(W\) associated with both strategies. Specifically, we denote the CDF of strategy A by CDF\({}_{A}(W)\) and that of strategy B by CDF\({}_{B}(W)\). Let \(W_{T}\) be the random wealth at time \(T\) and \(W\) be a possible wealth realization; then we can interpret CDF\({}_{A}(W)\) as

\[\text{CDF}_{A}(W) = Prob(W_{T}\leq W)\.\] (F.1)

Following Atkinson (1987); van Staden et al. (2021), we define partial first-order stochastic dominance.

**Definition F.1** (Partial first order stochastic dominance).: _Given an investment strategy A which generates a CDF of terminal wealth \(W\) given by CDF\({}_{A}(W)\), and a strategy B with CDF\({}_{B}(W)\), then strategy \(A\) partially stochastically dominates strategy B (to first order) in the interval \((W_{lo},W_{hi})\) if_

\[\text{CDF}_{A}(W) \leq \text{CDF}_{B}(W),\ \forall W\in(W_{lo},W_{hi})\] (F.2)

_with strict inequality for at least one point in \((W_{lo},W_{hi})\)._
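Definition F.1 is straightforward to check on simulated terminal-wealth samples via empirical CDFs; a sketch (names are ours):

```python
import numpy as np

def partially_dominates(w_a: np.ndarray, w_b: np.ndarray,
                        w_lo: float, w_hi: float, n_grid: int = 1000) -> bool:
    """First-order partial stochastic dominance of A over B on (w_lo, w_hi):
    CDF_A(W) <= CDF_B(W) everywhere on the interval, strictly somewhere."""
    grid = np.linspace(w_lo, w_hi, n_grid)[1:-1]  # interior of the open interval
    sa, sb = np.sort(w_a), np.sort(w_b)
    cdf_a = np.searchsorted(sa, grid, side="right") / len(sa)  # empirical CDFs
    cdf_b = np.searchsorted(sb, grid, side="right") / len(sb)
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))
```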
The arguments for relaxing the usual definition of stochastic dominance are given in Atkinson (1987); van Staden et al. (2021). Given some initial wealth \(W_{0}\), if \(W_{hi}\gg W_{0}\), then an investor may not be concerned that strategy \(A\) underperforms strategy \(B\) at these very high wealth values: in this case, the investor is fabulously wealthy. Now suppose that \(W_{lo}\ll W_{0}\), and assume CDF\({}_{A}(W_{lo})=\text{CDF}_{B}(W_{lo})\). As an extreme example, suppose \(W_{lo}=\)2 cents. The fact that strategy \(B\) has a higher probability of ending up with one cent, compared with strategy \(A\), is cold comfort, and not particularly interesting. On the other hand, suppose CDF\((W_{lo})\ll 1\). Again, an investor may not be interested in events with exceptionally low probabilities.

Remarkably, the EqWt strategy appears to partially stochastically dominate the CapWt strategy (in the sense of Definition F.1), since the CDF curve of the EqWt strategy appears to lie almost entirely to the right of the CDF curve of the CapWt strategy, except at very low probability values. Close examination shows that the curves cross at the point CDF\({}_{EqWt}(W_{lo})=\text{CDF}_{CapWt}(W_{lo})\simeq.02\), with a slight underperformance of the EqWt strategy compared to the CapWt strategy in this extreme left tail.

Figure F.1: Cumulative distribution function of final real wealth \(W\) at \(T=10\) years. Bootstrap resampling (Appendix E.1), expected blocksize six months, \(10,000\) resamples. Data: concatenated returns, 1940:8-1951:7, 1968:9-1985:10. Scenario described in Table F.1.

The fact that the EqWt strategy partially stochastically dominates the CapWt strategy seems to suggest that the equal-weighted stock index is the better choice of stock index than the cap-weighted stock index during high inflation times. We note that, using recent data,22 the situation is not as clear (Taljaard and Mare, 2021), since the equal-weighted index appears to underperform. However, Taljaard and Mare (2021) suggest that this is due to the recent market concentration in tech stocks.23 In fact, a plausible explanation for the (historical) outperformance of an equal-weighted index is that this is simply due to the small-cap effect, which was not widely known until about 1981 (Banz, 1981). Plyakha et al. (2021) acknowledge that the equal-weighted index has significant exposure to the size factor. However, Plyakha et al. (2021) argue that the equal-weighted index also has a larger exposure to the value factor. In addition, there is a significant _alpha_ effect due to the contrarian strategy of frequent rebalancing to equal weights. It would appear to be simplistic to dismiss an equal-weight strategy on the grounds that this is simply a small-cap effect that has become less effective.

Footnote 22: Since about 2010. Of course, this is outside a period of sustained high inflation.

Footnote 23: As of February 2023, Apple, Microsoft, Amazon and Alphabet (A and C) in total comprised 17% of the market capitalization of the S&P 500.

## Appendix G Calibrated synthetic model parameters

The parameters calibrated to the concatenated high-inflation regimes are reported in Table G.1.

## Appendix H Comparison of CD and CS objectives

In this section, we numerically compare the CS objective function (2.13) with the CD objective function (2.11). As we briefly discussed in Section 2.3, one caveat of the CD objective function is that it penalizes not only underperformance relative to the elevated target but also outperformance over the elevated target. In practice, outperformance of the elevated target is favorable, and managers may not want to penalize the strategy when it happens. Therefore, in such cases, the cumulative quadratic shortfall (CS) objective (2.13) and (2.14) may be more appropriate.
For the remainder of the paper, we focus on the discrete-time CS problem with the LFNN parameterization and the equally-spaced rebalancing schedule \(\mathcal{T}_{\Delta t}\) defined in Appendix C.1, i.e.,

\[(\text{Parameterized}\;CS(\beta)):\quad\inf_{\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}}\mathbb{E}_{f(\cdot;\mathbf{\theta})}^{(t_{0},w_{0})}\Bigg{[}\sum_{t\in\mathcal{T}_{\Delta t}}\Big{(}\min\big{(}W_{\mathbf{\theta}}(t)-e^{\beta t}\hat{W}(t),0\big{)}\Big{)}^{2}+\epsilon W_{\mathbf{\theta}}(T)\Bigg{]}.\] (H.1)

The CS objective function in (H.1) only penalizes underperformance against the elevated target. Here \(\epsilon W_{\mathbf{\theta}}(T)\) is a regularization term. We remark that problem (H.1) without the regularization term can be ill-posed. To see this, consider a case where \(W_{\mathbf{\theta}}(t)\gg e^{\beta t}\hat{W}(t)\) for some \(t\in[t_{0},T]\). In this case, the future cumulative quadratic shortfall (on \([t,T]\)) will almost surely be zero without the regularization term, so the control from thereon has no effect on the objective function under that scenario. We choose \(\epsilon\) to be a small positive scalar. As William Bernstein once said, "if you have won the game, stop playing." If one has accumulated as much wealth as Warren Buffett, then it does not matter what assets she invests in. The positive regularization factor \(\epsilon\) forces the strategy to put all wealth into less risky assets when the portfolio has already performed extremely well.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \(\mu_{1}\) & \(\sigma_{1}\) & \(\lambda_{1}\) & \(\nu_{1}\) & \(\iota_{1}\) & \(\varsigma_{1}\) & \(\mu_{2}\) & \(\sigma_{2}\) & \(\lambda_{2}\) & \(\nu_{2}\) & \(\iota_{2}\) & \(\varsigma_{2}\) & \(\rho\) \\ \hline 0.051 & 0.146 & 0.178 & 0.2 & 7.13 & 7.33 & -0.014 & 0.017 & 0.321 & 0 & N/A & 44.48 & 0.14 \\ \hline \hline \end{tabular} \end{table} Table G.1: Estimated annualized parameters for the double exponential jump-diffusion model (C.3) from the CRSP cap-weighted stock index and the 30-day U.S. T-bill index, deflated by the CPI. Sample period: concatenated 1940:8-1951:7 and 1968:9-1985:10.

We design a numerical experiment to compare the CS objective with the symmetric CD objective in the following problem (H.2), with the same LFNN parameterization and equally-spaced rebalancing schedule \(\mathcal{T}_{\Delta t}\):

\[(\text{Parameterized }CD(\beta)):\quad\inf_{\mathbf{\theta}\in\mathbb{R}^{N_{\mathbf{\theta}}}}\mathbb{E}_{f(\cdot;\mathbf{\theta})}^{(t_{0},w_{0})}\Bigg{[}\sum_{t\in\mathcal{T}_{\Delta t}}\big{(}W_{\mathbf{\theta}}(t)-e^{\beta t}\hat{W}(t)\big{)}^{2}\Bigg{]}.\] (H.2)

Specifically, we adopt the investment scenario in Table C.1 with \(\beta=0.02\) and reuse the training and testing data sets simulated from the calibrated double exponential jump-diffusion model. The neural network strategies follow the LFNN models trained on the training data set \(\mathbf{Y}\) for both the CS and CD objectives. We then evaluate both strategies on the same testing data set \(\mathbf{Y}^{test}\). We compare the _wealth ratio_, i.e., the wealth of the managed portfolio divided by the wealth of the benchmark portfolio, over time, for both strategies. The wealth ratio reflects how well the active strategy performs against the benchmark strategy along the investment horizon; a higher wealth ratio is better.
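Both objectives are easy to estimate by Monte Carlo over simulated wealth paths; a PyTorch-style sketch of the CS loss (H.1), with all names ours and assuming \(W\) and \(\hat{W}\) are simulated on the rebalancing schedule with differentiable operations so the LFNN parameters can be trained by stochastic gradient descent:

```python
import torch

def cs_objective(W: torch.Tensor, W_hat: torch.Tensor,
                 beta: float, dt: float, eps: float = 1e-4) -> torch.Tensor:
    """Monte Carlo estimate of the CS objective (H.1).

    W, W_hat: (n_samples, n_times) active and benchmark wealth paths
              on the schedule t = dt, 2*dt, ..., T.
    """
    t = dt * torch.arange(1, W.shape[1] + 1)
    target = torch.exp(beta * t) * W_hat          # elevated target e^{beta t} W_hat(t)
    shortfall = torch.clamp(W - target, max=0.0)  # min(W - target, 0)
    return (shortfall.pow(2).sum(dim=1) + eps * W[:, -1]).mean()
```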
Below we show the percentiles of the wealth ratio for both strategies evaluated on \(\mathbf{Y}^{test}\). We can see from Figure H.1 that the CS strategy (the neural network strategy trained under the CS objective) yields a more favorable wealth ratio than the CD strategy (the neural network strategy trained under the CD objective). On average, the CS strategy achieves a consistently higher terminal wealth ratio than the CD strategy. Even in the 20th percentile case, the CS strategy lags initially but recovers over time.24 These results indicate that the CS objective may be the wiser choice for managers in practice. In the following numerical experiments with bootstrap resampled data, we use the CS objective (H.1) instead of the CD objective (H.2).25

Footnote 24: The CS strategy starts with a higher allocation to the stock, and thus encounters more volatility early on.

Figure H.1: Percentiles of the wealth ratio of the neural network strategy (i.e., the neural network model) learned under the cumulative quadratic tracking difference (CD) objective and the neural network strategy learned under the cumulative quadratic shortfall (CS) objective. The results shown are based on evaluations on the testing data set \(\mathbf{Y}^{test}\).

## Appendix I Experiments with non-zero borrowing premium

In Section 3, we conducted the numerical experiments assuming the borrowing premium is zero. This assumption is based on the fact that large sovereign wealth funds are often considered to have almost risk-free credit ratings, due to their state-backed nature. In other words, we assume that sovereign wealth funds can borrow funding at the same rate as risk-free treasury bills. This assumption may be too benign for general public funds: in general, it is unlikely that a non-sovereign wealth fund can borrow at a risk-free rate. However, the actual borrowing cost within large public funds is often unavailable. For this reason, we use the yields of corporate bonds issued by corporations with credit ratings similar to those of these large public funds as an approximation of the borrowing cost. Currently, large public funds such as the Blackstone Group or Apollo Global Management are rated between Aaa and Baa by Moody's. We obtain the nominal corporate bond yields with Moody's Aaa (Moody's, 2023a) and Baa (Moody's, 2023b) ratings and adjust them with CPI returns. During the two high-inflation regimes we have identified, Aaa-rated corporate bonds have an average real yield of 0.7%, while Baa-rated bonds have 1.8%. Taking the average of the two, we use 1.25% as an estimate of the real yield of corporate bonds, and thus of the borrowing cost of large public funds.26 As discussed in Appendix D.3, the average real return of the 30-day T-bill index is -1.4%. This gives us an average borrowing premium rate of 2.65%. In this section, as a stress test, we conduct the same experiment as in Section 3.3, except that we use a fixed borrowing premium of 3% instead of 0. We note that the historical corporate bond yields are based on bonds with long maturities. Typically, long-term yields are higher than short-term yields, which accounts for the term risk. Therefore, the assumption of a 3% borrowing premium should be a fairly aggressive stress test for the use of leverage.

Footnote 26: Note that the corporate bonds from Moody's yield data have maturities of more than 20 years. Usually, long-term bonds have higher yields than short-term bonds. Thus, using corporate yields likely overestimates the borrowing cost, since we assume the manager is only borrowing short-term funding.
Figure I.1: Percentiles of wealth ratio over the investment horizon, and CDF of terminal wealth ratio. Annualized borrowing premium is 3%. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the testing data set (low-inflation data).

We plot the percentiles of the wealth ratio and the CDF of the terminal wealth ratio. As we can see from Figure I.1 and Table I.1, the neural network strategy is only marginally affected by the increased borrowing premium rate. Specifically, the terminal wealth statistics for the case with the borrowing premium are all slightly worse. However, the impact is so marginal that the median IRR does not change, and the neural network strategy still maintains more than a 200 bps advantage in terms of median IRR compared to the benchmark. The most noticeable difference is in the allocation fraction, as shown in Figure I.2. With a significantly higher borrowing cost, the neural network strategy does not leverage as much in the first two years, resulting in a less negative allocation to the T-bill and a lower allocation to the equal-weighted stock index. However, as we have seen in Figure I.1 and Table I.1, this has only a minimal impact on the performance of the strategy.

## Appendix J Performance in low-inflation regimes

We remove the two high-inflation regimes (1940:8-1951:7 and 1968:9-1985:10) from the full historical data of 1926:1-2022:1 and obtain several low-inflation data segments. We concatenate the low-inflation data segments and use the stationary bootstrap (Appendix E.1) to generate a testing data set. We adopt the investment scenario described in Table 3.2 and evaluate the performance of the neural network strategy obtained in Section 3.3 on this low-inflation data set. Note that we continue to use the equal-weighted stock index/30-day T-bill fixed-mix portfolio as the benchmark. This is validated by Figure J.1, which plots the CDFs of the terminal wealth of the fixed-mix portfolios using a 70% equal-weighted stock index vs. a 70% cap-weighted stock index (both with 30% in the 30-day U.S. T-bill as the bond component). As we can see from Figure J.1, the fixed-mix portfolio with the equal-weighted stock index clearly has a more right-skewed distribution than the portfolio with the cap-weighted stock index. This seems to suggest that the equal-weighted index is the superior choice for the benchmark portfolio, even in low-inflation regimes.

Figure J.1: Cumulative distribution functions (CDFs) for cap-weighted and equal-weighted indexes, as a function of final real wealth \(W\) at \(T=10\) years. Initial stake \(W_{0}=100\), no cash injections or withdrawals. Block bootstrap resampling, expected blocksize 6 months, \(10,000\) resamples. 70% stocks, 30% bonds, rebalanced monthly. Bond index: 30-day U.S. T-bills. Stock index: CRSP capitalization-weighted or CRSP equal-weighted index. All indexes are deflated by the CPI. Data set 1926:1-2022:1, excluding the high-inflation regimes (1940:8-1951:7 and 1968:9-1985:10).

We then present the performance of the neural network strategy learned on high-inflation data on the testing data set bootstrapped from low-inflation historical returns. Surprisingly, as we can see from Figure J.2a, the neural network strategy learned under high-inflation regimes performs quite well in low-inflation environments. Compared to the testing results on the high-inflation data set, there is a noticeable performance degradation; for example, the probability of outperforming the benchmark strategy in terminal wealth is now slightly less than 90%. However, the degradation is quite minimal. The neural network strategy still has more than an 85% chance of outperforming the benchmark strategy at the end of the investment horizon. As shown in Table J.1, the median IRR of the neural network strategy is still 2% higher than the median IRR of the benchmark strategy, meeting the investment target.

\begin{table} \begin{tabular}{l c c c c c} \hline Strategy & Median[\(W_{T}\)] & E[\(W_{T}\)] & std[\(W_{T}\)] & 5th Percentile & Median IRR (annual) \\ \hline Neural network & 429.7 & 489.6 & 301.9 & 151.6 & 0.100 \\ Benchmark & 368.3 & 420.8 & 238.2 & 175.7 & 0.079 \\ \hline \end{tabular} \end{table} Table J.1: Statistics of strategies. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the low-inflation testing data set.

The above results indicate that the neural network strategy is surprisingly robust. Despite being specifically trained under a high-inflation scenario, the strategy performs admirably well in a low-inflation environment.

Figure J.2: Percentiles of wealth ratio over the investment horizon, and CDF of terminal wealth ratio. Results are based on the evaluation of the learned neural network model (from high-inflation data) on the low-inflation testing data set.
2302.12465
PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction
Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations for the model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNN) have recently shown superior performance in many graph ML problems than traditional methods, and explaining them has attracted increased interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task and corresponds to web applications like recommendation and sponsored search on web. Given existing GNN explanation methods only address node/graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link) that generates explanations with connection interpretability, enjoys model scalability, and handles graph heterogeneity. Qualitatively, PaGE-Link can generate explanations as paths connecting a node pair, which naturally captures connections between the two nodes and easily transfer to human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by 9 - 35% and are chosen as better by 78.79% of responses in human evaluation.
Shichang Zhang, Jiani Zhang, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos, Yizhou Sun
2023-02-24T05:43:47Z
http://arxiv.org/abs/2302.12465v3
# PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction

###### Abstract.

Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations for the model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNN) have recently shown superior performance in many graph ML problems than traditional methods, and explaining them has attracted increased interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task and corresponds to web applications like recommendation and sponsored search on web. Given existing GNN explanation methods only address node/graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (_PaGE-Link_) that generates explanations with _connection interpretability_, enjoys model _scalability_, and handles graph _heterogeneity_. Qualitatively, PaGE-Link can generate explanations as paths connecting a node pair, which naturally captures connections between the two nodes and easily transfer to human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve AUC for recommendation on citation and user-item graphs by _9 - 35%_ and are chosen as better by _78.79%_ of responses in human evaluation.

Model Transparency, Model Explanation, Graph Neural Networks, Link Prediction

2) _Scalability:_ For LP, the number of involved edges grows from \(m\) to \(\sim\)\(2m\) compared to the node prediction task because neighbors of both the source and the target are involved. Since most existing methods consider all (edge-induced) subgraphs, the increased edges will scale the number of subgraph candidates by a factor of \(O(2^{m})\), which makes finding the optimal subgraph explanation much harder. 3) _Heterogeneity:_ Practical LP is often on heterogeneous graphs with rich node and edge types, e.g., a graph for recommendations can have user->buys->item edges and item->has->attribute edges, but existing methods only work for homogeneous graphs.
In light of the importance and challenges of GNN explanation for LP, we formulate it as a post hoc and instance-level explanation problem and generate explanations in the form of important paths connecting the source node and the target node. Paths have played substantial roles in graph ML and are the core of many non-GNN LP methods (Gan et al., 2015; Liu et al., 2016; Wang et al., 2017; Wang et al., 2018). Paths as explanations can solve the connection interpretability and scalability challenges. Firstly, paths connecting two nodes naturally explain connections between them. Figure 1 shows an example on a graph for recommendations. Given a GNN and a predicted link between user \(u_{1}\) and item \(i_{1}\), human-interpretable explanations may be based on the user's preference for attributes (e.g., user \(u_{1}\) bought item \(i_{2}\), which shares the same attribute \(a_{1}\) as item \(i_{1}\)) or collaborative filtering (e.g., user \(u_{1}\) has a similar preference to user \(u_{2}\) because they both bought item \(i_{3}\), and user \(u_{2}\) bought item \(i_{1}\), so user \(u_{1}\) would like item \(i_{1}\)). Both explanations boil down to paths. Secondly, paths have a considerably smaller search space than general subgraphs. As we will see in Proposition 4.1, compared to the expected number of edge-induced subgraphs, the expected number of paths grows strictly slower and becomes negligible. Therefore, path explanations exclude many less-meaningful subgraph candidates, making the explanation generation much more straightforward and accurate.

To this end, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link), which achieves a better explanation AUC and scales linearly in the number of edges (see Figure 2). We first perform _k-core pruning_ (Beng et al., 2015) to help find paths and improve scalability. Then we do _heterogeneous path-enforcing_ mask learning to determine important paths, which handles heterogeneity and enforces the explanation edges to form paths connecting the source to the target. In summary, the contributions of our method are:

* **Connection Interpretability:** PaGE-Link produces more interpretable explanations in path forms and quantitatively improves explanation AUC over baselines.
* **Scalability:** PaGE-Link reduces the explanation search space by magnitudes from subgraph finding to path finding and scales linearly in the number of graph edges.
* **Heterogeneity:** PaGE-Link works on heterogeneous graphs and leverages edge-type information to generate better explanations.

## 2. Related Work

We review relevant research on (a) GNNs, (b) GNN explanation, (c) recommendation explanation, and (d) paths for LP. We summarize the properties of PaGE-Link vs. representative methods in Table 1.

_GNNs._ GNNs are a family of ML models on graphs (Gan et al., 2015; Wang et al., 2017; Wang et al., 2018). They take graph structure and node/edge features as input and output node representations by transforming and aggregating the features of nodes' (multi-hop) neighbors. The node representations can be used for LP and have achieved great results on LP applications (Gan et al., 2015; Wang et al., 2017; Wang et al., 2018). We review GNN-based LP models in Section 3.

_GNN explanation._ GNN explanation has been studied for node and graph classification, where the explanation is defined as an important subgraph. Existing methods differ mainly in their definitions of importance and their subgraph selection methods.

Figure 1. Given a GNN model and a predicted link \((u_{1},i_{1})\) (dashed red) on a heterogeneous graph of user \(u\), item \(i\), and attribute \(a\) (left), PaGE-Link generates two path explanations (green arrows). Interpretations illustrated on the right.

Figure 2. (a) PaGE-Link outperforms GNNExplainer and PGExplainer in terms of explanation AUC on the citation graph and the user-item graph. (b) The running time of PaGE-Link scales linearly in the number of graph edges.
GNNExplainer (GNNExplainer, 2018) selects edge-induced subgraphs by learning fully parameterized masks on graph edges and node features, where the mutual information (MI) between the masked graph and the prediction made with the original graph is maximized. PGExplainer (GNNExplainer, 2018) adopts the same MI importance but instead trains a mask predictor to generate a discrete mask. Other popular importance measures are game-theoretic values. SubgraphX (Wang et al., 2018) uses the Shapley value (Shapiro et al., 2018) and performs Monte Carlo Tree Search (MCTS) on subgraphs. GStarX (GStarX, 2018) uses a structure-aware HN value (Gan et al., 2015) to measure the importance of nodes and generates the important-node-induced subgraph. There are more studies from other perspectives that are less related to this work, e.g., surrogate models (Gan et al., 2015; Wang et al., 2018), counterfactual explanations (Wang et al., 2018), and causality (Wang et al., 2018), for which (Wang et al., 2018) provides a good review. While these methods produce subgraphs as explanations, what makes a good explanation is a complex topic, especially how to meet "stakeholders' desiderata" (Gan et al., 2015). Our work differs from all of the above since we focus on the new task of explaining heterogeneous LP, and we generate paths instead of unrestricted subgraphs as explanations. The interpretability of paths makes our method especially advantageous when stakeholders have less ML background.

_Recommendation explanation._ This line of work explains why a recommendation is made (Sohn et al., 2017). J-RECS (Kang et al., 2018) generates recommendation explanations on product graphs using a justification score that balances item relevance and diversity. PRINCE (Bordes and McAllester, 2016) produces end-user explanations as a set of minimal actions performed by the user on graphs with users, items, reviews, and categories. The set of actions is selected using counterfactual evidence. Typically, recommendation on graphs can be formalized as an LP task. However, the recommendation explanation problem differs from explaining GNNs for LP because the recommendation data may not be graphs, and the models to be explained are primarily not GNN-based (Sohn et al., 2017). GNNs have their unique message passing procedure, and GNN-based LP corresponds to more general applications beyond recommendation, e.g., drug repurposing (Kipf and Welling, 2017) and knowledge graph completion (Bordes and McAllester, 2016; Kang et al., 2018). Thus, recommendation explanation is related to but not directly comparable to GNN explanation.

_Paths._ Paths are important in graph ML, and many LP methods are path-based, such as graph distance (Kang et al., 2018), Katz index (Katz, 1976), SimRank (Katz, 1976), and PathSim (Sohn et al., 2017). Paths have also been used to capture the relationship between a pair of nodes. For example, the "connection subgraphs" (Bordes and McAllester, 2016) find paths between the source and the target based on electricity analogs.
In general, although black-box GNNs have recently outperformed path-based methods in LP accuracy, we embrace paths for their interpretability for LP explanation.

## 3. Notations and preliminary

In this section, we define the necessary notations, summarize them in Table 2, and review GNN-based LP models.

**Definition 3.1**.: A heterogeneous graph is defined as a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) associated with a node type mapping function \(\phi:\mathcal{V}\rightarrow\mathcal{A}\) and an edge type mapping function \(\tau:\mathcal{E}\rightarrow\mathcal{R}\). Each node \(v\in\mathcal{V}\) belongs to one node type \(\phi(v)\in\mathcal{A}\), and each edge \(e\in\mathcal{E}\) belongs to one edge type \(\tau(e)\in\mathcal{R}\).

Let \(\Phi(\cdot,\cdot)\) denote a trained GNN-based model for predicting the missing links in \(\mathcal{G}\), where a prediction \(Y=\Phi(\mathcal{G},(s,t))\) denotes the predicted link between a source node \(s\) and a target node \(t\). The model \(\Phi\) learns a conditional distribution \(P_{\Phi}(Y|\mathcal{G},(s,t))\) of the binary random variable \(Y\). Commonly used GNN-based LP models (Sohn et al., 2017) involve two steps. The first step is to generate node representations \((\mathbf{h}_{s},\mathbf{h}_{t})\) of \((s,t)\) with an \(L\)-hop GNN encoder. The second step is to apply a prediction head on \((\mathbf{h}_{s},\mathbf{h}_{t})\) to get the prediction of \(Y\). An example prediction head is an inner product.

To explain \(\Phi(\mathcal{G},(s,t))\) with an \(L\)-layer GNN encoder, we restrict to the _computation graph_ \(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\). \(\mathcal{G}_{c}\) is the \(L\)-hop ego-graph of the predicted pair \((s,t)\), i.e., the subgraph with node set \(\mathcal{V}_{c}=\{v\in\mathcal{V}\,|\,dist(v,s)\leq L\text{ or }dist(v,t)\leq L\}\). It is called a computation graph because the \(L\)-layer GNN only collects messages from the \(L\)-hop neighbors of \(s\) and \(t\) to compute \(\mathbf{h}_{s}\) and \(\mathbf{h}_{t}\). The LP result is thus fully determined by \(\mathcal{G}_{c}\), i.e., \(\Phi(\mathcal{G},(s,t))\equiv\Phi(\mathcal{G}_{c},(s,t))\). Figure 3(b) shows a 2-hop ego-graph of \(u_{1}\) and \(i_{1}\), where \(u_{3}\) and \(a_{3}^{3}\) are excluded since they are more than 2 hops away from both \(u_{1}\) and \(i_{1}\).
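To make the two-step LP model concrete, here is a minimal PyTorch-style sketch (the class and argument names are ours; any \(L\)-layer, possibly heterogeneous, GNN encoder can be plugged in):

```python
import torch
import torch.nn as nn

class GNNLinkPredictor(nn.Module):
    """Two-step GNN link prediction: an L-layer GNN encoder produces node
    representations; an inner-product head scores the (s, t) pair."""
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder              # L-hop GNN encoder

    def forward(self, graph, feats, s: int, t: int) -> torch.Tensor:
        h = self.encoder(graph, feats)      # node representations h_v
        score = (h[s] * h[t]).sum(-1)       # inner-product prediction head
        return torch.sigmoid(score)         # P_Phi(Y = 1 | G, (s, t))
```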
## 4. Proposed problem formulation: link-prediction explanation

In this work, we address a _post hoc_ and _instance-level_ GNN explanation problem. Post hoc means that the model \(\Phi(\cdot,\cdot)\) has been trained; to generate explanations, we do not change its architecture or parameters. Instance-level means that we generate an explanation for the prediction of each instance \((s,t)\). Specifically, the explanation method answers the question of why a missing link is predicted by \(\Phi(\cdot,\cdot)\). In a practical web recommendation system, this question can be "_why is an item recommended to a user by the model_".

An explanation for a GNN prediction should be some substructure in \(\mathcal{G}_{c}\), and it should also be concise, i.e., limited by a size budget \(B\). This is because a large explanation is often neither informative nor interpretable; an extreme case is that \(\mathcal{G}_{c}\) could be a non-informative explanation for itself. Also, a fair comparison between different explanations should consume the same budget. In the following, we define the budget \(B\) as the maximum number of edges included in the explanation.

We list three desirable properties for a GNN explanation method on heterogeneous LP: capturing the connection between the source node and the target node, scaling to large graphs, and handling graph heterogeneity. A path-based method inherently possesses all three properties. Paths capture the connection between a pair of nodes and can be transferred to human-interpretable explanations. Besides, the search space of paths with a fixed source node and target node is greatly reduced compared to that of edge-induced subgraphs. Given the ego-graph \(\mathcal{G}_{c}\) of \(s\) and \(t\), the number of paths between \(s\) and \(t\) and the number of edge-induced subgraphs in \(\mathcal{G}_{c}\) both depend on the structure of \(\mathcal{G}_{c}\). However, they can be estimated using random graph approximations.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Methods & GNNExp. & PGExp. & SubgraphX & GStarX & J-RECS & PRINCE & Rec. Exp. & PaGE-Link \\ \hline On Graphs & ✓ & ✓ & ✓ & ✓ & ✓ &? & ✓ \\ Explains GNN & ✓ & ✓ & ✓ & ✓ & & & \\ Explains LP &? &? &? &? & ✓ & ✓ & ✓ \\ Connection & & & &? &? &? &? & ✓ \\ Scalability & ✓ & ✓ & & &? &? &? & ✓ \\ Heterogeneity & & & & ✓ & ✓ & ✓ &? & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1. Methods and desired explanation properties. A question mark (?) means “unclear”, or “maybe, after non-trivial extensions”. “Rec. Exp.” stands for the general recommendation explanation methods.

\begin{table} \begin{tabular}{l|l} \hline \hline Notation & Definition and description \\ \hline \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) & a heterogeneous graph \(\mathcal{G}\), node set \(\mathcal{V}\), and edge set \(\mathcal{E}\) \\ \(\phi:\mathcal{V}\rightarrow\mathcal{A}\) & a node type mapping function \\ \(\tau:\mathcal{E}\rightarrow\mathcal{R}\) & an edge type mapping function \\ \(D_{v}\) & the degree of node \(v\in\mathcal{V}\) \\ \(\mathcal{E}^{r}\) & edges with type \(r\in\mathcal{R}\), i.e., \(\mathcal{E}^{r}=\{e\in\mathcal{E}|\tau(e)=r\}\) \\ \(\Phi(\cdot,\cdot)\) & the GNN-based LP model to explain \\ \((s,t)\) & the source and target node for the predicted link \\ \(\mathbf{h}_{s}\), \(\mathbf{h}_{t}\) & the node representations for \(s\) and \(t\) \\ \(Y=\Phi(\mathcal{G},(s,t))\) & the link prediction of the node pair \((s,t)\) \\ \(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\) & the computation graph, i.e., the \(L\)-hop ego-graph of \((s,t)\) \\ \hline \hline \end{tabular} \end{table} Table 2. Notation table

The next proposition on random graphs shows that the expected number of paths grows strictly slower than the expected number of edge-induced subgraphs as the random graph grows. In particular, the expected number of paths becomes insignificant for large graphs.

**Proposition 4.1**.: _Let \(\mathcal{G}(n,d)\) be a random graph with \(n\) nodes and density \(d\), i.e., there are \(m=d\binom{n}{2}\) edges chosen uniformly randomly from all node pairs. Let \(Z_{n,d}\) be the expected number of paths between any pair of nodes. Let \(S_{n,d}\) be the expected number of edge-induced subgraphs. Then \(Z_{n,d}=o(S_{n,d})\), i.e., \(\lim_{n\to\infty}\frac{Z_{n,d}}{S_{n,d}}=0\)._

Proof.: In Appendix A.
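As a toy illustration of Proposition 4.1 (a sketch using NetworkX; the exact counts depend on the random seed): even in a small random graph, the number of \(s\)-\(t\) paths is dwarfed by the \(2^{m}\) edge-induced subgraphs:

```python
import networkx as nx

G = nx.gnm_random_graph(10, 20, seed=0)            # n = 10 nodes, m = 20 edges
n_paths = sum(1 for _ in nx.all_simple_paths(G, 0, 9))
n_subgraphs = 2 ** G.number_of_edges()             # all edge-induced subgraphs
print(n_paths, n_subgraphs)                        # e.g., hundreds vs. 2^20
```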
Paths are also a natural choice for LP explanations on heterogeneous graphs. On homogeneous graphs, features are important for prediction and explanation. An \(s\)-\(t\) link may be predicted because of the feature similarity of node \(s\) and node \(t\). However, the heterogeneous graphs we focus on, as defined in Definition 3.1, often do not store feature information but explicitly model it using new node and edge types. For example, for the heterogeneous graph in Figure 2(a), instead of making it a user-item graph and assigning each item node a two-dimensional feature with attributes \(a^{1}\) and \(a^{2}\), the attribute nodes are explicitly created and connected to the item nodes. Then an explanation like "\(i_{1}\) and \(i_{2}\) share node feature \(a_{1}^{1}\)" on a homogeneous graph is transferred to "\(i_{1}\) and \(i_{2}\) are connected through the attribute node \(a_{1}^{1}\)" on a heterogeneous graph.

Given the advantages of paths over general subgraphs on connection interpretability, scalability, and their capability to capture feature similarity on heterogeneous graphs, we use paths to explain GNNs for heterogeneous LP. Our design principle is that a good explanation should be concise and informative, so we define the explanation to contain only _short_ paths _without high-degree_ nodes. Long paths are less desirable since they could correspond to unnecessarily complicated connections, making the explanation neither concise nor convincing. For example, in Figure 2(c), the long path \((u_{1},i_{3},a_{2}^{1},i_{2},a_{1}^{1},i_{1})\) is not ideal since it takes four hops to go from item \(i_{3}\) to item \(i_{1}\), making it less persuasive to interpret as "item1 and item3 are similar so item1 should be recommended". Paths containing high-degree nodes are also less desirable because high-degree nodes are often generic, and a path going through them is not as informative. In the same figure, all paths containing node \(a_{2}^{1}\) are less desirable because \(a_{2}^{1}\) has a high degree and connects to all the items in the graph. A real example of a generic attribute is the attribute "grocery" connecting to both "vanilla ice cream" and "vanilla cookie". When "vanilla ice cream" is recommended to a person who bought "vanilla cookie", explaining this recommendation with a path going through "grocery" is not very informative since "grocery" connects many items. In contrast, a good informative path explanation should go through the attribute "vanilla", which only connects to vanilla-flavored items and has a much lower degree. We formalize the GNN explanation for heterogeneous LP as:

**Problem 4.2**.: Generating path-based explanations for a predicted link between nodes \(s\) and \(t\):
* **Given** a trained GNN-based LP model \(\Phi(\cdot,\cdot)\),
* a heterogeneous computation graph \(\mathcal{G}_{c}\) of \(s\) and \(t\),
* a budget \(B\) of the maximum number of edges in the explanation,
* **Find** an explanation \(\mathcal{P}=\{p\,|\,p\text{ is an }s\text{-}t\text{ path with maximum length }l_{max}\text{ and degree of each node less than }D_{max}\}\), \(|\mathcal{P}|\,l_{max}\leq B\),
* **By optimizing** each \(p\in\mathcal{P}\) to be influential to the prediction, concise, and informative.
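The structural constraints of Problem 4.2 can be stated as a brute-force filter, shown below purely for illustration: enumerate \(s\)-\(t\) paths up to length \(l_{max}\) and drop those passing through high-degree intermediate nodes. PaGE-Link itself avoids this exhaustive enumeration; see Section 5.

```python
import networkx as nx

def candidate_paths(Gc: nx.Graph, s, t, l_max: int = 3, D_max: int = 20):
    """Paths satisfying Problem 4.2's constraints: at most l_max edges,
    and no intermediate node with degree >= D_max."""
    out = []
    for p in nx.all_simple_paths(Gc, s, t, cutoff=l_max):
        if all(Gc.degree(v) < D_max for v in p[1:-1]):
            out.append(p)
    return out
```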
## 5. Proposed Method: PaGE-Link

This section details PaGE-Link. PaGE-Link has two modules: (i) a \(k\)-core pruning module to eliminate spurious neighbors and improve speed, and (ii) a heterogeneous path-enforcing mask learning module to identify important paths. An illustration is in Figure 3.

Figure 3. PaGE-Link on a graph with user nodes \(u\), item nodes \(i\), and two attribute types \(a^{1}\) and \(a^{2}\). (Best viewed in color.)

### The k-core Pruning

The _\(k\)-core pruning_ module of PaGE-Link reduces the complexity of \(\mathcal{G}_{c}\). The \(k\)-core of a graph is defined as the unique maximal subgraph with minimum node degree \(k\) (Barb et al., 2016). We use the superscript \(k\) to denote the \(k\)-core, i.e., \(\mathcal{G}_{c}^{k}=(\mathcal{V}_{c}^{k},\mathcal{E}_{c}^{k})\) for the \(k\)-core of \(\mathcal{G}_{c}\). The \(k\)-core pruning is a recursive algorithm that removes nodes \(v\in\mathcal{V}\) whose degrees satisfy \(D_{v}<k\), until the remaining subgraph only has nodes with \(D_{v}\geq k\), which gives the \(k\)-core. The difference in nodes between a (\(k+1\))-core and a \(k\)-core is called the \(k\)-shell. The nodes in the orange box of Figure 2(b) are an example of a \(2\)-core pruned from the \(2\)-hop ego-graph, where nodes \(a_{1}^{2}\) and \(a_{2}^{2}\) are pruned in the first iteration because they have degree one. Node \(i_{5}\) is recursively pruned because it becomes degree one after these nodes are pruned. All three nodes belong to the 1-shell. We perform \(k\)-core pruning to help path finding because the pruned \(k\)-shell nodes are unlikely to be part of meaningful paths when \(k\) is small. For example, the 1-shell nodes are either leaf nodes or will become leaf nodes during the recursive pruning, which will never be part of a path unless \(s\) or \(t\) are one of these 1-shell nodes. The \(k\)-core pruning module in PaGE-Link is modified from the standard \(k\)-core pruning by adding the condition of never pruning \(s\) and \(t\); a sketch is given after Theorem 5.1.

The following theorem shows that for a random graph \(\mathcal{G}(n,d)\), the \(k\)-core will reduce the expected number of nodes by a factor of \(\delta_{\mathcal{V}}(n,d,k)\) and reduce the expected number of edges by a factor of \(\delta_{\mathcal{E}}(n,d,k)\). Both factors are functions of \(n\), \(d\), and \(k\). We defer the exact expressions of these two factors to Appendix B, since they are only implicitly defined based on the Poisson distribution. Numerically, for a random \(\mathcal{G}(n,d)\) with average node degree \(d(n-1)=7\), its 5-core has \(\delta_{\mathcal{V}}(n,d,5)\) and \(\delta_{\mathcal{E}}(n,d,5)\) both \(\approx 0.69\).

**Theorem 5.1** (Pittel, Spencer, and Wormald (Pittel et al., 2015)).: _Let \(\mathcal{G}(n,d)\) be a random graph with \(m\) edges as in Proposition 4.1. Let \(\mathcal{G}^{k}(n,d)=(\mathcal{V}^{k}(n,d),\mathcal{E}^{k}(n,d))\) be the nonempty \(k\)-core of \(\mathcal{G}(n,d)\). Then \(\mathcal{G}^{k}(n,d)\) contains \(\delta_{\mathcal{V}}(n,d,k)n\) nodes and \(\delta_{\mathcal{E}}(n,d,k)m\) edges with high probability for large \(n\), i.e., \(|\mathcal{V}^{k}(n,d)|/n\stackrel{{p}}{{\rightarrow}}\delta_{\mathcal{V}}(n,d,k)\) and \(|\mathcal{E}^{k}(n,d)|/m\stackrel{{p}}{{\rightarrow}}\delta_{\mathcal{E}}(n,d,k)\) (\(\stackrel{{p}}{{\rightarrow}}\) stands for convergence in probability)._

Proof.: Please refer to Appendix B and (Pittel et al., 2015).
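A minimal sketch of the modified pruning, assuming an undirected `networkx` graph; it follows the recursive degree-based removal described above while protecting \(s\) and \(t\):

```python
import networkx as nx

def k_core_keep_st(Gc: nx.Graph, s, t, k: int = 2) -> nx.Graph:
    """k-core pruning modified to never remove s or t (Section 5.1):
    repeatedly drop nodes of degree < k, skipping s and t."""
    H = Gc.copy()
    while True:
        drop = [v for v in H if v not in (s, t) and H.degree(v) < k]
        if not drop:
            return H
        H.remove_nodes_from(drop)
```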
The \(k\)-core pruning helps reduce the graph complexity and accelerates path finding. One concern is whether it prunes too much and disconnects \(s\) and \(t\). We found that such a situation is very unlikely to happen in practice. To be specific, we focus on explaining positively predicted links, e.g., why an item is recommended to a user by the model. Negative predictions, e.g., why an arbitrary item is not recommended to a user by the model, are less useful in practice and thus not in the scope of our explanation. \((s,t)\) node pairs are usually connected by many paths in a practical \(\mathcal{G}\) (Kolmogorov, 2008), and positive link predictions are rarely made between disconnected or weakly-connected \((s,t)\). Empirically, we observe that there are usually too many paths connecting a positively predicted \((s,t)\) rather than none, even in the \(k\)-core. Therefore, an optional step to enhance pruning is to remove nodes with super-high degrees. As we discussed in Section 4, high-degree nodes are often generic and less informative. Removing them complements the \(k\)-core pruning to further reduce complexity and improve path quality.

### Heterogeneous Path-Enforcing Mask Learning

The second module of PaGE-Link learns heterogeneous masks to find important path-forming edges. We perform mask learning to select edges from the \(k\)-core-pruned computation graph. For notation simplicity in this section, we use \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to denote the graph for mask learning to save superscripts and subscripts; \(\mathcal{G}_{c}^{k}\) is the actual graph in the complete version of our algorithm. The idea is to learn a mask over all edges of all edge types to select the important edges. Let \(\mathcal{E}^{r}=\{e\in\mathcal{E}|\tau(e)=r\}\) be the edges with type \(r\in\mathcal{R}\). Let \(\mathcal{M}=\{\mathcal{M}^{r}\}_{r=1}^{|\mathcal{R}|}\) be learnable masks of all edge types, with \(\mathcal{M}^{r}\in\mathbb{R}^{|\mathcal{E}^{r}|}\) corresponding to type \(r\). We denote applying \(\mathcal{M}^{r}\) to its corresponding edge type by \(\mathcal{E}^{r}\odot\sigma(\mathcal{M}^{r})\), where \(\sigma\) is the sigmoid function and \(\odot\) is the element-wise product. Similarly, we overload the notation \(\odot\) to indicate applying the set of masks to all types of edges, i.e., \(\mathcal{E}\odot\sigma(\mathcal{M})=\cup_{r\in\mathcal{R}}\{\mathcal{E}^{r}\odot\sigma(\mathcal{M}^{r})\}\). We call the graph with the edge set \(\mathcal{E}\odot\sigma(\mathcal{M})\) a _masked graph_. Applying a mask to graph edges changes the edge weights, which makes GNNs pass more information between nodes connected by highly-weighted edges and less on others. The general idea of mask learning is to learn an \(\mathcal{M}\) that produces high weights for important edges and low weights for others.
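A sketch of such per-edge-type masks in PyTorch; the class name and the dictionary-based bookkeeping are our own illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class HeteroEdgeMask(nn.Module):
    """One learnable mask vector per edge type; sigmoid maps raw logits to
    edge weights in (0, 1) that reweight messages on the masked graph."""
    def __init__(self, num_edges_per_type: dict):
        super().__init__()
        self.masks = nn.ParameterDict({
            r: nn.Parameter(torch.zeros(m))  # sigmoid(0) = 0.5 at init
            for r, m in num_edges_per_type.items()
        })

    def edge_weights(self, etype: str) -> torch.Tensor:
        return torch.sigmoid(self.masks[etype])
```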
To learn an \(\mathcal{M}\) that better fits the LP explanation, we measure edge importance from two perspectives: important edges should be both influential for the model prediction and form meaningful paths. Below, we introduce two loss terms \(\mathcal{L}_{pred}\) and \(\mathcal{L}_{path}\) for achieving these two measurements.

\(\mathcal{L}_{pred}\) is to learn to select influential edges for model prediction. The idea is to perform a perturbation-based explanation, where parts of the input are considered important if perturbing them changes the model prediction significantly. In the graph sense, if removing an edge \(e\) significantly influences the prediction, then \(e\) is a critical counterfactual edge that should be part of the explanation. This idea can be formalized as maximizing the mutual information between the masked graph and the original graph prediction \(Y\), which is equivalent to minimizing the prediction loss
\[\mathcal{L}_{pred}(\mathcal{M})=-\log P_{\Phi}(Y=1|\mathcal{G}=(\mathcal{V},\mathcal{E}\odot\sigma(\mathcal{M})),(s,t)). \tag{1}\]
\(\mathcal{L}_{pred}(\mathcal{M})\) has a straightforward meaning: the masked subgraph should provide as much information for predicting the missing link \((s,t)\) as the whole graph does. Since the original prediction is a constant, \(\mathcal{L}_{pred}(\mathcal{M})\) can also be interpreted as the performance drop after the mask is applied to the graph. A well-masked graph should give a minimum performance drop. Regularizations of the mask entropy and mask norm are often included in \(\mathcal{L}_{pred}(\mathcal{M})\) to encourage the mask to be discrete and sparse.

\(\mathcal{L}_{path}\) is the loss term for \(\mathcal{M}\) to learn to select path-forming edges. The idea is to first identify a set of candidate edges denoted by \(\mathcal{E}_{path}\) (specified below), where these edges can form concise and informative paths, and then optimize \(\mathcal{L}_{path}(\mathcal{M})\) to enforce the mask weights for \(e\in\mathcal{E}_{path}\) to increase and the mask weights for \(e\notin\mathcal{E}_{path}\) to decrease. We consider a weighted combination of these two forces balanced by hyperparameters \(\alpha\) and \(\beta\),
\[\mathcal{L}_{path}(\mathcal{M})=-\sum_{r\in\mathcal{R}}\Big(\alpha\sum_{\begin{subarray}{c}e\in\mathcal{E}_{path}\\ \tau(e)=r\end{subarray}}\mathcal{M}^{r}_{e}-\beta\sum_{\begin{subarray}{c}e\in\mathcal{E}\setminus\mathcal{E}_{path}\\ \tau(e)=r\end{subarray}}\mathcal{M}^{r}_{e}\Big). \tag{2}\]

The key question for computing \(\mathcal{L}_{path}(\mathcal{M})\) is to find a good \(\mathcal{E}_{path}\) containing edges of concise and informative paths. As in Section 4, paths with these two desired properties should be short and without high-degree generic nodes. We thus define a score function of a path \(p\) reflecting these two properties as below:
\[Score(p)=\log\prod_{\begin{subarray}{c}e\in p\\ e=(u,v)\end{subarray}}\frac{P(e)}{D_{v}}=\sum_{\begin{subarray}{c}e\in p\\ e=(u,v)\end{subarray}}Score(e), \tag{3}\]
\[Score(e)=\log\sigma(\mathcal{M}^{\tau(e)}_{e})-\log(D_{v}). \tag{4}\]
In this score function, \(\mathcal{M}\) gives the probability of \(e\) being included in the explanation, i.e., \(P(e)=\sigma(\mathcal{M}^{\tau(e)}_{e})\). To get the importance of a path, we first use a mean-field approximation for the joint probability by multiplying the \(P(e)\) together, and we normalize each \(P(e)\) for edge \(e=(u,v)\) by its target node degree \(D_{v}\). Then, we perform a log transformation, which improves numerical stability when multiplying many edges with small \(P(e)\) or large \(D_{v}\), and breaks a path score down into a summation of edge scores \(Score(e)\) that are easier to work with. This path score function captures both desired properties mentioned above. A path score will be high if the edges on it have high probabilities and these edges are linked to nodes with low degrees. Finding paths with the highest \(Score(p)\) can be implemented using Dijkstra's shortest path algorithm (Dijkstra, 2007), where the distance represented by each edge is set to the negative score of the edge, i.e., \(-Score(e)\); this distance is nonnegative since \(P(e)\leq 1\) and \(D_{v}\geq 1\). We let \(\mathcal{E}_{path}\) be the set of edges in the top five shortest paths found by Dijkstra's algorithm.
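The scoring and the path search can be sketched as follows. Since \(-Score(e)\geq 0\), Dijkstra-based routines apply; here `networkx.shortest_simple_paths` serves as a stand-in for the top-\(k\) shortest-path search, and the `mask_weights` dictionary of raw logits is an assumption of this sketch. For an undirected graph, the second endpoint of each iterated edge is treated as the target node.

```python
import math
from itertools import islice
import networkx as nx

def set_edge_distances(H: nx.Graph, mask_weights: dict) -> None:
    """dist(e) = -Score(e) = -log sigmoid(M_e) + log(degree of target node)."""
    for u, v in H.edges():
        p_e = 1.0 / (1.0 + math.exp(-mask_weights[(u, v)]))
        H[u][v]["dist"] = -math.log(p_e) + math.log(H.degree(v))

def top_k_paths(H: nx.Graph, s, t, k: int = 5):
    """Top-k highest-scoring s-t paths = k shortest paths under 'dist'."""
    return list(islice(nx.shortest_simple_paths(H, s, t, weight="dist"), k))
```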
### Mask Optimization and Path Generation

We optimize \(\mathcal{M}\) with both \(\mathcal{L}_{pred}\) and \(\mathcal{L}_{path}\). \(\mathcal{L}_{pred}\) will increase the weights of the prediction-influential edges. \(\mathcal{L}_{path}\) will further increase the weights of the path-forming edges that are also highly weighted by the current \(\mathcal{M}\) and decrease other weights. Finally, after the mask learning converges, we run one more shortest-path search to generate paths from the final \(\mathcal{M}\) and select the top paths according to the budget \(B\) to get the explanation \(\mathcal{P}\) defined in Section 4. A pseudo-code of PaGE-Link is shown in Algorithm 1.

```
Input: heterogeneous graph G, trained GNN-based LP model Phi(.,.), predicted link (s,t),
       size budget B, k for k-core, hyperparameters alpha and beta, learning rate eta,
       maximum iterations T.
Output: Explanation as a set of paths P.
Extract the computation graph G_c;
Prune G_c for the k-core G_c^k;
Initialize M^(0); t = 0;
while M^(t) not converged and t < T do
    Compute L_pred(M^(t));                                      > Eq. (1)
    Compute Score(e) for each edge e;                           > Eq. (4)
    Construct E_path by finding shortest paths on G_c^k with edge distance -Score(e);
    Compute L_path(M^(t)) according to E_path;                  > Eq. (2)
    M^(t+1) = M^(t) - eta * grad(L_pred(M^(t)) + L_path(M^(t)));
    t <- t + 1;
end while
P = the top shortest paths on G_c^k with edge distance -Score(e), under budget B;
Return: P.
```
**Algorithm 1** PaGE-Link

### Complexity Analysis

In Table 3, we summarize the time complexity of PaGE-Link and representative existing methods for explaining a prediction with computation graph \(\mathcal{G}_{c}=(\mathcal{V}_{c},\mathcal{E}_{c})\) on a full graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Let \(T\) be the number of mask learning epochs. GNNExplainer has complexity \(O(|\mathcal{E}_{c}|T)\) as it learns a mask on \(\mathcal{E}_{c}\). PGExplainer has a training stage and an inference stage (separated by \(/\) in the table). The inference stage is linear in \(|\mathcal{E}_{c}|\), but the training stage covers edges in the entire graph and thus scales in \(O(|\mathcal{E}|T)\). SubgraphX has a much higher time complexity, exponential in \(|\mathcal{V}_{c}|\), so a size budget of \(B_{node}\) nodes is forced to replace \(|\mathcal{V}_{c}|\), and \(\hat{D}=\max_{v\in\mathcal{V}}D_{v}\) denotes the maximum degree (derivation in Appendix).

\begin{table}
\begin{tabular}{c c c|c}
\hline \hline
GNNExp (Zhu et al., 2017) & PGExp (Zhu et al., 2017) & SubgraphX (Wang et al., 2018) & PaGE-Link (ours) \\
\hline
\(O(|\mathcal{E}_{c}|T)\) & \(O(|\mathcal{E}|T)\) / \(O(|\mathcal{E}_{c}|)\) & \(\Theta(|\mathcal{V}_{c}|\hat{D}^{B_{node}-e})\) & \(O(|\mathcal{E}_{c}|+|\mathcal{E}_{c}^{k}|T)\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3. Time complexity of PaGE-Link and other methods.

For PaGE-Link, the \(k\)-core pruning step is linear in \(|\mathcal{E}_{c}|\). The mask learning with Dijkstra's algorithm has complexity \(O(|\mathcal{E}_{c}^{k}|T)\). PaGE-Link has a better complexity than existing methods since \(|\mathcal{E}_{c}^{k}|\) is usually smaller than \(|\mathcal{E}_{c}|\) (see Theorem 5.1), and PaGE-Link often converges faster, i.e., has a smaller \(T\), as the space of candidate explanations is smaller (see Proposition 4.1) and noisy nodes are pruned.
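The loop below sketches Algorithm 1's mask update in PyTorch. `model_phi`, `apply_mask`, and `find_path_edges` are placeholders for a concrete GNN stack and for the graph-masking and shortest-path helpers; the sketch only illustrates how the two losses combine, reusing a mask object like the `HeteroEdgeMask` sketched above.

```python
import torch

def optimize_mask(model_phi, graph, s, t, mask, alpha=1.0, beta=1.0,
                  lr=0.01, T=100):
    """Sketch of Algorithm 1's inner loop: jointly minimize L_pred (Eq. 1)
    and L_path (Eq. 2) over the per-type edge mask logits."""
    opt = torch.optim.Adam(mask.parameters(), lr=lr)
    for _ in range(T):
        masked = apply_mask(graph, mask)              # E (.) sigma(M)
        p = model_phi(masked, (s, t))                 # P(Y=1 | masked graph)
        l_pred = -torch.log(p + 1e-12)                # Eq. (1)
        e_path = find_path_edges(masked, mask, s, t)  # {etype: bool tensor}
        l_path = 0.0
        for r, m in mask.masks.items():
            in_path = e_path.get(r, torch.zeros_like(m, dtype=torch.bool))
            l_path = l_path - alpha * m[in_path].sum() + beta * m[~in_path].sum()
        (l_pred + l_path).backward()
        opt.step()
        opt.zero_grad()
```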
## 6. Experiments

In this section, we conduct empirical studies to evaluate explanations generated by PaGE-Link. Evaluation is a general challenge when studying model explainability since standard datasets do not have ground truth explanations. Many works (Zhu et al., 2017; Wang et al., 2018) use synthetic benchmarks, but no benchmarks are available for evaluating GNN explanations for heterogeneous LP. Therefore, we generate an augmented graph and a synthetic graph to evaluate explanations. They allow us to generate ground truth explanation patterns and evaluate explainers quantitatively.

### Datasets

_The augmented graph._ AugCitation is constructed by augmenting the AMiner citation network (Zhu et al., 2017). A graph schema is shown in Figure 4(a). The original AMiner graph contains four node types: author, paper, reference (ref), and field of study (fos), and edge types "cites", "writes", and "in". We construct AugCitation by augmenting the original graph with new (author, paper) edges typed "likes" and define a paper recommendation task on AugCitation for predicting the "likes" edges. A new edge \((s,t)\) is augmented if there is at least one concise and informative path \(p\) between them. In our augmentation process, we require the paths \(p\) to have lengths shorter than a hyperparameter \(l_{max}\) and the degrees of nodes on \(p\) (excluding \(s\) and \(t\)) to be bounded by a hyperparameter \(D_{max}\). We highlight these two hyperparameters because of the conciseness and informativeness principles discussed in Section 4. The augmented edge \((s,t)\) is used for prediction. The ground truth explanation is the set of paths satisfying the two hyperparameter requirements. We only take the top \(P_{max}\) paths with the smallest degree sums if there are many qualified paths. We train a GNN-based LP model to predict these new "likes" edges and evaluate explainers by comparing their output explanations with these path patterns as ground truth.

_The synthetic graph._ UserItemAttr is generated to mimic graphs with users, items, and attributes for recommendations. Figure 4(b) shows the graph schema and illustrates the generation process.

Figure 4. The proposed augmented graph AugCitation and the synthetic graph UserItemAttr.

We include three node types: "user", "item", and item attributes ("attr") in the synthetic graph, and we build the different types of edges step by step. Firstly, the "has" edges are created by randomly connecting items to attrs, and the "hidden prefers" edges are created by randomly connecting users to attrs. These edges represent items having attributes and user preferences for those attributes. Next, we randomly sample a set of items for each user, and we connect a (user, item) pair by a "buys" edge if the user "hidden prefers" any attr the item "has". The "hidden prefers" edges correspond to an intermediate step for generating the observable "buys" edges. We remove the "hidden prefers" edges after the "buys" edges are generated since we cannot observe "hidden prefers" information in reality. An example of the rationale behind this generation step is that items have certain attributes, like the item "ice cream" with the attribute "vanilla".
Then, given that a user likes the attribute "vanilla" as hidden information, we observe that the user buys "vanilla ice cream". The next step is to generate more "buys" edges between randomly picked (user, item) pairs if a similar user (two users with many shared item neighbors) buys this item. The idea is like collaborative filtering, which says similar users tend to buy similar items. The final step is generating edges for prediction and their corresponding ground truth explanations, which follows the same augmentation process described above for AugCitation. For UserItemAttr, we have "has" and "buys" as base edges to construct the ground truth, and we create "likes" edges between users and items for prediction.

### Experiment Settings

_The GNN-based LP model._ As described in Section 3, the LP model involves a GNN encoder and a prediction head. We use RGCN (Zhu et al., 2017) as the encoder to learn node representations on heterogeneous graphs and the inner product as the prediction head. We train the model using the cross-entropy loss. On each dataset, our prediction task covers one edge type \(r\). We randomly split the observed edges of type \(r\) into train:validation:test = 7:1:2 as positive samples and draw negative samples from the unobserved edges of type \(r\). Edges of other types are used for GNN message passing but not prediction.

_Explainer baselines._ Existing GNN explanation methods cannot be directly applied to heterogeneous LP. Thus, we extend the popular GNNExplainer (Zhu et al., 2017) and PGExplainer (Zhu et al., 2017) as our baselines. We re-implement a heterogeneous version of their mask matrix and mask predictor similar to the heterogeneous mask learning module in PaGE-Link. For these baselines, we perform mask learning using their original objectives, and we generate edge-induced subgraph explanations from their learned masks. We refer to these two adapted explainers as GNNExp-Link and PGExp-Link below. We do not compare to other search-based explainers like SubgraphX (Zhu et al., 2017) because of their high computational complexity (see Section 5.4). They work well on small graphs as in the original papers, but they are hard to scale to the large and dense graphs we consider for LP.

### Evaluation Results

_Quantitative evaluation._ Both the ground truth and the final explanation output of PaGE-Link are sets of paths. In contrast, the baseline explainers generate edge masks \(\mathcal{M}\). For a fair comparison, we take the intermediate result PaGE-Link learned, also the mask \(\mathcal{M}\), and we follow (Zhu et al., 2017) to compare explainers by their masks. Specifically, for each computation graph, edges in the ground truth paths are treated as positive, and other edges are treated as negative. Then the weights in \(\mathcal{M}\) are treated as the prediction scores of edges and are evaluated with the ROC-AUC metric. A high ROC-AUC score reflects that edges in the ground truth are precisely captured by the mask. The results are shown in Table 4, where PaGE-Link outperforms both baseline explainers.

\begin{table}
\begin{tabular}{l c c|c}
\hline \hline
 & GNNExp-Link & PGExp-Link & PaGE-Link (ours) \\
\hline
AugCitation & 0.829 & 0.586 & **0.928** \\
UserItemAttr & 0.608 & 0.578 & **0.954** \\
\hline \hline
\end{tabular}
\end{table}
Table 4. ROC-AUC scores on learned masks. PaGE-Link outperforms baselines.

For scalability, we showed that PaGE-Link scales linearly in \(O(|\mathcal{E}_{c}^{k}|)\) in Section 5.4. Here we evaluate its scalability empirically by generating ten synthetic graphs with sizes varying from 20 to 5,500 edges in \(\mathcal{G}_{c}\). The results are shown in Figure 1(b), which suggests that the computation time scales linearly in the number of edges.
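The mask-comparison protocol above reduces to standard binary scoring; a minimal sketch with `scikit-learn`, where the array names are our own:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mask_auc(edge_mask_scores: np.ndarray, on_gt_path: np.ndarray) -> float:
    """Section 6.3 protocol: edges on ground-truth paths are positives,
    all other computation-graph edges are negatives, and the learned mask
    weights serve as per-edge prediction scores."""
    return roc_auc_score(on_gt_path.astype(int), edge_mask_scores)
```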
_Qualitative evaluation._ A critical advantage of PaGE-Link is that it generates path explanations, which can capture the connections between node pairs and enjoy better interpretability. In contrast, the top important edges found by baseline methods are often disconnected from the source, the target, or both, which makes their explanations hard for humans to interpret and investigate. We conduct case studies to visualize explanations generated by PaGE-Link on the paper recommendation task on AugCitation. Figure 5 shows a case in which the model recommends to the source author "Vipin Kumar" the target paper titled "Fast and exact network trajectory similarity computation: a case-study on bicycle corridor planning". The top path explanation generated by PaGE-Link goes through the coauthor "Shashi Shekhar", which explains the recommendation as: Vipin Kumar and Shashi Shekhar coauthored the paper "Correlation analysis of spatial time series datasets: a filter-and-refine approach", and Shashi Shekhar wrote the recommended paper. Given the same budget of three edges, explanations generated by baselines are less interpretable.

Figure 6 shows another example with the source author "Huan Liu" and the recommended target paper titled "Using association rules to solve the cold-start problem in recommender systems". PaGE-Link generates paths going through the common fos of the recommended paper and three other papers written by Huan Liu: _p22646_, _p25160_, and _p35294_. We show the PaGE-Link explanation with the top three paths in green. We also show other unexpected fos shared by _p22646_, _p25160_, _p35294_, and the target paper. Note that the explanation paths all have length three, even though there are many paths with length five or longer, e.g., \((a328,p22646,f4,p25260,f4134,p5670)\). Also, the explanation paths go through the fos "Redundancy (engineering)" and "User profile" instead of generic fos like "Artificial intelligence" and "Computer science". This case demonstrates that the explanation paths selected by PaGE-Link are more concise and informative.

## 7. Human Evaluation

The ultimate goal of model explanation is to improve model transparency and help human decision-making. Human evaluation is thus the best way to evaluate the effectiveness of an explainer, and it has been a standard evaluation approach in previous works (Granter et al., 2017; Wang et al., 2018). We conduct a human evaluation by randomly picking 100 predicted links from the test set of AugCitation and generating explanations for each link using GNNExp-Link, PGExp-Link, and PaGE-Link. We design a survey with single-choice questions. In each question, we show respondents the predicted link and those three explanations with both the graph structure and the node/edge type information, similarly as in Figure 5 but excluding method names. The survey is sent to people across graduate students, postdocs, engineers, research scientists, and professors, including people with and without background knowledge about GNNs.
We ask respondents to "please select the best explanation of '_why the model predicts this author will like the recommended paper?_'". At least three answers from different people are collected for each question. In total, 340 evaluations are collected, and 78.79% of them selected explanations by PaGE-Link as the best.

## 8. Conclusion

In this work, we study model transparency and accountability on graphs. We investigate a new task: GNN explanation for heterogeneous LP. We identify three challenges for the task and propose a new path-based method, i.e., PaGE-Link, that produces explanations with _interpretable connections_, is _scalable_, and handles graph _heterogeneity_. PaGE-Link explanations quantitatively improve ROC-AUC by 9-35% over baselines and are chosen by 78.79% of responses as qualitatively more interpretable in human evaluation.

###### Acknowledgements.

We thank Ziniu Hu for the helpful discussions on this work. This work is partially supported by NSF (2211557, 1937599, 2119643), NASA, SRC, Okawa Foundation Grant, Amazon Research Awards, Cisco Research Grant, Picsart Gifts, and Snapchat Gifts.

Figure 5. Explanations (green arrows) by different explainers for the predicted link (\(a2367,p16200\)) (dashed red). The PaGE-Link explanation explains the recommendation by co-authorship, whereas baseline explanations are less interpretable.

Figure 6. Top three paths (green arrows) selected by PaGE-Link for explaining the predicted link (\(a328,p5670\)) (dashed red). The selected paths are short and do not go through a generic field of study like "Computer Science".
2303.01724
Node-Specific Space Selection via Localized Geometric Hyperbolicity in Graph Neural Networks
Many graph neural networks have been developed to learn graph representations in either Euclidean or hyperbolic space, with all nodes' representations embedded in a single space. However, a graph can have hyperbolic and Euclidean geometries at different regions of the graph. Thus, it is sub-optimal to indifferently embed an entire graph into a single space. In this paper, we explore and analyze two notions of local hyperbolicity, describing the underlying local geometry: geometric (Gromov) and model-based, to determine the preferred space of embedding for each node. The two hyperbolicities' distributions are aligned using the Wasserstein metric such that the calculated geometric hyperbolicity guides the choice of the learned model hyperbolicity. As such our model Joint Space Graph Neural Network (JSGNN) can leverage both Euclidean and hyperbolic spaces during learning by allowing node-specific geometry space selection. We evaluate our model on both node classification and link prediction tasks and observe promising performance compared to baseline models.
See Hian Lee, Feng Ji, Wee Peng Tay
2023-03-03T06:04:42Z
http://arxiv.org/abs/2303.01724v1
# Node-Specific Space Selection via Localized Geometric Hyperbolicity in Graph Neural Networks

###### Abstract

Many graph neural networks have been developed to learn graph representations in either Euclidean or hyperbolic space, with all nodes' representations embedded in a single space. However, a graph can have hyperbolic and Euclidean geometries at different regions of the graph. Thus, it is sub-optimal to indifferently embed an entire graph into a single space. In this paper, we explore and analyze two notions of local hyperbolicity, describing the underlying local geometry: geometric (Gromov) and model-based, to determine the preferred space of embedding for each node. The two hyperbolicities' distributions are aligned using the Wasserstein metric such that the calculated geometric hyperbolicity guides the choice of the learned model hyperbolicity. As such our model Joint Space Graph Neural Network (JSGNN) can leverage both Euclidean and hyperbolic spaces during learning by allowing node-specific geometry space selection. We evaluate our model on both node classification and link prediction tasks and observe promising performance compared to baseline models.

Graph neural networks, hyperbolic embedding, graph representation learning, joint space learning.

## I Introduction

Graph neural networks (GNNs) are neural networks that learn from graph-structured data. Many works such as Graph Convolutional Network (GCN) [1], Graph Attention Network (GAT) [2], GraphSAGE [3] and their variants operate on the Euclidean space and have been applied in many areas such as recommender systems [4, 5], chemistry [6] and financial systems [7]. Despite their remarkable accomplishments, their performances are still limited by the representation ability of Euclidean space. They are unable to achieve the best performance in situations where the data exhibit non-Euclidean characteristics such as scale-free, tree-like, or hierarchical structures [8]. As such, hyperbolic spaces have gained traction in research as they have been proven to better embed tree-like, hierarchical structures compared to Euclidean geometry [9, 10]. Intuitively, encoding non-Euclidean structures such as trees in the Euclidean space results in considerable distortion, since the number of nodes in a tree increases exponentially with the depth of the tree while the Euclidean space only grows polynomially [11]. In such cases, hyperbolic geometry serves as an alternative for learning those structures with comparably smaller distortion, as the hyperbolic space has the exponential growth property [8]. As such, hyperbolic versions of GNNs such as HGCN [12], HGNN [13], HGAT [14] and LGCN [15] have been proposed.

Nevertheless, real-world graphs are often complex. They are neither solely made up of Euclidean nor non-Euclidean structures alone but a mixture of geometrical structures. Consider a localized version of geometric hyperbolicity, a concept from geometric group theory measuring how tree-like the underlying space is for each node in the graph (refer to Section III-A for more details). We observe a mixture of local geometric hyperbolicity values in most of the benchmark datasets we employ for our experiments, as seen in Fig. 2. This implies that the graphs contain a mixture of geometries and thus it is not ideal to embed the graphs into a single geometry space, regardless of Euclidean or hyperbolic, as it inevitably leads to undesired structural inductive biases and distortions [8].
Taking a graph containing both lattice-like and tree-like structures as an example, Fig. 1c and Fig. 1f show that 15 of the blue-colored nodes in the tree structure are calculated to have a 2-hop local geometric hyperbolicity value of zero, while 12 of the purple nodes have a value of one and the other 3 purple nodes (at the center of the lattice) have a value of two (the smaller the hyperbolicity value, the more hyperbolic). This localized metric can therefore serve as an indication during learning of which of the two spaces is more suitable to embed the respective nodes.

Here we address this mixture of geometries in a graph and propose Joint Space Graph Neural Network (JSGNN), which performs learning on a joint space consisting of both Euclidean and hyperbolic geometries. To achieve this, we first update all the node features in both Euclidean and hyperbolic spaces independently, giving rise to two sets of updated node features. Then, we employ exponential and logarithmic maps to bridge the two spaces, and an attention mechanism is used as a form of model hyperbolicity, taking into account the underlying structure around each node and the corresponding node features. The learned model hyperbolicity is guided by geometric hyperbolicity and is used to "softly decide" the most suitable embedding space for each node and to reduce the two sets of updated features into only one set. Ideally, a node should be either hyperbolic or Euclidean and not both simultaneously; thus, we also introduce an additional loss term to achieve this non-uniform characteristic.

To the best of our knowledge, the closest work to ours is Geometry Interaction Learning (GIL) [11], which exploits Euclidean and hyperbolic spaces through a dual feature interaction learning mechanism and a probability assembling module. GIL has two branches where a message-passing procedure is performed in Euclidean and hyperbolic spaces simultaneously. Dual feature interaction learning is where the node features in each of the spaces are enhanced based upon the updated features in the other space and their distance similarity. The larger the distance between the different spatial embeddings, the larger the portion of features from the other space that is summed to itself, as seen in Fig. 3. Meanwhile, probability assembling refers to learning node-level weights to determine which of the learned geometric embeddings is more critical. A weighted sum of the classification probabilities from the two spaces yields the final result.

Our approach differs from [11] in some key aspects. Firstly, we leverage the distribution of geometric hyperbolicity to guide our model in deciding, for each node, whether it is better embedded in Euclidean or hyperbolic space, instead of performing feature interaction learning. This is done by aligning the distributions of the learned model hyperbolicity and geometric hyperbolicity using the Wasserstein distance. Our motivation is that if a node is best embedded in one of the two spaces, encoding it in the other space would result in comparably larger distortion. Minimal information would be present in the sub-optimal space to help "enhance" the representation in the better space. Hence, promoting feature interaction could possibly introduce more noise to the branches. The ideal situation is then to learn normalized selection weights that are non-uniform for each node so that we select for each node a single, comparably better space's output embedding. To achieve this, we introduce an additional loss term that promotes non-uniformity. Lastly, we do not require probability assembling since we only have one set of output features at the end of the selection process.
To achieve this, we introduce an additional loss term that promotes non-uniformity. Lastly, we do not require probability assembling since we only have one set of output features at the end of the selection process. ## II Background In this section, we give a brief overview of hyperbolic geometry that will be used in the paper. Readers are referred to [16] for further details. Moreover, we review GAT and its hyperbolic version. ### _Hyperbolic geometry_ A hyperbolic space is a non-Euclidean space with constant negative curvature. There are different but equivalent models to describe the same hyperbolic geometry. In this paper, we work with the Poincare ball model, in which all points are inside a ball. The hyperbolic space with constant negative curvature \(c\) is denoted by \((\mathbb{D}_{c}^{n},g_{\mathbf{x}}^{c})\). It consists of the \(n\)-dimensional hyperbolic manifold \(\mathbb{D}_{c}^{n}=\{\mathbf{x}\in\mathbb{R}^{n}:c\|\mathbf{x}\|<1\}\) with the Riemannian metric \(g_{\mathbf{x}}^{c}=(\lambda_{\mathbf{x}}^{c})^{2}g^{E}\), where \(\lambda_{\mathbf{x}}^{c}=2/(1-c\|\mathbf{x}\|^{2})\) and \(g^{E}=\mathbf{I}_{n}\) is the Euclidean metric. At each \(\mathbf{x}\in\mathbb{D}_{c}^{n}\), there is a tangent space \(\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\), which can be viewed as the first-order approximation of the hyperbolic manifold at \(\mathbf{x}\)[9]. The tangent space is then useful to perform Euclidean operations that we are familiar with but are undefined in hyperbolic spaces. A hyperbolic space and the tangent space at a point are connected through the exponential map \(\exp_{\mathbf{x}}^{c}:\mathcal{T}_{\mathbf{x}}\mathbb{D}_{c}^{n}\to\mathbb{D}_ {c}^{n}\) and logarithmic map \(\log_{\mathbf{x}}^{c}:\mathbb{D}_{c}^{n}\to\mathcal{T}_{\mathbf{x}}\mathbb{D} _{c}^{n}\), specifically defined as follows: \[\exp_{\mathbf{x}}^{c}(\mathbf{v}) =\mathbf{x}\oplus_{c}\Big{(}\tanh\!\left(\sqrt{c}\frac{\lambda_{ \mathbf{x}}^{c}\|\mathbf{v}\|}{2}\right)\!\frac{\mathbf{v}}{\sqrt{c}\|\mathbf{ v}\|}\Big{)}, \tag{1}\] \[\log_{\mathbf{x}}^{c}(\mathbf{y}) =\frac{2}{\sqrt{c}\lambda_{\mathbf{x}}^{c}}\tanh^{-1}(\sqrt{c} \|-\mathbf{x}\oplus_{c}\mathbf{y}\|)\frac{-\mathbf{x}\oplus_{c}\mathbf{y}\,}{ \|-\mathbf{x}\oplus_{c}\mathbf{y}\|}, \tag{2}\] where \(\mathbf{x},\mathbf{y}\in\mathbb{D}_{c}^{n},\mathbf{v}\in\mathcal{T}_{\mathbf{ x}}\mathbb{D}_{c}^{n}\) and \(\oplus_{c}\) is the Mobius addition. For convenience, we write \(\mathbb{D}\) for \(\mathbb{D}_{c}^{n}\) if no confusion arises. A salient feature of hyperbolic geometry is that it is "thinner" than Euclidean geometry. Visually, more points can be squeezed in a hyperbolic subspace having the same shape as its Euclidean counterpart, due to the different metrics in the two spaces. We discuss the graph version in Section III-A below. ### _Graph attention and message passing_ Consider a graph \(G=(V,E)\), where \(V\) is the set of vertices, \(E\) is the set of edges, and each node in \(V\) is associated with Fig. 1: Example graphs. (a) Lattice-like graph. (b) A tree. (c) A combined graph containing both lattice and tree structure. (d-f) The histograms reflect the geometric hyperbolicity in the respective graphs. a node feature \(h_{v}\). Recall that GAT is a GNN that updates node representations using message passing by updating edge weights concurrently. 
### _Graph attention and message passing_

Consider a graph \(G=(V,E)\), where \(V\) is the set of vertices, \(E\) is the set of edges, and each node in \(V\) is associated with a node feature \(h_{v}\). Recall that GAT is a GNN that updates node representations using message passing while updating edge weights concurrently. Specifically, for one layer of GAT [2], the node features are updated as follows:
\[h_{v}^{{}^{\prime}}=\sigma\Big(\sum_{j\in N(v)}\alpha_{vj}\mathbf{W}h_{j}\Big), \tag{3}\]
\[\alpha_{vj}=\frac{\exp(e_{vj})}{\sum_{k\in N(v)}\exp(e_{vk})}, \tag{4}\]
\[e_{vj}=\mathrm{LeakyReLU}(\mathbf{a}^{\intercal}[\mathbf{W}h_{v}\parallel\mathbf{W}h_{j}]), \tag{5}\]
where \(\parallel\) denotes the concatenation operation, \(\sigma\) denotes an activation function, \(\mathbf{a}\) represents the learnable attention vector, \(\mathbf{W}\) is the weight matrix for a linear transformation and \(\alpha\) denotes the normalized attention scores. This model has proven successful in many graph-related machine learning tasks.

### _Hyperbolic attention model_

To derive a hyperbolic version of GAT, we adopt the following strategy. We perform feature aggregation in the tangent spaces of points in the hyperbolic space. Features are mapped between the hyperbolic space and tangent spaces using the pair of exponential and logarithmic functions \(\exp_{\mathbf{x}}^{c}\) and \(\log_{\mathbf{x}}^{c}\). With this, we denote Euclidean features as \(h_{\mathbb{R}}\) and hyperbolic features as \(h_{\mathbb{D}}\). One layer of message propagation in the hyperbolic GAT is then as follows [11]:
\[h_{v,\mathbb{D}}^{{}^{\prime}}=\sigma\Big(\sum_{j\in N(v)}\alpha_{vj}\log_{\mathbf{o}}^{c}(\mathbf{W}\otimes_{c}h_{j,\mathbb{D}}\oplus_{c}\mathbf{b})\Big), \tag{6}\]
\[e_{vj}=\mathrm{LeakyReLU}\Big(\mathbf{a}^{\intercal}\Big[\hat{h}_{v}\parallel\hat{h}_{j}\Big]\times d_{\mathbb{D}}(h_{v,\mathbb{D}},h_{j,\mathbb{D}})\Big), \tag{7}\]
\[d_{\mathbb{D}}(h_{v,\mathbb{D}},h_{j,\mathbb{D}})=\frac{2}{\sqrt{c}}\tanh^{-1}(\sqrt{c}\|-h_{v,\mathbb{D}}\oplus_{c}h_{j,\mathbb{D}}\|), \tag{8}\]
\[\alpha_{vj}=\mathrm{softmax}_{j}(e_{vj}), \tag{9}\]
where \(d_{\mathbb{D}}\) is the normalized hyperbolic distance, \(\hat{h}_{j}=\log_{\mathbf{o}}^{c}(\mathbf{W}\otimes_{c}h_{j,\mathbb{D}})\), while \(\otimes_{c}\) and \(\oplus_{c}\) represent the Möbius matrix multiplication and addition, respectively.

## III Joint Space Learning

In this section, we propose our joint space learning model. The model relies on comparing two different notions of hyperbolicity: geometric hyperbolicity and model hyperbolicity. We start by introducing the former, which also serves as the motivation for the design of our GNN model.

### _Local geometry and geometric hyperbolicity_

Gromov's \(\delta\)-hyperbolicity is a mathematical notion from geometric group theory that measures how tree-like a metric space is in terms of its metric or distance structure [17, 12]. The precise definition is given as follows.

**Definition 1** (Gromov 4-point \(\delta\)-hyperbolicity [18] p.410).: _For a metric space \(X\) with metric \(d(\cdot,\cdot)\), it is \(\delta\)-hyperbolic, where \(\delta\geq 0\), if the four-point condition holds:_
\[d(x,y)+d(z,t)\leq\max\{d(x,z)+d(y,t),d(z,y)+d(x,t)\}+2\delta, \tag{10}\]
_for any \(x,y,z,t\in X\). \(X\) is hyperbolic if it is \(\delta\)-hyperbolic for some \(\delta\geq 0\)._

This condition of \(\delta\)-hyperbolicity is equivalent to the Gromov thin triangle condition. For example, any tree is (0-)hyperbolic, and \(\mathbb{R}^{n}\), where \(n\geq 2\), is not hyperbolic. However, if \(X\) is a compact metric space, then \(X\) is always \(\delta\)-hyperbolic for some \(\delta\) large enough, such as \(\delta=\mathrm{diameter}(X)\). Therefore, it is insufficient to just label \(X\) as hyperbolic or not.
We want to quantify hyperbolicity such that a space with smaller hyperbolicity resembles more of a tree. Inspired by the four-point condition, we define the \(\infty\)-version and the \(1\)-version of hyperbolicity as follows.

Fig. 2: Distributions of geometric hyperbolicity for all datasets, obtained by computing \(\delta_{G_{v},\infty}\) on each node's 2-hop subgraph.

**Definition 2**.: _For a compact metric space \(X\) and \(x,y,z,t\in X\), denote by \(\tau_{X}(x,y,z,t)\) the infimum of all \(\delta\geq 0\) such that (10) holds for \(x,y,z,t\). Define_
\[\delta_{X,\infty}=\sup_{x,y,z,t\in X}\tau_{X}(x,y,z,t),\]
\[\delta_{X,1}=\mathbb{E}_{x,y,z,t\sim\mathrm{Unif}(X^{4})}[\tau_{X}(x,y,z,t)],\]
_where \(\mathrm{Unif}\) represents the uniform distribution._

In order for these invariants to be useful for graphs, we require them to be almost identical for graphs with similar structures. We shall see that this is indeed the case. Before stating the result, we need a few more concepts. Let \(\mathcal{G}\) be the space of weighted, undirected simple graphs. For most experiments the given graphs are unweighted; however, aggregation mechanisms such as attention essentially generate weights for the edges. Therefore, for both theoretical and practical reasons, it makes sense to expand the graph domain to include weighted graphs. Each \(G=(V,E)\in\mathcal{G}\) has a canonical path metric \(d_{G}\), and \(d_{G}\) makes \(G\) into a metric space including non-vertex points on the edges. For \(\epsilon>0\), there is the subspace \(\mathcal{G}_{\epsilon}\) of \(\mathcal{G}\) consisting of graphs whose edge weights are greater than \(\epsilon\). On the other hand, there is a metric on the spaces \(\mathcal{G}\) and \(\mathcal{G}_{\epsilon}\), called the Gromov-Hausdorff metric ([18] p.72). To define it, we first introduce the Hausdorff distance. Let \(X\) and \(Y\) be two subsets of a metric space \((M,d)\). Then the Hausdorff distance \(d_{H}(X,Y)\) between \(X\) and \(Y\) is
\[d_{H}(X,Y)=\max\{\sup_{x\in X}d(x,Y),\sup_{y\in Y}d(X,y)\},\]
where \(d(x,Y)=\inf_{y\in Y}d(x,y)\) and \(d(X,y)=\inf_{x\in X}d(x,y)\). The Hausdorff distance measures, in the worst case, how far a point in \(X\) is away from \(Y\) and vice versa. In general, we also want to compare spaces that do not a priori belong to a common ambient space. For this, if \(X,Y\) are two compact metric spaces, then their Gromov-Hausdorff distance \(d_{GH}(X,Y)\) is defined as the infimum of all numbers \(d_{H}(f(X),g(Y))\) over all metric spaces \(M\) and all isometric embeddings \(f:X\to M,g:Y\to M\). Intuitively, the Gromov-Hausdorff distance measures how far \(X\) and \(Y\) are from being isometric. The following is proved in the Appendix.

**Proposition 1**.: _Suppose \(\mathcal{G}\) and its subspaces have the Gromov-Hausdorff metric. Then \(\delta_{G,\infty}\) is Lipschitz continuous w.r.t. \(G\in\mathcal{G}\) and \(\delta_{G,1}\) is continuous w.r.t. \(G\in\mathcal{G}_{\epsilon}\) for any \(\epsilon>0\)._

Consider a graph \(G\). We fix either \(\delta_{G,\infty}\) or \(\delta_{G,1}\) as a measure of hyperbolicity and apply it to each local neighborhood of \(G\). To be more precise, studies [19, 20] show that many popular GNN models have a shallow structure. It is customary to have a \(2\)-layer network, possibly due to the oversmoothing [21, 22, 23] and oversquashing [24] phenomena. In such models, each node only aggregates information in a small neighborhood. Therefore, if we fix a small \(k\) and let \(G_{v}\) be the subgraph of the \(k\)-hop neighborhood of \(v\in V\), then it is more appropriate to study the hyperbolicity \(\delta_{v}\), either \(\delta_{G_{v},\infty}\) or \(\delta_{G_{v},1}\), of \(G_{v}\). For our experiments, the former is utilized. We call \(\delta_{v}\) the _geometric hyperbolicity_ at node \(v\). The collection \(\Delta_{V}=\{\delta_{v}:v\in V\}\) allows us to obtain an empirical distribution \(\mu_{G}\) of geometric hyperbolicity on the sample space \(\mathbb{R}_{\geq 0}\). For instance, we can build histograms to acquire the distributions as observed in Fig. 2. We see, for example, that for Cora a substantial number of nodes have small (local) hyperbolicity, in contrast with many works that claim Cora to be relatively Euclidean due to its high global hyperbolicity value [12, 25]. On the other hand, Airport is argued to be globally hyperbolic, but a large proportion of its nodes have large local hyperbolicity. However, this is not a contradiction as we are considering the local structures of the graph. We call \(\mu_{G}\) the _distribution of geometric hyperbolicity_. It depends only on \(G\) and \(k\).
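A brute-force sketch of computing \(\delta_{G_{v},\infty}\), assuming small \(k\)-hop ego-graphs so that iterating over node quadruples is feasible (as with the 2-hop subgraphs used for Fig. 2); function names are our own.

```python
from itertools import combinations
import networkx as nx

def delta_inf(G: nx.Graph) -> float:
    """Gromov four-point hyperbolicity delta_{G,inf} (Definition 2):
    for each quadruple, (largest pair-sum - second largest) / 2."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    best = 0.0
    for x, y, z, t in combinations(G.nodes(), 4):
        s1 = d[x][y] + d[z][t]
        s2 = d[x][z] + d[y][t]
        s3 = d[x][t] + d[y][z]
        a, b, _ = sorted((s1, s2, s3), reverse=True)
        best = max(best, (a - b) / 2.0)
    return best

def local_hyperbolicity(G: nx.Graph, k: int = 2) -> dict:
    """delta_v for every node v, computed on its k-hop ego-graph G_v."""
    return {v: delta_inf(nx.ego_graph(G, v, radius=k)) for v in G.nodes()}
```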
### _Space selection and model hyperbolicity_

In this section, we describe the backbone of our model and introduce the notion of model hyperbolicity. Our model consists of two branches, one using Euclidean geometry and the other using hyperbolic geometry. For the Euclidean part, we use GAT for message propagation, while for the hyperbolic part, we employ the HGAT of Section II-C. After the respective message propagation, we would have two sets of updated node embeddings, the Euclidean embedding \(Z_{\mathbb{R}}\) and the hyperbolic embedding \(Z_{\mathbb{D}}\). The two sets of embeddings are combined into a single embedding \(Z=\{z_{v},v\in V\}\) through an attention mechanism that serves as a space selection procedure. The attention mechanism is performed in a Euclidean space. Thus, the hyperbolic embeddings are first mapped into the tangent space using the logarithmic map. Mathematically, the normalized attention score indicating whether a node should be embedded in the hyperbolic space \(\beta_{v,\mathbb{D}}\) or Euclidean space \(\beta_{v,\mathbb{R}}\) is as follows:
\[w_{v,\mathbb{R}}=\mathbf{q}^{\intercal}\tanh(\mathbf{M}z_{v,\mathbb{R}}+\mathbf{b}), \tag{11}\]
\[w_{v,\mathbb{D}}=\mathbf{q}^{\intercal}\tanh(\mathbf{M}\log_{\mathbf{o}}^{c}(z_{v,\mathbb{D}})+\mathbf{b}), \tag{12}\]
\[\beta_{v,\mathbb{R}}=\frac{\exp(w_{v,\mathbb{R}})}{\exp(w_{v,\mathbb{R}})+\exp(w_{v,\mathbb{D}})}, \tag{13}\]
\[\beta_{v,\mathbb{D}}=\frac{\exp(w_{v,\mathbb{D}})}{\exp(w_{v,\mathbb{R}})+\exp(w_{v,\mathbb{D}})}, \tag{14}\]
where \(\mathbf{q}\) refers to the learnable space selection attention vector, \(\mathbf{M}\) is a learnable weight matrix, \(\mathbf{b}\) denotes a learnable bias and \(\beta_{v,\mathbb{D}}+\beta_{v,\mathbb{R}}=1\), for all \(v\in V\). The two sets of space-specific node embeddings can then be combined via a convex combination using the learned weights as follows:
\[z_{v}=\beta_{v,\mathbb{R}}z_{v,\mathbb{R}}+\beta_{v,\mathbb{D}}\log_{\mathbf{o}}^{c}(z_{v,\mathbb{D}}),\quad\forall\,v\in V. \tag{15}\]
This gives one layer of the model architecture of JSGNN, as illustrated in Fig. 3. The parameter \(\beta_{v,\mathbb{R}},v\in V\) controls whether the combined output, consisting of both hyperbolic and Euclidean components, should rely more on the hyperbolic components or not.

Fig. 3: Comparison between JSGNN and GIL [11] in leveraging Euclidean and hyperbolic spaces. (a) Soft space selection mechanism of JSGNN where trainable selection weights \(\beta_{v,\mathbb{R}},\beta_{v,\mathbb{D}}\) are non-uniform, effectively selecting the better of the two spaces considered. (b) Feature interaction mechanism of GIL where \(\zeta,\zeta^{\prime}\in\mathbb{R}\) are trainable weights and \(d_{\mathbb{D}},d_{\mathbb{R}}\) are the hyperbolic distance (cf. (8)) and Euclidean distance respectively. The node embeddings of both spaces in GIL are adjusted based on distance, potentially introducing more noise to the branches as there is minimal information in the sub-optimal space to "enhance" the representation in the better space.
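A sketch of (11)-(15) as a PyTorch module, assuming the hyperbolic embeddings have already been mapped to the tangent space at the origin; the module and argument names are illustrative.

```python
import torch
import torch.nn as nn

class SpaceSelector(nn.Module):
    """Soft space selection of Eqs. (11)-(15): per-node attention over the
    Euclidean embedding and the log-mapped hyperbolic embedding."""
    def __init__(self, dim: int):
        super().__init__()
        self.M = nn.Linear(dim, dim)           # weight M and bias b
        self.q = nn.Parameter(torch.randn(dim))

    def forward(self, z_euc: torch.Tensor, z_hyp_log: torch.Tensor):
        w_e = torch.tanh(self.M(z_euc)) @ self.q       # Eq. (11)
        w_h = torch.tanh(self.M(z_hyp_log)) @ self.q   # Eq. (12)
        beta = torch.softmax(torch.stack([w_e, w_h], dim=-1), dim=-1)  # (13)-(14)
        z = beta[..., :1] * z_euc + beta[..., 1:] * z_hyp_log          # Eq. (15)
        return z, beta
```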
We call \(\beta_{v,\mathbb{R}}\) the _model hyperbolicity_ at the node \(v\). The notion of model hyperbolicity depends on node features as well as the explicit GNN model. Similar to geometric hyperbolicity, the collection \(\Gamma_{G}=\{\beta_{v,\mathbb{R}}:v\in V\}\) gives rise to an empirical distribution \(\nu_{G}\) on \([0,1]\). We call \(\nu_{G}\) the _distribution of model hyperbolicity_.

To motivate the next subsection, from (15), we notice that the output depends smoothly on \(\beta_{v,\mathbb{R}}\). If we wish to have similar outputs for nodes with similar neighborhood structures and features, we want their selection weights to have similar values. On the other hand, we have seen (cf. Proposition 1) that geometric hyperbolicities, which can be computed given \(G\), are similar for nodes with similar neighborhoods. This suggests that we may use geometric hyperbolicities to "guide" the choice of model hyperbolicities.

### _Model hyperbolicity vs. geometric hyperbolicity_

We have introduced geometric and model hyperbolicities in the previous subsections. In this subsection, we explore the interconnections between these two notions. Let \(\Theta\) be the parameters of a proposed GNN model. We assume that the model has the pipeline shown in Fig. 4. Given node features \(\{h_{v},v\in V\}\) and model parameters \(\Theta\), the model generates (embedding) features \(\{z_{v},v\in V\}\) and selection weights or model hyperbolicity \(\{\beta_{v,\mathbb{R}},v\in V\}\) in the intermediate stage. For each \(v\in V\), there is a combination function \(\phi_{v}\) such that the final output \(\{\hat{y}_{v},v\in V\}\) satisfies \(\hat{y}_{v}=\phi_{v}(z_{v},\beta_{v})\).

Fig. 4: The model pipeline is shown in the (blue) dashed box, while the geometric hyperbolicity can be computed independently of the model.

In principle, we want to compare \(\{\beta_{v,\mathbb{R}},v\in V\}\) and \(\{\delta_{v},v\in V\}\) so that the geometric hyperbolicity guides the choice of model hyperbolicity. However, comparing pairwise \(\beta_{v}\) and \(\delta_{v}\) for each \(v\in V\) may lead to overfitting. An alternative is to compare their respective distributions \(\nu_{G}\) and \(\mu_{G}\), or even coarser statistics (e.g., the mean) of \(\nu_{G}\) and \(\mu_{G}\) (cf. Fig. 5). The latter may lead to underfitting. We perform an ablation study on the different comparison methods in Section IV-E.

Fig. 5: Different ways of comparing geometric and model hyperbolicities.

We advocate choosing the middle ground by comparing the distributions \(\mu_{G}\) and \(\nu_{G}\). The former can be computed readily as long as the ambient graph \(G\) is given, while the latter is a part of the model that plays a crucial role in feature aggregation at each node. Therefore, \(\mu_{G}\) can be pre-determined but not \(\nu_{G}\). We propose to use the known \(\mu_{G}\) to constrain \(\nu_{G}\) and thus the model parameters \(\Theta\). A widely used comparison tool is the Wasserstein metric.

**Definition 3** (Wasserstein distance).: _Given \(p\geq 1\), the \(p\)-Wasserstein distance metric [26] measures the difference between two probability distributions [27]. Let \(\Pi(\nu_{G},\mu_{G})\) be the set of all joint distributions for random variables \(x\) and \(y\) where \(x\sim\nu_{G}\) and \(y\sim\mu_{G}\). Then the \(p\)-Wasserstein distance between \(\mu_{G}\) and \(\nu_{G}\) is_
\[W_{p}(\nu_{G},\mu_{G})=\left\{\inf_{\gamma\in\Pi(\nu_{G},\mu_{G})}\mathbb{E}_{(x,y)\sim\gamma}\|x-y\|^{p}\right\}^{1/p}. \tag{16}\]
To compute the Wasserstein distance exactly is costly, given that the solution of an optimal transport problem is required [28, 29]. However, for one-dimensional distributions, the \(p\)-Wasserstein distance can be computed by ordering the _samples_ from the two distributions and then computing the average \(p\)-distance between the ordered samples [28, 30]. In ideal circumstances, considering only the distributions does not lose much information. We first notice that for both \(\beta_{v,\mathbb{R}}\) and \(\delta_{v}\), a smaller value means more hyperbolic in an appropriate sense. Suppose \(\beta_{v,\mathbb{R}}\) is increasing w.r.t. \(\delta_{v}\), i.e., \(\delta_{v}\leq\delta_{u}\) implies that \(\beta_{v,\mathbb{R}}\leq\beta_{u,\mathbb{R}}\). Then, \(W_{2}(\mu_{G},\nu_{G})=\sqrt{\frac{1}{|V|}\sum_{v\in V}|\beta_{v,\mathbb{R}}-\delta_{v}|^{2}}\).
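The sorted-samples computation admits a direct, differentiable sketch, assuming the two samples have equal size and are scaled comparably:

```python
import torch

def wasserstein_1d(model_hyp: torch.Tensor, geo_hyp: torch.Tensor, p: int = 2):
    """p-Wasserstein distance between two 1-D empirical distributions of
    equal size, computed by sorting the samples (Section III-C)."""
    a, _ = torch.sort(model_hyp)
    b, _ = torch.sort(geo_hyp)
    return ((a - b).abs() ** p).mean() ** (1.0 / p)
```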
For the node classification task, \(L_{\text{task}}\) refers to the cross-entropy loss over all labeled nodes, while for link prediction, it refers to the cross-entropy loss with negative sampling. This completes the description of the JSGNN model. We speculate that the non-uniformity component \(L_{\text{nu}}\) should push the model hyperbolicities towards the two extremes \(0\) and \(1\). On the other hand, as we have seen in Section III-C, to compute \(W_{2}(\nu_{G},\mu_{G})\), we need to order \((\delta_{v})_{v\in V}\) and \((\beta_{v,\mathbb{R}})_{v\in V}\) respectively, and compute their pairwise differences. Therefore, \(W_{2}(\nu_{G},\mu_{G})\) aligns the shapes of \(\nu_{G}\) and \(\mu_{G}\).

## IV Experiments

In this section, we evaluate JSGNN on node classification (NC) and link prediction (LP) tasks against seven baselines.

### _Datasets_

A total of seven benchmark datasets are employed for both NC and LP. Specifically, three citation datasets: Cora, Citeseer, Pubmed; a flight network: Airport; a disease propagation tree: Disease; an Amazon co-purchase graph dataset: Photo; and a coauthor dataset: CS. The statistics of the datasets are shown in Table I.

### _Baselines and settings_

We compare against three Euclidean methods, GCN [1], GraphSAGE [3] and GAT [2], and four hyperbolic models, HGCN [12], HGNN [13], HGAT [14] and LGCN [15]. We also consider GIL [11], which, similar to JSGNN, leverages both hyperbolic and Euclidean spaces. For all models, the hidden units are set to 16. We set the early stopping patience to 100 epochs with a maximum limit of 1000 epochs. The hyperparameter settings for the baselines are the same as in [11] if given. The only difference is that the hyperparameter _h-drop_ for GIL in [11] (which determines the dropout applied to the weight associated with the hyperbolic space embedding) is set to 0 for all datasets, as setting a large value essentially explicitly chooses one single space. Otherwise, the hyperparameters are chosen to yield the best performance. For JSGNN, we perform a grid search on the following search spaces: Learning rate: [0.01, 0.005]; Dropout probability: [0.0, 0.1, 0.5, 0.6]; Number of layers: [1, 2, 3]; \(\omega_{\text{nu}}\) and \(\omega_{\text{was}}\): [1.0, 0.5, 0.2, 0.1, 0.01, 0.005]; **q** (cf. (11)): [16, 32, 64]. The Wasserstein-\(2\) distance is employed in all variants of JSGNN.

### _Node classification_

For the node classification task, each of the nodes in a dataset belongs to one of the \(C\) classes in the dataset. With the final set of node representations, we aim to predict the labels of nodes that are in the testing set. To test the performance of each model under both semi-supervised and fully-supervised settings, two data splits are used in the node classification task for the Cora, Citeseer and Pubmed datasets. In the first split, we followed the standard split for semi-supervised settings used in [1, 2, 3, 11, 22, 31, 32, 33, 34]. The training set consists of 20 training examples per class, while the validation set and test set consist of 500 samples and 1,000 samples, respectively.1 Meanwhile, in the second split, all labels are utilized and the percentages of training, validation, and test sets are set as 60/20/20%. For the Photo and CS datasets, the labeled nodes are also split into three sets, where 60% of the nodes made up the training set, and the rest of the nodes were divided equally to form the validation and test sets. The Airport and Disease datasets were split in similar settings as [11].
Footnote 1: Note that the top results on [https://paperswithcode.com/sota/node-classification-on-cora](https://paperswithcode.com/sota/node-classification-on-cora) used different data splits (either semi-supervised settings with a larger number of training samples or fully-supervised settings such as the 60/20/20% split), which give much higher accuracies.

In Table II and Table III, the mean accuracy with standard deviation is reported for node classification, except for the Airport and Disease datasets, where the mean F1 score is reported. Our empirical results demonstrate that JSGNN frequently outperforms the baselines, especially HGAT and GAT, which are the building blocks of JSGNN. This shows the superiority of using both Euclidean and hyperbolic spaces. Results also show that JSGNN frequently performs better than GIL, indicating that our method of incorporating two spaces for graph learning is potentially more effective. We also observe that Euclidean models such as GCN, GAT, and GraphSAGE perform better than hyperbolic models in general on the Cora, Citeseer, and Pubmed datasets for both splits. Meanwhile, hyperbolic models achieve better results on the CS, Photo, Airport, and Disease datasets. This means that Euclidean features are more significant for representing the Cora, Citeseer and Pubmed datasets, while hyperbolic features are more significant for the others. Nevertheless, JSGNN is able to perform relatively well across all datasets. We note that JSGNN exceeds the performance of single-space baselines on all datasets except for Disease. This can be explained by the fact that Disease consists of a perfect tree and thus does not exhibit different hyperbolicities in the graph. We also particularly note that the difference in results between single-space models using only the Euclidean embedding space and hyperbolic models is not significant. This means that many of the node labels can potentially be predicted even without the best representation from the right space. This might be the reason why the gain in performance for the node classification task is not exceptional from embedding nodes in the better space. Nevertheless, we still see improvements in predictions for cases where there is a mixture of local hyperbolicities. Moreover, embedding nodes in a more suitable space can benefit other tasks that require more accurate representations, such as link prediction.

### _Link prediction_

We employ the Fermi-Dirac decoder with a distance function to model the probability of an edge based on our final output embedding, similar to [11, 12, 35]. The probability that an edge exists is given by \(\mathbb{P}(e_{ij}\in E\,|\,\Theta)=(e^{(d(x_{i},x_{j})-r)/t}+1)^{-1}\), where \(r,t>0\) are hyperparameters and \(d\) is the distance function. The edges of the datasets are randomly split into 85/5/10% for training, validation, and testing. The average ROC AUC for link prediction is recorded in Table IV. We observe that JSGNN performs better than the baselines in most cases. For the link prediction task, we notice that hyperbolic models consistently outperform Euclidean models by a significant margin. In such a situation, predicting the existence of edges benefits from dual-space models, i.e., GIL and JSGNN, which potentially provide better representations with reduced distortions.
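For reference, the Fermi-Dirac decoder and a negative-sampling cross-entropy for link prediction might be sketched as follows; the values of \(r\) and \(t\) and the helper names are illustrative, not the paper's settings:

```python
import torch

def fermi_dirac_prob(dist: torch.Tensor, r: float = 2.0, t: float = 1.0) -> torch.Tensor:
    """P(e_ij in E) = 1 / (exp((d(x_i, x_j) - r) / t) + 1), written as a sigmoid."""
    return torch.sigmoid(-(dist - r) / t)

def lp_loss(pos_dist: torch.Tensor, neg_dist: torch.Tensor,
            r: float = 2.0, t: float = 1.0) -> torch.Tensor:
    """Cross-entropy over observed edges and sampled non-edges."""
    eps = 1e-12
    pos = torch.log(fermi_dirac_prob(pos_dist, r, t) + eps)
    neg = torch.log(1.0 - fermi_dirac_prob(neg_dist, r, t) + eps)
    return -(pos.mean() + neg.mean())
```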
### _Ablation study_

We conduct an ablation study on the node classification task by introducing three variants of JSGNN to validate the effectiveness of the different components introduced:

* Without the non-uniformity constraint (w/o NU): This does not enforce the model to learn non-uniform selection weights.
* Without the Wasserstein metric (w/o \(W_{2}\)): The learning of model hyperbolicity is not guided by geometric hyperbolicity.
* Without the non-uniformity loss and Wasserstein distance (w/o NU & \(W_{2}\)): Only guided by the cross-entropy loss, i.e., \(\omega_{\text{nu}}=0,\omega_{\text{was}}=0\) (cf. (18)).

Table V summarizes the results of our study, from which we observe that all variants of JSGNN with some components discarded perform worse than the full model. Moreover, JSGNN without \(W_{2}\) always achieves better results than JSGNN without NU and \(W_{2}\), signifying the importance of selecting the better of the two spaces instead of combining the features with relatively uniform weights. Similarly, JSGNN without NU performs better than JSGNN without NU and \(W_{2}\) in most cases, suggesting that incorporating geometric hyperbolicity through distribution alignment does help to improve the model.

To further analyze our model, we present a study regarding our method of incorporating the guidance of geometric hyperbolicity through distribution alignment. The results are shown in Table VI. We test and analyze empirically different variants of our model based on the different comparisons shown in Fig. 5. Pairwise match indicates minimizing the mean squared error between elements of \(\Gamma_{G}\) and \(\Delta_{V}\) (without sorting), while mean match minimizes the squared loss between the means of \(\Gamma_{G}\) and \(\Delta_{V}\). We observe that comparing the distributions \(\nu_{G}\) and \(\mu_{G}\) consistently outperforms comparing their means, demonstrating the insufficiency of utilizing coarse statistics for supervision. Secondly, pairwise matching gave better results than mean matching, though still lower than distribution matching, suggesting the importance of fine-scale information and yet the need to avoid potential overfitting.

### _Analysis of hyperbolicities_

We have speculated on the effects of different components of our proposed model at the end of Section III-D. To verify that our model can learn model hyperbolicity that is non-uniform and similar in distribution to geometric hyperbolicity, we analyze the learned model hyperbolicities \((\beta_{v,\mathbb{R}})_{v\in V}\) of JSGNN and JSGNN w/o NU & \(W_{2}\) for the node classification task. Specifically, we extract the learned values from the first two layers of JSGNN and its variant for ten separate runs. The learned values from the first two layers were then averaged before determining \(W_{2}(\nu_{G},\mathrm{Unif})\) and \(W_{2}(\nu_{G},\mu_{G})\). In Fig. 6, it can be inferred that JSGNN's learned model hyperbolicity is always less uniform than that of JSGNN w/o NU & \(W_{2}\), given JSGNN's larger \(W_{2}(\nu_{G},\mathrm{Unif})\) score, demonstrating a divergence from the uniform distribution. Meanwhile, for most cases, JSGNN's \(W_{2}(\nu_{G},\mu_{G})\) is smaller than that of JSGNN w/o NU & \(W_{2}\), suggesting that the shapes of \(\nu_{G}\) and \(\mu_{G}\) of JSGNN are relatively more similar. At times, JSGNN's \(W_{2}(\nu_{G},\mu_{G})\) is larger than that of JSGNN w/o NU & \(W_{2}\), suggesting a tradeoff between NU and \(W_{2}\) as we choose the optimal combination for the model's best performance.

Fig. 6: Analysis of hyperbolicities on different datasets. (a) \(W_{2}(\nu_{G},\mathrm{Unif})\). (b) \(W_{2}(\nu_{G},\mu_{G})\).
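The three comparison strategies from the ablation of Table VI (pairwise match, mean match, and distribution match; cf. Fig. 5) can be sketched as follows, again with illustrative names:

```python
import torch

def pairwise_match(beta: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Node-by-node mean squared error, without sorting (may overfit)."""
    return (beta - delta).pow(2).mean()

def mean_match(beta: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Squared gap between the means, a coarse statistic (may underfit)."""
    return (beta.mean() - delta.mean()).pow(2)

def distribution_match(beta: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """W_2 between the empirical distributions: sort, then pair up."""
    return (torch.sort(beta).values
            - torch.sort(delta).values).pow(2).mean().sqrt()
```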
## V Conclusion

In this paper, we have explored the learning of GNNs in a joint space setting, given that different regions of a graph can have different geometrical characteristics. In these situations, it would be beneficial to embed different regions of the graph in different spaces that are better suited for their underlying structures, to reduce the distortions incurred while learning node representations. Our method JSGNN utilizes a soft attention mechanism with a non-uniformity constraint and distribution alignment between model and geometric hyperbolicities to select the best space-specific feature for each node. This indirectly finds the space that is best suited for each node. Experimental results on node classification and link prediction demonstrate the effectiveness of JSGNN against various baselines. In future work, we aim to further improve our model with an adaptive mechanism to determine the appropriate, node-level specific neighborhood to account for each node's hyperbolicity.

## Acknowledgments

The first author is supported by Shopee Singapore Private Limited under the Economic Development Board Industrial Postgraduate Programme (EDB IPP). The programme is a collaboration between Shopee and Nanyang Technological University, Singapore. The last two authors are supported by the Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE-T2EP20220-0002, and the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research and Development Programme.

## Proof of Proposition 1

Proof.: We first consider \(\delta_{G,\infty}\). For two graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), let \(f_{1}:G_{1}\to M\), \(f_{2}:G_{2}\to M\) be isometric embeddings into a metric space \((M,d)\) such that \(d_{GH}(G_{1},G_{2})=d_{H}(f_{1}(G_{1}),f_{2}(G_{2}))\). Denote \(d_{GH}(G_{1},G_{2})\) by \(\eta\). For \(x,y,z,t\) in \(G_{1}\), there are \(x^{\prime},y^{\prime},z^{\prime},t^{\prime}\in G_{2}\) such that \(d(f_{1}(x),f_{2}(x^{\prime}))\), \(d(f_{1}(y),f_{2}(y^{\prime}))\), \(d(f_{1}(z),f_{2}(z^{\prime}))\), \(d(f_{1}(t),f_{2}(t^{\prime}))\) are all bounded by \(\eta\). We now estimate:

\[\begin{split} d_{G_{1}}(x,y)+d_{G_{1}}(z,t)&=d(f_{1}(x),f_{1}(y))+d(f_{1}(z),f_{1}(t))\\ &\leq d(f_{2}(x^{\prime}),f_{2}(y^{\prime}))+d(f_{2}(z^{\prime}),f_{2}(t^{\prime}))+4\eta\\ &=d_{G_{2}}(x^{\prime},y^{\prime})+d_{G_{2}}(z^{\prime},t^{\prime})+4\eta\\ &\leq\max\{d_{G_{2}}(x^{\prime},z^{\prime})+d_{G_{2}}(y^{\prime},t^{\prime}),d_{G_{2}}(z^{\prime},y^{\prime})+d_{G_{2}}(x^{\prime},t^{\prime})\}+2\delta_{G_{2},\infty}+4\eta\\ &\leq\max\{d(f_{1}(x),f_{1}(z))+d(f_{1}(y),f_{1}(t)),d(f_{1}(z),f_{1}(y))+d(f_{1}(x),f_{1}(t))\}+2\delta_{G_{2},\infty}+8\eta\\ &=\max\{d_{G_{1}}(x,z)+d_{G_{1}}(y,t),d_{G_{1}}(z,y)+d_{G_{1}}(x,t)\}+2\delta_{G_{2},\infty}+8\eta.\end{split} \tag{19}\]

Therefore, \(\delta_{G_{1},\infty}\leq\delta_{G_{2},\infty}+4\eta\). By the same argument, swapping the roles of \(G_{1}\) and \(G_{2}\), we have \(\delta_{G_{2},\infty}\leq\delta_{G_{1},\infty}+4\eta\). Therefore \(|\delta_{G_{1},\infty}-\delta_{G_{2},\infty}|\leq 4\eta\) and \(\delta_{G,\infty}\) is Lipschitz continuous w.r.t. \(G\).

The proof of the continuity of \(\delta_{G,1}\) is more involved. Consider \(G_{1}\) and \(G_{2}\) in \(\mathcal{G}_{\epsilon}\).
Let \(f_{1},f_{2},(M,d),\eta\) be as earlier and assume \(\eta\ll\epsilon\), for example, \(\eta=\alpha\epsilon\) where \(\alpha\) is smaller than all the numerical constants in the rest of the proof. We adopt the following convention: for any non-vertex point of a graph, its degree is \(2\). By subdividing the edges of \(G_{1}\) and \(G_{2}\) if necessary, we may assume that the length of each edge \(e\) in \(E_{1}\) or \(E_{2}\) satisfies \(\epsilon/2\leq e<\epsilon\). As a consequence, for \((u,v)\) in \(E_{1}\) (resp. \(E_{2}\)), \(d_{G_{1}}(u,v)\) (resp. \(d_{G_{2}}(u,v)\)) is the same as the length of \((u,v)\).

We define a map \(\phi:G_{1}\to G_{2}\) as follows. For \(v\in G_{1}\), there is a \(v^{\prime}\) in \(G_{2}\) such that \(d(f_{1}(v),f_{2}(v^{\prime}))\leq\eta\). Then we set \(\phi(v)=v^{\prime}\). The map \(\phi\) is injective on the vertex set \(V_{1}\). Indeed, for \(u\neq v\in V_{1}\), \(d_{G_{1}}(u,v)\geq\epsilon/2\) and hence \(d_{G_{2}}(\phi(u),\phi(v))\geq\epsilon/2-2\eta>0\). The strategy is to modify \(\phi\) by a small perturbation such that the resulting function \(\psi:G_{1}\to G_{2}\) is a homeomorphism that is almost an isometry.

For \(v\in V_{1}\), let \(N_{v}\) be the \(5\eta\) neighborhood of \(v\). It is a star graph and its number of branches is the same as the degree of \(v\), say \(k\). Let \(v_{1},\ldots,v_{k}\) be the endpoints of \(N_{v}\). The convex hull (of shortest paths) \(C_{v}\) of \(\{\phi(v_{1}),\ldots,\phi(v_{k})\}\) in \(G_{2}\) is also a star graph. This is because \(C_{v}\) is contained in the \(7\eta\) neighborhood of \(\phi(v)\) and it contains at most \(1\) vertex in \(V_{2}\). We claim that \(C_{v}\) has the same number of branches as \(N_{v}\). First of all, \(C_{v}\) cannot have fewer branches. For otherwise, there is a \(\phi(v_{i})\) in the path connecting \(\phi(v)\) and \(\phi(v_{j})\) for some \(j\neq i\). Hence,

\[d_{G_{2}}(\phi(v_{i}),\phi(v_{j}))\leq d_{G_{2}}(\phi(v),\phi(v_{j}))\leq 7\eta<10\eta-2\eta=d_{G_{1}}(v_{i},v_{j})-2\eta.\]

This is a contradiction with the property of \(\phi\). It cannot have more branches than \(k\) as it is the convex hull of at most \(k\) points.

We next consider different cases for \(k\). For \(k\neq 2\), as \(C_{v}\) is a star graph, it has a unique node \(v^{\prime}\) with degree \(k\) (in \(C_{v}\)), and \(d_{G_{2}}(v^{\prime},\phi(v_{j}))>0\) for \(1\leq j\leq k\). We claim that \(v^{\prime}\) has degree exactly \(k\) in \(G_{2}\). Suppose on the contrary that its degree in \(G_{2}\) is larger than \(k\). Then there is a branch not contained in \(C_{v}\). Let \(w^{\prime}\) be a node on the new branch such that \(6\eta\leq d_{G_{2}}(w^{\prime},\phi(v))\leq 7\eta\). Moreover, there is a node \(w\) in \(N_{v}\) such that \(4\eta\leq d_{G_{1}}(w,v)\leq 9\eta\) and \(w^{\prime}=\phi(w)\). Moreover, \(w\) is on the branch containing \(v_{j}\) for some \(j\), and hence \(d_{G_{1}}(w,v_{j})\leq 4\eta\). Therefore,

\[d_{G_{1}}(w,v_{j})\leq 6\eta-2\eta<d_{G_{2}}(v^{\prime},\phi(v_{j}))+d_{G_{2}}(w^{\prime},v^{\prime})-2\eta=d_{G_{2}}(\phi(w),\phi(v_{j}))-2\eta,\]

which is a contradiction. In this case, we define \(\psi(v)=v^{\prime}\in G_{2}\). If \(k=2\), i.e., when \(N_{v}\) is a path, by a similar argument, we have that \(C_{v}\) is a path. We set \(\psi(v)=\phi(v)\). An illustration is given in Fig. 7.

Fig. 7: Illustration of \(\psi\).
For each \(v\in V_{1}\), we now enlarge the neighborhood and consider its \(\epsilon/6\)-neighborhood \(N_{v}^{\prime}\). It does not contain another vertex and hence is also a star graph. Moreover, if \(v\neq u\in V_{1}\), then \(N_{v}^{\prime}\cap N_{u}^{\prime}=\emptyset\), for otherwise \(d_{G_{1}}(u,v)\leq\epsilon/3\), which is impossible. We may similarly consider the \(\epsilon/6\)-neighborhoods \(C_{u}^{\prime},C_{v}^{\prime}\) of \(\psi(u)\) and \(\psi(v)\). Neither \(C_{u}^{\prime}\) nor \(C_{v}^{\prime}\) contains any vertex in \(V_{2}\) with degree \(\neq 2\). As \(N_{v}^{\prime}\) and \(C_{v}^{\prime}\) are star graphs with the same number of branches, there is an isometry (also denoted by \(\psi\)) from \(N_{v}^{\prime}\) to \(C_{v}^{\prime}\) such that \(d_{G_{2}}(\psi(w),\phi(w))\leq 2\eta\). By the disjointness of the \(\epsilon/6\) neighborhoods, we may combine all the maps above together to obtain \(\psi:\cup_{v\in V_{1}}N_{v}^{\prime}\to\cup_{v\in V_{1}}C_{v}^{\prime}\).

For the rest of \(G_{1}\), consider any edge \((u,v)\in E_{1}\). Without loss of generality, let \(u_{1}\) and \(v_{1}\) be the leaves of \(N_{u}^{\prime}\) and \(N_{v}^{\prime}\) contained in \((u,v)\). We claim that the shortest open path connecting \(\psi(u_{1})\) and \(\psi(v_{1})\) is disjoint from \(\cup_{v\in V_{1}}C_{v}^{\prime}\). For otherwise, \(d_{G_{1}}(u_{1},v_{1})\leq 2\epsilon/3\), while \(d_{G_{2}}(\phi(u_{1}),\phi(v_{1}))\geq d_{G_{2}}(\psi(u_{1}),\psi(v_{1}))-4\eta\geq\epsilon/2+2\epsilon/6-4\eta\). Therefore, \(2\epsilon/3+2\eta\geq 5\epsilon/6-4\eta\), which is impossible as \(\eta\ll\epsilon\). Let \(P_{u,v}\) and \(Q_{u,v}\) be the shortest paths connecting \(u_{1},v_{1}\) and \(\psi(u_{1}),\psi(v_{1})\) respectively (illustrated in Fig. 8). Then the lengths of \(P_{u,v}\) and \(Q_{u,v}\) differ by at most \(4\eta\). We may further extend \(\psi:P_{u,v}\to Q_{u,v}\) by a linear scaling such that \(d_{G_{2}}(\psi(w),\phi(w))\leq 3\eta\) for \(w\in P_{u,v}\). For different edges \((u,v),(u^{\prime},v^{\prime})\), it is apparent that \(Q_{u,v}\) and \(Q_{u^{\prime},v^{\prime}}\) are disjoint, as the minimal distance between points on \(P_{u,v}\) and \(P_{u^{\prime},v^{\prime}}\) is at least \(\epsilon/3\). Therefore, we obtain a continuous injection \(\psi:G_{1}\to G_{2}\), which maps homeomorphically onto its image.

We claim that \(\psi\) is onto. If not, there is a vertex \(v^{\prime}\in V_{2}\) that is not in \(\psi(V_{1})\) but has a neighboring vertex \(u^{\prime}=\psi(u)\). However, this implies that the degree of \(u^{\prime}\) is strictly larger than that of \(u\), which is impossible as we have shown. In summary, \(\psi:G_{1}\to G_{2}\) is a homeomorphism such that \(|d_{G_{1}}(u,v)-d_{G_{2}}(\psi(u),\psi(v))|\leq 6\eta\) for any \(u,v\in G_{1}\). Moreover, \(\psi\) is piecewise linear, whose gradient \(\psi^{\prime}\) is \(1\) in the interior of \(N_{v}^{\prime},v\in V_{1}\), and satisfies

\[\frac{\frac{\epsilon}{6}-6\eta}{\frac{\epsilon}{6}}\leq\psi^{\prime}(w)\leq\frac{\frac{\epsilon}{6}+6\eta}{\frac{\epsilon}{6}}, \tag{20}\]

for \(w\) contained in the interior of some \(P_{u,v},(u,v)\in E_{1}\).

We are ready to estimate \(|\delta_{G_{1},1}-\delta_{G_{2},1}|\). Let \(|G_{i}|\) be the total edge weight of \(G_{i},i=1,2\). For convenience, we denote a typical tuple \((u,v,w,t)\in G_{1}^{4}\) as a vector \(\mathbf{v}\), and \((\psi(u),\psi(v),\psi(w),\psi(t))\) by \(\boldsymbol{\psi}(\mathbf{v})\).
The map \(\boldsymbol{\psi}:G_{1}^{4}\to G_{2}^{4},\mathbf{v}\mapsto\boldsymbol{\psi}(\mathbf{v})\) inherits the properties of its counterpart \(\psi\), which is a piecewise linear homeomorphism. In particular, its Jacobian \(J(\mathbf{v})\) is defined almost everywhere. Using Definition 2, we have:

\[\begin{split}|\delta_{G_{1},1}-\delta_{G_{2},1}|&=\Bigg{|}\int_{\mathbf{v}\in G_{1}^{4}}|G_{1}|^{-4}\tau_{G_{1}}(\mathbf{v})\,\mathrm{d}\mathbf{v}-\int_{\mathbf{v}\in G_{1}^{4}}|G_{2}|^{-4}J(\mathbf{v})\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}\,\mathrm{d}\mathbf{v}\Bigg{|}\\ &\leq\sup_{\mathbf{v}\in G_{1}^{4}}\Bigg{|}\tau_{G_{1}}(\mathbf{v})-\frac{|G_{1}|^{4}}{|G_{2}|^{4}}J(\mathbf{v})\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}\Bigg{|}.\end{split} \tag{21}\]

Similar to (19), we estimate

\[\sup_{\mathbf{v}\in G_{1}^{4}}|\tau_{G_{1}}(\mathbf{v})-\tau_{G_{2}}\big{(}\boldsymbol{\psi}(\mathbf{v})\big{)}|\leq 24\eta. \tag{22}\]

Moreover, we have seen in the proof that \(\psi\) can only have distortion when restricted to \(P_{u,v}\) for \((u,v)\in E_{1}\). As

\[\frac{\frac{2\epsilon}{3}-6\eta}{\frac{2\epsilon}{3}}\leq|P_{u,v}|/|Q_{u,v}|\leq\frac{\frac{2\epsilon}{3}+6\eta}{\frac{2\epsilon}{3}},\]

the same bounds hold for \(|G_{1}|/|G_{2}|\). Both the upper and lower bounds can be arbitrarily close to \(1\) if \(\eta\) is small enough. Similarly, by (20), \(J(\mathbf{v})\), as a fourth power of \(\psi^{\prime}\), can also be made arbitrarily close to \(1\). In conjunction with (21) and (22), \(|\delta_{G_{1},1}-\delta_{G_{2},1}|\) can be arbitrarily small if \(\eta\) is chosen to be small enough. This proves that \(\delta_{G,1}\) is continuous in \(G\).
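As a concrete companion to the four-point inequality manipulated in (19), here is a brute-force sketch of \(\delta_{G,\infty}\) for a small unweighted graph. It assumes the standard four-point formulation (half the gap between the largest and second-largest of the three pairwise distance sums), and the \(O(n^{4})\) enumeration is only feasible for small graphs:

```python
import itertools
import networkx as nx

def four_point_delta(G: nx.Graph) -> float:
    """delta_{G, infty}: over all 4-tuples, half the gap between the largest
    and second-largest of the three pairwise shortest-path distance sums."""
    d = dict(nx.all_pairs_shortest_path_length(G))  # assumes G is connected
    delta = 0.0
    for x, y, z, t in itertools.combinations(G.nodes, 4):
        sums = sorted([d[x][y] + d[z][t],
                       d[x][z] + d[y][t],
                       d[x][t] + d[y][z]])
        delta = max(delta, (sums[2] - sums[1]) / 2.0)
    return delta

# A tree is 0-hyperbolic, while a long cycle is far from hyperbolic:
print(four_point_delta(nx.balanced_tree(2, 4)))  # 0.0
print(four_point_delta(nx.cycle_graph(12)))      # > 0
```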
2307.16220
Optimizing the Neural Network Training for OCR Error Correction of Historical Hebrew Texts
Over the past few decades, large archives of paper-based documents such as books and newspapers have been digitized using Optical Character Recognition. This technology is error-prone, especially for historical documents. To correct OCR errors, post-processing algorithms have been proposed based on natural language analysis and machine learning techniques such as neural networks. Neural network's disadvantage is the vast amount of manually labeled data required for training, which is often unavailable. This paper proposes an innovative method for training a light-weight neural network for Hebrew OCR post-correction using significantly less manually created data. The main research goal is to develop a method for automatically generating language and task-specific training data to improve the neural network results for OCR post-correction, and to investigate which type of dataset is the most effective for OCR post-correction of historical documents. To this end, a series of experiments using several datasets was conducted. The evaluation corpus was based on Hebrew newspapers from the JPress project. An analysis of historical OCRed newspapers was done to learn common language and corpus-specific OCR errors. We found that training the network using the proposed method is more effective than using randomly generated errors. The results also show that the performance of the neural network for OCR post-correction strongly depends on the genre and area of the training data. Moreover, neural networks that were trained with the proposed method outperform other state-of-the-art neural networks for OCR post-correction and complex spellcheckers. These results may have practical implications for many digital humanities projects.
Omri Suissa, Avshalom Elmalech, Maayan Zhitomirsky-Geffet
2023-07-30T12:59:06Z
http://arxiv.org/abs/2307.16220v1
# Optimizing the Neural Network Training for OCR Error Correction of Historical Hebrew Texts ###### Abstract Over the past few decades, large archives of paper-based documents such as books and newspapers have been digitized using Optical Character Recognition. This technology is error-prone, especially for historical documents. To correct OCR errors, post-processing algorithms have been proposed based on natural language analysis and machine learning techniques such as neural networks. Neural network's disadvantage is the vast amount of manually labeled data required for training, which is often unavailable. This paper proposes an innovative method for training a light-weight neural network for Hebrew OCR post-correction using significantly less manually created data. The main research goal is to develop a method for automatically generating language and task-specific training data to improve the neural network results for OCR post-correction, and to investigate which type of dataset is the most effective for OCR post-correction of historical documents. To this end, a series of experiments using several datasets was conducted. The evaluation corpus was based on Hebrew newspapers from the JPress project. An analysis of historical OCRed newspapers was done to learn common language and corpus-specific OCR errors. We found that training the network using the proposed method is more effective than using randomly generated errors. The results also show that the performance of the neural network for OCR post-correction strongly depends on the genre and area of the training data. Moreover, neural networks that were trained with the proposed method outperform other state-of-the-art neural networks for OCR post-correction and complex spellcheckers. These results may have practical implications for many digital humanities projects. OCR Post-correction, Neural Networks, Hebrew Historical Newspapers, Digital Humanities. ## 1 Introduction Over the last few decades, massive digitization of historical document collections has been performed using OCR techniques. As a result, large digital repositories have been created, e.g., the Library of Congress's historical digital collection [20] and the British Newspaper Archive [15] with various discovery tools (e.g., [6]). Even commercial enterprises have initiated large-scale OCR projects like Google Books [16]. An OCR algorithm processes a high-resolution image of the resource (e.g., a book or newspaper page) and converts it into text. Unfortunately, OCR output for historical documents is often inaccurate. OCR errors, sometimes called spelling mistakes, come in several forms: insertions, deletions, substitutions, transposition of characters, splitting and combining of words [11]. Digitization is essential for preservation and increasing the accessibility and research of cultural heritage. Thus, in many digital humanities projects which use digitized historical collections, there is a need to search and automatically analyze the text of the documents. However, OCR errors undermine the research and preservation efforts. Therefore, improving the quality of the OCR technology has recently become a critical task. Numerous studies applied machine learning techniques to correct OCR errors [1, 10]. One of the most effective machine learning approaches is deep learning based on multi-layer neural networks, which have been successfully applied in many document processing tasks, including the spellchecking for modern texts [10]. 
However, the utilization of neural networks for OCR error correction in historical documents is still underexplored in previous research [1]. In particular, there is no effective neural network model available for fixing OCR errors in historical Hebrew newspapers. Hence, the primary goal of this research is to develop an effective methodology for designing an optimized neural network for OCR post-correction of Hebrew historical texts with a minimal amount of manually created training data. A neural network is a compositional model built from "neurons" arranged in "layers". Each neuron gets an input, performs a mathematical calculation, and transfers its result (output) to other neurons. The first layer receives the task's input, which is transferred through the network, and the last layer's output is the predicted result of the network [13, 14]. The main advantage of neural networks is their ability to automatically calculate the optimal representative feature set for the given task rather than relying on manually selected features. As a baseline of the study, we used the neural network model from Ghosh & Kristensson [3] that was designed for OCR post-correction. This network was based on the Gated Recurrent Units (GRU) [2] architecture. We also tested the Long Short-Term Memory (LSTM) [5] architecture, which was found effective in various NLP (natural language processing) tasks [9]. To build an optimal model for a specified task, a neural network has to be trained on a certain dataset for which both the input (OCRed text) and the target data (correct golden standard text) are provided. In this study, we investigated the influence of the training dataset characteristics on the network's performance. In particular, we experimented with different types of training datasets from various genres (secular literature vs. the Bible) and historical periods (from the last two centuries, ancient and modern), as well as with different types of OCR errors (random OCR errors vs. language- and corpus-specific OCR errors). Finally, we compared and analyzed the accuracy of the obtained networks in OCR error correction of Hebrew historical newspapers from the JPress corpus [21].

## 2 Methodology

### Dataset Generation

The evaluation dataset of the study (JP_CE) was created from 150 OCRed historical Hebrew newspaper articles randomly selected from JPress - the most extensive historical Hebrew newspapers collection, dated 1800-2015 [17]. The articles included OCR errors, which were manually fixed by 75 students. The students' corrections were double-checked by an expert to create a high-quality golden standard corpus. This dataset, comprising the original and corrected versions of the above 150 JPress articles, was used to evaluate the networks' performance. Next, four different training datasets were generated as follows. Each dataset comprised two versions of the same texts - the artificially created OCRed text and its golden standard version. Two datasets were based on texts from the Ben Yehuda Project [18] (the Hebrew equivalent of the Gutenberg project [19], comprising secular Hebrew literature mostly from the last two centuries and the Middle Ages), and two others consisted of the Hebrew Bible text. Both the Ben-Yehuda and Bible texts were typed manually and are thus considered correct. Each of them belongs to a different time period and genre, while Ben-Yehuda's period (partially) overlaps with that of the JPress corpus.
To create training sets with OCR errors, we intentionally inserted errors into each of the above corpora (Ben-Yehuda and the Bible) using two different methods. The first one was a random error generation procedure [11, 4], in which randomly chosen characters in each line of the text were removed, replaced (with other randomly chosen characters), or inserted at a randomly selected position. As a result, the BYP and BIBLE datasets were created (as shown in Table 1). The alternative approach was to insert language- and corpus-specific OCR errors, automatically learned from the JPress newspaper collection, in addition to the random error generation. The pseudo-code of the error generation algorithm is displayed in Figure 1. As can be observed from Figure 1, first, the algorithm generates some language- and corpus-independent types of errors, such as the removal and insertion of characters and swapping between two consecutive characters at random positions. Next, the most common JPress-specific OCR errors are added according to their relative frequency of occurrence in the corpus. To learn the most common character confusion pairs, 70% of the original JP_CE corpus and its fixed golden standard version were compared using the Needleman-Wunsch alignment algorithm [8]. The most common OCR confusion errors, along with their frequencies in JP_CE, are shown in Table 2. The outcome of this method was the BYP-HEB and BIBLE-HEB datasets.

\begin{table}
\begin{tabular}{l l l l}
\hline
Dataset Name & Input Corpus & Target Golden Standard Corpus & Generation Method \\
\hline
JP\_CE & JPress - OCRed historical newspapers & Fixed JPress articles & Manually fixed \\
BYP & The Ben Yehuda Project with random OCR errors & The Ben Yehuda Project - books & Automatically inserted errors \\
BYP-HEB & The Ben Yehuda Project with Hebrew JPress-specific OCR errors & The Ben Yehuda Project - books & Automatically inserted errors \\
BIBLE & The Bible with random OCR errors & The Hebrew Bible from sefaria.org.il & Automatically inserted errors \\
BIBLE-HEB & The Bible with Hebrew JPress-specific OCR errors & The Hebrew Bible & Automatically inserted errors \\
\hline
\end{tabular}
\end{table}
Table 1: The study's datasets

Table 2: Common OCR errors in Hebrew historical newspapers in JPress (each row pairs an erroneous Hebrew character with its correct replacement; the observed frequencies are 499, 306, 256, 210, 207, 194, 162, and 162).

Figure 2 summarizes the proposed approach for constructing the neural network for OCR error correction.

Figure 1: Random and JPress-specific OCR error generation algorithm.

Figure 2: The study's methodology diagram.
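Since Figure 1 is available only as an image, the following Python sketch restates its logic; the error probabilities and the confusion table below are placeholders, not the values learned from JPress:

```python
import random

# Placeholder confusion table: correct character -> [(error character, weight)],
# in practice learned from the aligned JP_CE corpus (cf. Table 2).
CONFUSIONS = {"a": [("o", 3), ("e", 1)]}

def inject_ocr_errors(line: str, p_random: float = 0.02, p_confuse: float = 0.05) -> str:
    """Insert random (deletion/insertion/transposition) and corpus-specific
    (confusion-pair) OCR errors into one clean, non-empty line of text."""
    out = []
    for ch in line:
        r = random.random()
        if r < p_random / 3:                 # random deletion
            continue
        if r < 2 * p_random / 3:             # random insertion before ch
            out.append(random.choice(line))
        if ch in CONFUSIONS and random.random() < p_confuse:
            chars, weights = zip(*CONFUSIONS[ch])
            ch = random.choices(chars, weights=weights, k=1)[0]  # learned confusion
        out.append(ch)
    if len(out) > 1 and random.random() < p_random:  # random transposition
        i = random.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]
    return "".join(out)
```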
### Evaluation Measures

To assess the quality of the results, two evaluation measures were used: 1) the character-based accuracy increase, and 2) the word-based overall accuracy of the text. The character-based increase in the text's accuracy is computed as the percentage of the errors fixed by the network out of the total number of OCR errors in the input text. The number of the network's corrections is calculated as the difference between Levenshtein's minimal edit distance [7], denoted as lev, of the input OCRed text from the (correct) golden standard version of the text, GS, and the minimal edit distance of the fixed text, Fixed, (after the network's corrections) from the golden standard text. The initial number of errors in the OCRed text is computed as the minimal edit distance between the OCRed text and the golden standard text. If a network has inserted more errors than it has fixed, the accuracy increase value is set to 0. More formally, we define acc-increase as follows:

\[acc\text{-}increase=\begin{cases}\frac{lev_{GS,OCRed}-lev_{GS,Fixed}}{lev_{GS,OCRed}}\times 100,&lev_{GS,OCRed}\geq lev_{GS,Fixed}\\ 0,&\text{otherwise.}\end{cases} \tag{1}\]

To estimate the accuracy of the given text at the word level, the Needleman-Wunsch alignment algorithm [8] was applied to compare the evaluated text with its golden standard version. Then, the output of the alignment was processed to split the text into words using a standard set of delimiters. The word-based accuracy of the text compared to its golden standard version is assessed with the standard word accuracy measure [22]:

\[WACC=\frac{N_{w}-I_{w}-S_{w}-D_{w}}{N_{w}}\times 100 \tag{2}\]

where \(N_{w}\) is the total number of words in the evaluated text, \(S_{w}\) is the number of words in the evaluated text that are substituted with other words in the golden standard version of the text, \(D_{w}\) is the number of words in the evaluated text that are absent from the golden standard text, and \(I_{w}\) is the number of words which occur in the golden standard text but are absent from the evaluated text. The word-based metric is crucial from the user perspective since users comprehend and search texts by whole words.
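For concreteness, both measures can be sketched in a few lines of self-contained Python; the word-level variant below folds \(S_{w}\), \(D_{w}\) and \(I_{w}\) into a single word-level edit-distance count, which approximates (2) up to the alignment details of the Needleman-Wunsch step:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (characters or words)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def acc_increase(gs: str, ocred: str, fixed: str) -> float:
    """Character-based accuracy increase of (1), in percent."""
    before = edit_distance(gs, ocred)
    after = edit_distance(gs, fixed)
    if before == 0 or after > before:
        return 0.0
    return (before - after) / before * 100

def word_accuracy(gs: str, text: str) -> float:
    """Word accuracy in the spirit of (2): 100 * (N_w - errors) / N_w."""
    gs_words, words = gs.split(), text.split()
    n = len(words)
    return (n - edit_distance(words, gs_words)) / n * 100 if n else 0.0
```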
## 3 Results

First, to select the most effective network model for the task, we comparatively evaluated the performance of the baseline GRU network [3] and an LSTM-based model with different hyperparameters. The optimized network was a bidirectional LSTM [12] with 4 layers, a dropout of 0.2, 500 units, an epoch size of 250,000, and a batch size of 256. The technical details of the network optimization procedure are beyond the scope of the paper.

### The Networks' Training and Validation

To train and validate the networks, we divided each of the two datasets (BYP and BYP_HEB) described above into training (80%) and validation (20%) subsets. Then, two different networks were constructed and trained on the training subsets. The results of the networks' validation on the corresponding validation sets are presented in Figure 3.

Figure 3: BYP and BYP_HEB validation accuracy.

As can be observed from Figure 3, the network that was trained and validated on the BYP_HEB dataset achieved higher accuracy (94%) than the network trained and validated on BYP (85%). We concluded that training on the dataset with JPress-specific errors is more effective than training on the dataset with randomly generated errors.

### The Networks' Evaluation

The networks' evaluation was performed by applying the two best networks (trained on BYP-HEB and BIBLE-HEB) to fix the JP_CE (historical newspapers from JPress) dataset. Note that the baseline word-based accuracy of the original evaluation dataset (JP_CE) was 48.984% (i.e., only about 49% of the words were correct before applying the networks). In addition to the two networks trained on historical texts (Ben-Yehuda and the Bible), we evaluated the performance of the state-of-the-art spellcheckers that were implemented by Google and Microsoft as deep neural networks, trained mostly on modern Hebrew texts. Interestingly, neither Google Docs nor Microsoft Word 2019 improved the text's accuracy. Their quality score was about 0%, since they introduced as many errors as they fixed. From an examination of 20% of randomly chosen texts, it seems that these spellcheckers fixed non-real words well, but failed on real words (that do not make sense in the context of the sentence). Non-real words always got a fix, but not always a correct one. The spellcheckers were able to fix the following error types:

* Characters' transposition
* Redundant spacing
* "Dirt" signs (smudges, actual dirt, damaged paper)
* Real word spelling mistakes

The evaluation results are presented in Table 3.

\begin{table}
\begin{tabular}{l c c}
\hline
Network & Character-based Accuracy Increase & Word Accuracy \\
\hline
Neural Network (BYP\_HEB) & 5.406\% & 53.472\% \\
Google Docs spellchecker & \(\sim\)0\% & 41.58\% \\
Microsoft Word spellchecker & \(\sim\)0\% & 41.53\% \\
Neural Network (BIBLE\_HEB) & \(\sim\)0\% & \(\sim\)0\% \\
\hline
\end{tabular}
\end{table}
Table 3: Comparison of all the networks evaluated on JP_CE

The obtained results show the dependency of the network's effectiveness on the time period of the training dataset. When the network learns from a corpus written in a similar period, it achieves positive and much better results (around 5.4% character-based and 4.5% word-based accuracy increase) than networks trained on texts from substantially more distant periods (which demonstrated no or negative change in the accuracy). The best network (BYP_HEB) learned different types of corrections and successfully applied them to historical newspapers, including:

* Fixing spelling mistakes
* Fixing characters' transposition
* Removing redundant spacing
* Adding spacing
* Preserving the names of the entities
* Removing "dirt" signs

However, the majority of the errors were not fixed by the network, and in some cases, it even introduced new errors. This might be explained by genre- and style-driven differences between the training (Ben-Yehuda corpus, literature) and the evaluation datasets (JP_CE, newspaper articles).

## 4 Conclusions

This work introduced a light-weight method to train neural networks for Hebrew OCR error post-correction. As demonstrated in the results section, there is a substantial benefit to generating a language- and period-specific dataset for OCR post-correction. Interestingly, generating only a language-specific dataset using the Bible introduces more errors than corrections. It is similar to a time traveler from the biblical era trying to fix OCR errors of more modern texts. In addition, only 105 manually fixed articles were needed for the error generation algorithm for Hebrew historical newspapers, which is a minimal human effort compared to the vast amount of labeled training data typically required for a neural network.
These results are another step towards creating automated error correction of historical Hebrew OCRed documents and historical-cultural preservation in general. Although the scope of this research was Hebrew, we believe the proposed methodology can be generalized to other languages. Researchers can use these results to reduce the complexity when designing neural networks for OCR post-correction and to improve the OCRed document correction process for many digital humanities projects.
2305.15622
GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint
Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention. While many existing studies improve fairness at the group level, only a few works promote individual fairness, which renders similar outcomes for similar individuals. A desirable framework that promotes individual fairness should (1) balance between fairness and performance, (2) accommodate two commonly-used individual similarity measures (externally annotated and computed from input features), (3) generalize across various GNN models, and (4) be computationally efficient. Unfortunately, none of the prior work achieves all the desirables. In this work, we propose a novel method, GFairHint, which promotes individual fairness in GNNs and achieves all aforementioned desirables. GFairHint learns fairness representations through an auxiliary link prediction task, and then concatenates the representations with the learned node embeddings in original GNNs as a "fairness hint". Through extensive experimental investigations on five real-world graph datasets under three prevalent GNN models covering both individual similarity measures above, GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models, while generating comparable utility results, with much less computational cost compared to the previous state-of-the-art (SoTA) method.
Paiheng Xu, Yuhang Zhou, Bang An, Wei Ai, Furong Huang
2023-05-25T00:03:22Z
http://arxiv.org/abs/2305.15622v1
# GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint

###### Abstract.

Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention. While many existing studies improve fairness at the group level, only a few works promote individual fairness, which renders similar outcomes for similar individuals. A desirable framework that promotes individual fairness should (1) balance between fairness and performance, (2) accommodate two commonly-used individual similarity measures (externally annotated and computed from input features), (3) generalize across various GNN models, and (4) be computationally efficient. Unfortunately, none of the prior work achieves all the desirables. In this work, we propose a novel method, _GFairHint_, which promotes individual fairness in GNNs and achieves all aforementioned desirables. GFairHint learns fairness representations through an auxiliary link prediction task, and then concatenates the representations with the learned node embeddings in original GNNs as a _"fairness hint"_. Through extensive experimental investigations on five real-world graph datasets under three prevalent GNN models covering both individual similarity measures above, GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models, while generating comparable utility results, with much less computational cost compared to the previous state-of-the-art (SoTA) method.

We first construct a fairness graph over the same set of nodes, linking two nodes when they are sufficiently similar under the given similarity measure (e.g., when their dissimilarity is 0 or below a certain threshold). We then learn a fairness representation for each node from the constructed fairness graph via link prediction, where we encourage the model to recover randomly masked edges. The learned fairness representation is then used as a _fairness hint_ to be concatenated with the node embeddings from the original graph, which is trained in parallel to maximize utility with the main GNN model. We feed the concatenated representations to multilayer perceptrons (MLPs) to make fair and accurate predictions. GFairHint focuses on learning an additional fair representation and is compatible with various GNN model designs. It is orthogonal and complementary to a strategy adopted by most previous work (Kang et al., 2018; Li et al., 2019; Li et al., 2019), i.e., adding a fairness regularization term to the training objective. Meanwhile, GFairHint is a lightweight framework. It is more computationally efficient and works especially well with large network datasets in terms of computation cost and both utility and fairness performance, while previous work either takes much longer to train (Kang et al., 2018) or requires large memory allocation and therefore may fail on some large datasets (Li et al., 2019). Furthermore, GFairHint can benefit from these existing methods with fairness-regularized objective functions when integrated together to further improve the performance.

To demonstrate the effectiveness of our proposed method, we conduct extensive empirical evaluations on five datasets for node classification. These datasets apply either continuous similarity measures derived from the input space or binary measures provided by external annotators to quantify the similarity between individuals. Additionally, we experiment with three popular GNN backbone models, resulting in a total of 15 model \(\times\) dataset comparisons.
Our GFairHint framework consistently outperforms other methods in terms of fairness, achieving the best results in 12 out of the 15 comparisons. Furthermore, in 9 out of the 15 comparisons, GFairHint achieves the best utility performance, while in the remaining comparisons, it achieves comparable utility results. We summarize our main contributions as follows:

* **Plug-and-play Framework**: We present GFairHint, a plug-and-play framework for enhancing individual fairness in GNNs. This framework learns a fairness hint through an auxiliary link prediction task. We provide a theoretical analysis on the fairness hint and prove that for any two nodes, the learned hints are individually fair.
* **Satisfied Desiderata**: GFairHint is compatible with two distinct settings for similarity measures among individuals and can achieve comparable accuracy while generating more individually fair predictions. Additionally, the method is computationally efficient and seamlessly integrates with various model designs.
* **Rigorous Experiments**: We empirically show that the proposed method achieves the best fairness results in most comparisons (12/15), with the best utility results in 9/15 comparisons, and comparable utility performance in the other comparisons.

Figure 1. The proposed individual fairness promotion framework, GFairHint. Colored rectangles denote the representations of corresponding nodes (i.e., individuals). GFairHint learns the fairness hint from the fairness graph and concatenates it with the utility node embedding from the original backbone GNN model. Finally, it feeds the concatenated node embedding into an MLP to make fair predictions. The loss function for GFairHint can be a single utility loss (cross-entropy loss) or a combination of utility loss and fairness loss (e.g., ranking-based loss).

## 2. Related Work

Fairness for Graph-structured Data. Most previous efforts focus on promoting group fairness in graphs (Kang et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), which encourages the same results across different sensitive groups (e.g., demographics). Another line of research is on counterfactual fairness (Zhou et al., 2017; Zhang et al., 2018), which aims to generate the same prediction results for each individual and its counterfactuals. Few studies address individual fairness in graphs (Zhou et al., 2017; Zhang et al., 2018). Individual fairness intends to render similar predictions to similar individuals for a specific task. Kang et al. (2017) propose a framework called InFoRM (Individual Fairness on Graph Mining) to debias a graph mining pipeline from the input graph (preprocessing), the mining model (processing), and the mining results (postprocessing), but not specifically for GNN models. Song et al. (2018) identify a new challenge to enforce individual fairness informed by group equality. Promoting group and individual fairness at the same time requires group information, which is not available in some real-life scenarios, such as the academic networks studied in this paper. The work closest to ours is REDRESS (Kang et al., 2017). They propose to model individual fairness from a ranking-based perspective and design a ranking-based loss accordingly.
However, their method does not generalize well when the similarity measure is coarse, especially binary, because calculating the ranking-based loss requires ranking the individuals based on the similarity values, but with binary similarity measures (i.e., many individuals on the same similarity level) the rankings are not as informative as in the cases with continuous similarity measures. Moreover, despite their efforts to reduce the computation cost and the effectiveness of the ranking-based loss, a high computational cost is unavoidable when computing the rank.

Individual Fairness. There are other works focusing on individual fairness, but not specifically for graph-structured data. The definition of individual fairness, similar predictions for similar individuals, can be formulated by the Lipschitz constraint, which inspires works such as Pairwise Fair Representation (PFR) (Zhou et al., 2017) to learn fair representations as input. This can be considered a pre-processing method for GNN models. However, the transformation from the original input features to the fair representation may distort the original information in the input features and cause a detriment to the utility performance of the model (Kang et al., 2017). Because it is computationally difficult to enforce the Lipschitz constraint, Yurochkin and Sun (Yurochkin and Sun, 2018) propose an in-process method with a lifted constraint and Petersen et al. (Petersen et al., 2019) propose a post-processing method with Laplacian smoothing. In this work, we compare with REDRESS (Kang et al., 2017) and adapt PFR (Zhou et al., 2017) and InFoRM (Kang et al., 2017) to work with GNN models. We do not compare with Dwork et al. (Zhou et al., 2017) because their contribution is mainly conceptual and it was not included as a baseline in previous works (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2018).

## 3. Proposed Method - GFairHint

### Problem Formulation

The generic definition of individual fairness is that _individuals who are similar should have similar outcomes_ (Zhou et al., 2017). We can formulate the similarity between individuals with an **oracle similarity matrix** \(\mathcal{S}_{F}\), where the value of the \((i,j)\)-th entry is the similarity between the nodes \(i\) and \(j\). We follow the same setting as previous works (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2018), where the oracle similarity matrix \(\mathcal{S}_{F}\) is given apriori (annotated by external experts or calculated from input features). Depending on the definition of the similarity measure, the entries of \(\mathcal{S}_{F}\) can be continuous or binary. For graph data and GNN models, we denote the **outcome similarity matrix** as \(\mathcal{S}_{\hat{Y}}\) with the predicted outcome \(\hat{Y}\) from the GNN model, where the \((i,j)\)-th entry is the similarity between the embeddings \(z_{i}\) and \(z_{j}\) of the nodes \(i\) and \(j\) from the last model layer. Inspired by the individual fairness definition from previous work (Zhou et al., 2017; Zhang et al., 2018), we define the fair condition of a GNN model as follows:

Definition 1.: _Let \(x_{i}\) and \(x_{j}\) be two nodes in a graph. The outputs of the model \(f(x_{i})\) and \(f(x_{j})\) are **individually fair** w.r.t. the node similarity measure \(S\) and output distance measure \(D\) if the following condition holds:_

\[D(f(x_{i}),f(x_{j}))\leq\frac{\epsilon}{S(x_{i},x_{j})}\quad\forall i,j=1,\ldots,n \tag{1}\]

_where \(\epsilon>0\) is a constant for fairness tolerance and \(0<S(x_{i},x_{j})<1\) if \(i\neq j\);
\(n\) is the number of nodes in a graph. We define \(\frac{\epsilon}{S(x_{i},x_{j})}\) as the **fairness bound**._

With this definition, our goal of promoting individual fairness can be achieved by narrowing the difference between \(\mathcal{S}_{F}\) and \(\mathcal{S}_{\hat{Y}}\).

### Overall Structure

To achieve a good balance between utility and fairness, GNN models need to utilize input information from both sides. We first apply a representation learning method to extract the fairness information from the input and add the learned fairness representation to the original GNN model to promote individual fairness. Our proposed **GFairHint** framework, as shown in Figure 1, consists of three steps. First (Section 3.3), we construct an unweighted fairness graph, \(\mathcal{G}_{F}\), with the same set of nodes as in the original input graph. The undirected edges of \(\mathcal{G}_{F}\) represent that two nodes have a high similarity value in \(\mathcal{S}_{F}\). Next (Section 3.4), we obtain the individual fairness hint through a representation learning method that learns fairness representations for the nodes in \(\mathcal{G}_{F}\). Specifically, the representation learning model predicts whether two nodes in \(\mathcal{G}_{F}\) have an edge through a GNN link prediction model, whose final hidden layer output is used as the fairness hint. Finally (Section 3.5), we concatenate the node fairness hint with the learned node embedding of the GNN for the original task and use the joint embedding for further training.

### Construction of Fairness Graph

To extract fairness information for each individual via a link-prediction-based representation learning method, we first construct a fairness graph \(\mathcal{G}_{F}\) based on the apriori oracle similarity matrix \(\mathcal{S}_{F}\). Note that \(\mathcal{S}_{F}\) can be given by incorporating various types of data sources, and our fairness graph construction complies with the data sources of \(\mathcal{S}_{F}\). Here, we show two commonly utilized data sources, i.e., external annotation and input features.

Oracle Similarity Matrix based on External Annotation. Constructing \(\mathcal{S}_{F}\) is straightforward when external pairwise judgments are available on whether two individuals \(i,j\) should be treated similarly given a specific task (Zhou et al., 2017). The entry \(\mathcal{S}_{ij}^{F}\) in \(\mathcal{S}_{F}\) is \(1\) when individuals \(i\) and \(j\) are labeled as similar, and \(0\) otherwise. In this case, \(\mathcal{S}_{F}\) is the adjacency matrix of the fairness graph \(\mathcal{G}_{F}\). An alternative type of judgment is to map individuals into binary equivalence classes. A pair of individuals \(i,j\) is linked in the fairness graph \(\mathcal{G}_{F}\) only if they belong to the same class.

We extract the fairness hint \(v_{i}^{f}\) from the learned fairness representation model before training the original GNN model, and the fairness hint is kept fixed during the optimization process. For node classification tasks, we apply _softmax_ to the final node embedding \(z_{i}\in\mathbb{R}^{c}\) to obtain the predictions, where \(c\) is the number of classes, and apply \(\mathcal{L}_{utility}\) to optimize the parameters in the backbone GNN models.
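A minimal sketch of this last step, i.e., concatenating a pre-computed, frozen fairness hint with the backbone embedding before an MLP classifier (the layer sizes and helper names here are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

def fairness_edges(S_F: torch.Tensor) -> torch.Tensor:
    """Edge index of the unweighted fairness graph from a binary oracle
    similarity matrix: one undirected edge per similar pair."""
    idx = (S_F > 0).nonzero(as_tuple=False)
    return idx[idx[:, 0] < idx[:, 1]].t()   # keep each pair once

class GFairHintHead(nn.Module):
    """Fuse the utility embedding with the fairness hint via an MLP."""

    def __init__(self, utility_dim: int, hint_dim: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(utility_dim + hint_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, utility_emb: torch.Tensor, fairness_hint: torch.Tensor) -> torch.Tensor:
        # the hint comes from the link-prediction model and stays fixed,
        # hence detach(); softmax / cross-entropy are applied outside
        z = torch.cat([utility_emb, fairness_hint.detach()], dim=-1)
        return self.mlp(z)
```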
We empirically show that the fairness hint is fully involved in the model's decision-making process via a gradient-based interpretability method in Section 5.2, and the MLP layers can learn to balance between utility and the fairness hint when we integrate the loss function with the fairness regularization that we introduce next.

### Extension: Integration with Fairness Loss

Our GFairHint framework can simply utilize the utility loss \(\mathcal{L}_{utility}\) as the final loss. Moreover, we can further encourage the model to learn fairness information by adding a "fairness" loss to the final loss function. Previous fairness promotion methods have designed various fairness losses to enforce a good balance between utility and fairness (Garfinkel, 2015; Garfinkel, 2015), which are complementary to our fairness hint. In our work, we integrate the ranking-based fairness loss in REDRESS (Garfinkel, 2015) into our framework GFairHint. The objective of the loss is to minimize the difference between the oracle similarity matrix \(\mathcal{S}_{F}\) and the outcome similarity matrix \(\mathcal{S}_{\hat{Y}}\). For each node, when \(\mathcal{S}_{F}\) is based on input features, we can obtain two top-k ranking lists derived from \(\mathcal{S}_{F}\) and \(\mathcal{S}_{\hat{Y}}\) respectively. The fairness loss of node \(i\) can be calculated as

\[\hat{P}_{j,m}(i)=\frac{1}{1+e^{-(\hat{s}_{i,j}-\hat{s}_{i,m})}} \tag{5}\]
\[P_{j,m}(i)=\begin{cases}1&\text{if }s_{i,j}>s_{i,m}\\ 0.5&\text{if }s_{i,j}=s_{i,m}\\ 0&\text{if }s_{i,j}<s_{i,m}\end{cases} \tag{6}\]
\[\mathcal{L}_{j,m}(i)=-P_{j,m}\log\hat{P}_{j,m}-(1-P_{j,m})\log(1-\hat{P}_{j,m}) \tag{7}\]
\[\mathcal{L}_{fairness}(i)=\sum_{j,m}\mathcal{L}_{j,m}(i)\,|\Delta_{NDCG@k}|_{j,m} \tag{8}\]

where \(|\Delta_{NDCG@k}|_{j,m}\) is the change in the similarity metric (NDCG or ERR (Garfinkel, 2015; Garfinkel, 2015)) between the two top-k ranking lists when swapping nodes \(j\) and \(m\), which are selected from the ranking lists. Specifically, \(s_{i,j}\) and \(\hat{s}_{i,j}\) are the entries from \(\mathcal{S}_{F}\) and \(\mathcal{S}_{\hat{Y}}\) respectively. However, when \(\mathcal{S}_{F}\) comes from external annotations where only pairwise judgments are available, the entries \(S_{ij}^{F}\) in \(\mathcal{S}_{F}\) are binary. Therefore, it is meaningless to calculate the ranking-based loss on the constructed fairness graph \(\mathcal{G}_{F}\) in this case. In this work, to apply REDRESS-related models to fairness datasets with external annotations, we made an adjustment by replacing \(\mathcal{S}_{F}\) from external annotations with the one derived from input features. Details will be discussed in Section 4.2. The total fairness loss \(\mathcal{L}_{fairness}\) is then calculated as the sum of the fairness loss over all nodes. The final objective combines the fairness loss and the utility loss:

\[\mathcal{L}_{total}=\mathcal{L}_{utility}+\gamma\mathcal{L}_{fairness} \tag{9}\]

where \(\gamma\) is an adjustable hyperparameter. By changing the value of \(\gamma\), we can control the weight of fairness and utility during training according to the task requirements. A detailed discussion is in Section 5.3.

## 4. Experiment Setup

### Dataset Collection

In our work, we focus on the node classification task to evaluate the fairness promotion ability of our proposed GFairHint. We collect five real-world datasets to assess the model performance in multiple domains (see statistics in Table 1). 
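Before moving to the individual datasets, the ranking-based loss of Equations (5)-(8) above can be made concrete with a minimal PyTorch sketch. This is a simplified illustration: the \(|\Delta_{NDCG@k}|\) weighting of Equation (8) is replaced by a uniform weight to keep the sketch short, and the function name `pairwise_fairness_loss` is hypothetical.

```python
import torch

def pairwise_fairness_loss(s_row, s_hat_row, k=10):
    """Simplified sketch of Eqs. (5)-(8) for a single node i.
    s_row: oracle similarities s_{i,.} from S_F (1-D tensor);
    s_hat_row: outcome similarities from S_Yhat (1-D tensor).
    The |Delta NDCG@k| weighting of Eq. (8) is replaced by a uniform
    weight here to keep the sketch short."""
    topk = torch.topk(s_row, k).indices          # top-k list on the oracle side
    idx_j, idx_m = torch.meshgrid(topk, topk, indexing="ij")
    # Eq. (5): predicted probability that j should rank above m
    p_hat = torch.sigmoid(s_hat_row[idx_j] - s_hat_row[idx_m])
    # Eq. (6): ground-truth preference (1 / 0.5 / 0) from oracle similarities
    p = 0.5 * (torch.sign(s_row[idx_j] - s_row[idx_m]) + 1.0)
    # Eq. (7): pairwise cross-entropy, summed over distinct pairs (Eq. (8))
    eps = 1e-8
    ce = -(p * torch.log(p_hat + eps) + (1 - p) * torch.log(1 - p_hat + eps))
    return ce[idx_j != idx_m].sum()

# Eq. (9): L_total = L_utility + gamma * sum_i pairwise_fairness_loss(...)
```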
Coauthor-CS (CS) and Coauthor-Phy (Phy) are two co-authorship network datasets (Zhu et al., 2017), where each node represents an author, and two nodes are connected if the two authors have published a paper together. ACM is a citation network dataset (Zhu et al., 2017), where each node represents a paper, and an edge denotes a citation relationship. These three datasets (ACM, CS, Phy) are also used in the REDRESS paper as experiment benchmarks (Garfinkel, 2015). In addition, we use another citation network, the OGBN-ArXiv dataset (Krizhevsky et al., 2015), which is several orders of magnitude larger than the ACM, CS, and Phy datasets. For the ACM, CS, and Phy datasets, we follow the preprocessing procedure in REDRESS and use the bag-of-words model to transform the title and abstract of a paper into its node features. We use the pre-split training, validation, and test datasets from the REDRESS paper1. Regarding the ArXiv dataset, we directly use the processed 128-dimensional feature vectors from a pre-trained skip-gram model (Zhu et al., 2017). We then follow the train/validation/test splits from the official release of the Open Graph Benchmark (OGB).2 We repeat the experiments for each model setting twice, because the split of the dataset is fixed by the previous work. Since the citation and co-authorship network datasets do not contain human-annotated similarity, we follow previous work (Garfinkel, 2015) and use the cosine similarities between node features as the entries in \(\mathcal{S}_{F}\). 3

Footnote 1: https://github.com/yushundong/REDRESS/tree/main/node%20classification/data

Footnote 2: https://ogb.stanford.edu/docs/nodeprop/

Footnote 3: We also tested with Euclidean distance. We do not develop or evaluate different individual fairness measures but rather show that our method is compatible with various existing measures.

Additionally, we curate a dataset with external human annotations of individual fairness similarity in the binary setting to demonstrate our framework's compatibility when external annotation is available. The Crime dataset (Zhu et al., 2017) consists of socioeconomic, demographic, and law/police data records for neighborhoods in the US. We follow Lahoti et al. (Lahoti et al., 2017) for most of the preprocessing and introduce additional information on the geographic adjacency of counties4 to form a graph-structured dataset. The nodes are the neighborhoods, and the edges indicate that two neighborhoods reside in the same county or adjacent counties. We have a binary outcome variable for whether the neighborhood is violent and consider the other data records as input features. For the similarity measure of individual fairness, we also follow (Lahoti et al., 2017) to collect human reviews on Crime & Safety for neighborhoods in the U.S. from a neighborhood review website, Niche.5 The judgments are given in the form of 1-star to 5-star ratings by current and past residents of these neighborhoods. We then use aggregated mean ratings to construct the fairness graph as described in Section 3.3, where neighborhoods with the same rating level (e.g., 5 stars) are linked in the fairness graph. As there is no predefined train/validation/test split, we randomly split the dataset and repeat five times for each model setting. 
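For the academic networks, constructing \(\mathcal{S}_{F}\) and the fairness graph from node features can be sketched as below. The top-\(k\) neighbor rule and the function name `build_fairness_graph` are our own simplification of the procedure described in Section 3.3.

```python
import torch
import torch.nn.functional as F

def build_fairness_graph(x, k=10):
    """Sketch: oracle similarity from cosine similarity of node features
    x (n, d), then connect each node to its top-k most similar nodes."""
    x_norm = F.normalize(x, dim=1)
    s_f = x_norm @ x_norm.t()                    # oracle similarity matrix S_F
    masked = s_f - 2.0 * torch.eye(x.size(0))    # exclude self-similarity
    nbrs = masked.topk(k, dim=1).indices         # top-k similar nodes per node
    src = torch.arange(x.size(0)).repeat_interleave(k)
    edge_index = torch.stack([src, nbrs.reshape(-1)])  # PyG-style, 2 x (n*k)
    return s_f, edge_index
```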
### Methods for Comparison

To show the superiority of our proposed framework, we implement the vanilla GNN models and the previous SOTA as baseline models with sensitivity analysis. Note that some existing works (Beng et al., 2019; Liu et al., 2020) for group fairness promotion cannot be used as baseline models because our work focuses on individual fairness. We explain these **baseline methods** below.

**Vanilla**: Vanilla denotes the vanilla GNN models without any individual fairness promotion method.

**PFR**: PFR (Zhou et al., 2019) learns fair representations and can be considered a pre-processing method for GNN models. We adapt the implementation of PFR6 to GNN models by transforming the input features into fairness representations and using the transformed representations as node features in the vanilla GNN models. The PFR method requires computing the Laplacian matrix and the eigenvectors of the oracle similarity matrix \(\mathcal{S}_{F}\) (Zhou et al., 2019), so we have to explicitly store \(\mathcal{S}_{F}\) in memory. Running PFR on the ArXiv dataset with 90,941 nodes therefore causes out-of-memory issues. Since Dong et al. (2018) have already shown REDRESS's superiority over PFR on the ACM, Coauthor-Phy, and Coauthor-CS datasets, we only experiment with PFR on the Crime dataset.

Footnote 6: https://github.com/plahoti-Igtm/PairwiseFairRepresentations

**InFoRM**: InFoRM (Liu et al., 2020) is a framework to promote individual fairness in conventional machine learning tasks on graphs. Similarly, because the previous work's experiments show the better performance of REDRESS than InFoRM on the ACM, CS, and Phy datasets, we only experiment with InFoRM on the Crime and ArXiv datasets by combining its fairness promotion loss with the utility loss of the GNN models.

**REDRESS**: REDRESS is the previous SOTA framework for individual fairness promotion in GNN models (Deng et al., 2019). It formulates conventional individual fairness promotion as a ranking-based optimization problem. By optimizing the ranking-based loss \(\mathcal{L}_{fairness}\) and the utility loss \(\mathcal{L}_{utility}\), REDRESS can maximize utility and promote individual fairness simultaneously. For the implementation of its framework and ranking-based loss, we adapt the codebase released by the authors7.

Footnote 7: https://github.com/yushundong/REDRESS

**REDRESS + MLP**: As mentioned in Section 3.5, after concatenating the utility node embeddings and the fairness hint, our proposed framework GFairHint uses additional MLP layers to process the concatenated embeddings, which increases the model complexity. This variant of REDRESS adds MLP layers of the same size after the GNN models along with the original REDRESS loss for a fair comparison. We use the output of the MLP layers from this variant to calculate the loss and optimize the parameters in the GNN and MLP layers. The REDRESS + MLP model shows the effectiveness of GFairHint without the confounder of model complexity.

**Our methods:** We study the performance of GFairHint and examine its effectiveness in combination with the REDRESS loss:

**GFairHint**: We combine the fairness hint with the utility node embedding and only use the \(\mathcal{L}_{utility}\) loss to update the model parameters. 
**GFairHint + REDRESS**: As described in Section 3.6, we combine the ranking-based loss \(\mathcal{L}_{fairness}\) in REDRESS with the utility loss \(\mathcal{L}_{utility}\) to further encourage the models to learn individual fairness. The only difference between this method and REDRESS + MLP is that GFairHint + REDRESS incorporates the fairness hint.

We note that for the Crime dataset, since the entries of the oracle similarity matrix \(\mathcal{S}_{F}\) are binary (0-1), we cannot calculate the ranking-based loss on the constructed fairness graph \(\mathcal{G}_{F}\). Therefore, we adapt the REDRESS-related methods to calculate the ranking-based loss based on input feature similarity. As a result, for the Crime dataset, REDRESS and REDRESS + MLP do not have any access to the fairness information (i.e., the fairness graph \(\mathcal{G}_{F}\)), while GFairHint + REDRESS gets fairness information only through the fairness hint but not the ranking-based loss.

### Evaluation Metric

We evaluate both the _utility performance_ and the _fairness performance_ of the models. A desired model should achieve the highest fairness performance without sacrificing much, if it must, utility performance. For utility performance, we follow previous work (Deng et al., 2019) and the official OGB leaderboard in using classification accuracy (ACC). As for fairness performance, we use different evaluation metrics in accordance with the two different settings of the oracle similarity, i.e., continuous and binary. For the co-authorship and citation networks (ACM, ArXiv, CS, Phy), the oracle similarity matrix is continuous, where each entry is the input feature similarity. We follow previous work (Deng et al., 2019) in utilizing ERR@K (Chen et al., 2019) and NDCG@K (Liu et al., 2020), where \(k\) is the same as the threshold used to determine edge existence in the fairness graph in Section 3.3 for \(\mathcal{S}_{F}\) based on input features. Higher ERR and NDCG values represent better individual fairness promotion. These two metrics measure the similarity between the ranking lists obtained from the oracle similarity matrix \(\mathcal{S}_{F}\) and the outcome similarity matrix \(\mathcal{S}_{\hat{Y}}\). In the following experiments, we choose \(k=10\) and report the results of NDCG@10 and ERR@10 for all models for comparison. We also provide a sensitivity analysis of different choices of \(k\) on the model performance in Appendix F. For the Crime dataset, where the entries of the oracle similarity matrix are binary, we follow (Zhou et al., 2019) and use **Consistency** as the evaluation metric. It measures the consistency of outcomes between individuals who are similar to each other. The formal definition with respect to a fairness similarity matrix \(S_{F}\) is

\[Consistency=1-\frac{\Sigma_{i}\Sigma_{j}|\hat{y}_{i}-\hat{y}_{j}|\cdot S_{ij}^{F}}{\Sigma_{i}\Sigma_{j}S_{ij}^{F}}\qquad\forall i\neq j.\]

\begin{table} \begin{tabular}{l l c c c} \hline \hline \(\mathcal{S}_{F}\) Type & Dataset & \# Nodes & \# Features & \# Classes \\ \hline \multirow{3}{*}{Input Feature} & Coauthor-CS & 916 & 6,805 & 15 \\ & Coauthor-Phy & 1,724 & 8,415 & 5 \\ & ACM & 824 & 8,337 & 9 \\ & ArXiv & 90,941 & 128 & 40 \\ \hline External & Crime & 1,994 & 122 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1. Statistics of the datasets used for node classification experiments. # Nodes stands for the number of nodes in the training set.

Consistency measures how well the predictions align with the oracle fairness similarity \(S_{F}\). 
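A minimal sketch of the Consistency computation above, assuming binary predictions and a binary similarity matrix with a zero diagonal (variable names are ours):

```python
import torch

def consistency(y_hat, s_f):
    """Consistency: 1 minus the similarity-weighted average disagreement
    between predictions of similar node pairs.
    y_hat: binary predictions (n,); s_f: binary oracle similarity matrix
    (n, n) with zero diagonal."""
    y_hat = y_hat.float()
    disagree = (y_hat.unsqueeze(0) - y_hat.unsqueeze(1)).abs()  # |y_i - y_j|
    return 1.0 - (disagree * s_f).sum() / s_f.sum()
```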
ERR and NDCG measure a similar concept from a ranking perspective.

### Implementation Details

All the backbone GNN models and our auxiliary link prediction models are implemented in the PyTorch framework, specifically with the PyTorch Geometric package (Kingmare et al., 2014; Krizhevsky et al., 2014). For each of our five datasets, we experiment with two backbone GNN settings, small and large model size. For the small model size setting, the number of layers and the dimension of the embeddings in the hidden layers are set to 2 and 16. For the large model size setting, we set these two numbers to 10 and 128 respectively. For all experiments, we fix the values of the hyperparameters \(\gamma\) and \(k\) at 1 and 10 as suggested in the previous work (Krizhevsky et al., 2014), where \(\gamma\) is the weighting factor when integrating the ranking-based loss (Equation 9) and \(k\) is the number of top entries used to calculate the ranking loss and the fairness evaluation metrics NDCG@K and ERR@K. Our learned fairness hint for each node contains individual fairness information, and we expect the fairness hint to help promote fairness in various GNN models. We choose three popular GNN models, GCN (Krizhevsky et al., 2014), GraphSAGE (Krizhevsky et al., 2014), and Graph Attention Networks (GAT) (Krizhevsky et al., 2014), to demonstrate the compatibility of GFairHint with various GNN model designs. Note that we do not need to re-learn the fairness hint for the same dataset even if the backbone model changes. Other details of the implementation are in Appendix A.

## 5. Experimental Results

We aim to answer the questions in the following sections. Section 5.1: How well can GFairHint and its extension method promote individual fairness compared with other baselines with different similarity measures? 
\begin{table} \begin{tabular}{l l l c c c} \hline \hline **Dataset** & **Backbone** & **Method** & **Fairness: NDCG@10** & **Fairness: ERR@10** & **Utility: ACC** \\ \hline \multirow{8}{*}{GCN} & & Vanilla & 75.01 \(\pm\) 0.19 & 91.45 \(\pm\) 0.01 & 70.19 \(\pm\) 0.02 \\ & & InFoRM & 74.66 \(\pm\) 0.05 & 91.47 \(\pm\) 0.09 & 68.86 \(\pm\) 0.68 \\ & & REDRESS & 75.25 \(\pm\) 0.71 & 91.57 \(\pm\) 0.10 & 68.65 \(\pm\) 1.13 \\ & & REDRESS + MLP & 75.01 \(\pm\) 0.26 & 91.47 \(\pm\) 0.01 & 69.85 \(\pm\) 0.27 \\ & & GFairHint & 81.93 \(\pm\) 0.27 & 94.28 \(\pm\) 0.09 & **70.62 \(\pm\) 0.91** \\ & & GFairHint + REDRESS & **85.48 \(\pm\) 3.47** & **95.22 \(\pm\) 0.80** & 69.80 \(\pm\) 0.47 \\ \cline{2-6} & & Vanilla & 75.47 \(\pm\) 0.37 & 91.71 \(\pm\) 0.13 & **70.44 \(\pm\) 0.69** \\ & & InFoRM & 74.82 \(\pm\) 0.18 & 91.60 \(\pm\) 0.14 & 69.20 \(\pm\) 1.45 \\ ArXiv & SAGE & REDRESS & 75.51 \(\pm\) 0.73 & 91.58 \(\pm\) 0.16 & 69.34 \(\pm\) 0.55 \\ & & REDRESS + MLP & 74.45 \(\pm\) 0.53 & 91.40 \(\pm\) 0.09 & 69.75 \(\pm\) 0.18 \\ & & GFairHint & 81.92 \(\pm\) 0.17 & 94.34 \(\pm\) 0.04 & 70.40 \(\pm\) 0.34 \\ & & GFairHint + REDRESS & **85.32 \(\pm\) 3.45** & **95.22 \(\pm\) 0.79** & 68.98 \(\pm\) 0.25 \\ \cline{2-6} & & Vanilla & 76.64 \(\pm\) 0.21 & 92.04 \(\pm\) 0.09 & 70.86 \(\pm\) 0.64 \\ & & InFoRM & 76.37 \(\pm\) 0.04 & 91.92 \(\pm\) 0.11 & 69.73 \(\pm\) 0.40 \\ & & REDRESS & 77.46 \(\pm\) 0.09 & 92.18 \(\pm\) 0.02 & 69.74 \(\pm\) 0.19 \\ & & REDRESS + MLP & 76.23 \(\pm\) 0.98 & 91.86 \(\pm\) 0.25 & 70.45 \(\pm\) 0.30 \\ & & GFairHint & 81.80 \(\pm\) 0.14 & 94.20 \(\pm\) 0.04 & **71.06 \(\pm\) 0.45** \\ & & GFairHint + REDRESS & **85.49 \(\pm\) 4.73** & **95.20 \(\pm\) 1.19** & 69.89 \(\pm\) 0.11 \\ \hline \multirow{8}{*}{GCN} & & Vanilla & 33.90 \(\pm\) 0.73 & 76.99 \(\pm\) 0.08 & **70.78 \(\pm\) 0.18** \\ & & REDRESS & 34.82 \(\pm\) 0.80 & 76.98 \(\pm\) 0.13 & 70.15 \(\pm\) 1.77 \\ \cline{2-6} & & REDRESS + MLP & 30.93 \(\pm\) 0.46 & 76.66 \(\pm\) 0.19 & 70.64 \(\pm\) 1.89 \\ & & GFairHint & 35.12 \(\pm\) 0.34 & 76.39 \(\pm\) 0.52 & 69.70 \(\pm\) 0.77 \\ & & GFairHint + REDRESS & **35.58 \(\pm\) 2.85** & **77.00 \(\pm\) 0.16** & 69.77 \(\pm\) 0.95 \\ \cline{2-6} & & Vanilla & 30.55 \(\pm\) 1.86 & 76.63 \(\pm\) 0.18 & 69.26 \(\pm\) 0.60 \\ & & REDRESS & 31.58 \(\pm\) 1.06 & 76.68 \(\pm\) 0.04 & 68.23 \(\pm\) 0.97 \\ ACM & SAGE & REDRESS + MLP & 28.73 \(\pm\) 0.12 & 76.29 \(\pm\) 0.78 & **69.32 \(\pm\) 0.44** \\ & & GFairHint & 36.12 \(\pm\) 0.72 & 76.39 \(\pm\) 0.25 & 69.24 \(\pm\) 0.11 \\ & & GFairHint + REDRESS & **37.83 \(\pm\) 3.78** & **77.37 \(\pm\) 0.55** & 67.52 \(\pm\) 0.16 \\ \cline{2-6} & & Vanilla & 34.62 \(\pm\) 0.28 & 77.00 \(\pm\) 0.20 & **71.14 \(\pm\) 1.14** \\ & & REDRESS & 34.83 \(\pm\) 0.45 & 77.40 \(\pm\) 0.28 & 70.49 \(\pm\) 0.87 \\ GAT & REDRESS + MLP & 32.82 \(\pm\) 1.36 & 76.22 \(\pm\) 0.09 & 69.87 \(\pm\) 0.70 \\ & & GFairHint & 37.52 \(\pm\) 0.54 & 76.79 \(\pm\) 0.27 & 71.04 \(\pm\) 0.74 \\ & & GFairHint + REDRESS & **43.01 \(\pm\) 2.02** & **77.50 \(\pm\) 0.36** & 69.65 \(\pm\) 0.88 \\ \hline \hline \end{tabular} \end{table} Table 2. Node classification results for the citation datasets ArXiv and ACM with cosine similarity as the similarity measure. The number of layers and the hidden layer dimension of the backbone GNN models are 10 and 128 respectively. All values are reported in percentage. The first-best performance is marked in bold, and the second-best performance is underlined. Section 5.2: How important is the fairness 
hint for GNN models when making predictions? Section 5.3: How does the fairness constraint hyperparameter \(\gamma\) influence the performance of GFairHint? Section 5.4: How computationally efficient is GFairHint and how does it compare with other baselines?

### Effectiveness of GFairHint

In this section, we present the results of our proposed methods and baselines on the collected datasets. For each dataset, we choose the model hyperparameter setting with the better average utility between the large and small model size settings described in Section 4.4.

_Oracle Similarity Matrix based on Input Feature_. For the citation and co-authorship networks, we use the input feature similarity as the entries of the oracle similarity matrix to construct the fairness graph. The results are shown in Table 2 for the citation networks (i.e., ArXiv and ACM) and Table 5 in Appendix B for the co-authorship networks (i.e., Phy and CS). 8 For utility performance, our proposed GFairHint and GFairHint + REDRESS models achieve comparable results with the vanilla backbone GNN models and other fairness promotion models. Indeed, in 5 out of 6 experiments on the co-authorship datasets, our models achieve the best utility performance. Regarding the fairness performance, our proposed models achieve the best fairness performance in nearly all settings, except for the ERR value of the GNN models on the Phy dataset, where we still achieve ERR values comparable with the SOTA REDRESS model. Moreover, our proposed GFairHint also behaves better than the REDRESS model in most scenarios on the citation network datasets. The InFoRM method does not improve the fairness performance on the ArXiv dataset, which is consistent with the results on the other academic networks from the REDRESS paper (Fan et al., 2017).

Footnote 8: We follow previous works (Fan et al., 2017; Chen et al., 2017) in using cosine similarity as the oracle similarity measure. We additionally include the results and discussion of using a similarity measure based on Euclidean distance in Appendix B.

_Oracle Similarity Matrix based on External Annotation_. For the Crime dataset, we construct a fairness graph from collected human expert judgments. We show the results in Table 3. We find that for all three backbone GNN models, GFairHint and GFairHint + REDRESS are the best two methods in the fairness (Consistency) evaluation. This is as expected, since the Vanilla and REDRESS models do not have access to fairness information in this setting, and the InFoRM and PFR methods are not specifically designed for GNN models. GFairHint and GFairHint + REDRESS have close performance in Consistency, demonstrating the effectiveness of the fairness hint even when it is used alone. Although GFairHint + REDRESS has slightly better results, it has a much higher computational cost because of the ranking-based loss. A detailed discussion of computational efficiency is in Section 5.4. For the GCN and GAT backbone models, our proposed methods achieve the best two results in the utility (accuracy) evaluation.

_Summary._ We systematically evaluate the utility performance and fairness performance in \(5\times 3=15\) combinations of dataset and backbone model, which results in 15 utility comparisons and 27 fairness comparisons.9 Our proposed GFairHint + REDRESS method achieved the best fairness performance in almost all comparisons (\(24/27\)), while GFairHint performed second best in \(16/27\) of the comparisons when applied alone. 
These two methods also have comparable utility performance with the Vanilla model, as they ranked top two in \(12/15\) utility comparisons. Although GFairHint + REDRESS achieved better fairness performance than GFairHint in general, the gaps are small. GFairHint even ranked higher in \(6/15\) utility performance, especially for large dataset (\(3/3\)). These observations empirically show that GFairHint achieves a good balance between utility and fairness, and demonstrate that while GFairHint is better than previous work on individual fairness promotion in most cases, it is also complementary to other methods with fairness regularization loss and can further improve the performance. Footnote 9: For the four academic networks with oracle similarity matrix based on input feature, we evaluated two fairness metrics, which leads to \((4\times 2+1)\times 3=27\) comparisons for fairness ### Importance of Fairness Hint In Section 3.4, we prove that our learned fairness hints are individually fair and integrate the fairness hints into the training of backbone GNN models to achieve better fairness promotion. We further evaluate: _Does GNN backbone model utilize the concatenated fairness hints in the node label prediction?_ We answer this question by borrowing the idea of saliency map (Zhu et al., 2017) to demonstrate the relative importance of utility node embedding, \(u\), and fairness hints, \(v^{f}\), when making predictions. We calculate the gradient of the model output w.r.t. each dimension of the node embedding as the importance score for the dimension. We then average the importance score for utility node embedding and fairness hint as \(\text{Score}(u)\) and \(\text{Score}(v^{f})\) respectively. The details are shown in Appendix D. \begin{table} \begin{tabular}{l l c c} \hline \hline **Backbone** & **Method** & **Consistency** & **Acc** \\ \hline \multirow{8}{*}{**GCN**} & Vanilla & \(54.80\pm 0.23\) & \(73.83\pm 0.34\) \\ & PFR & \(52.20\pm 0.55\) & \(71.53\pm 1.10\) \\ & InFoRM & \(56.84\pm 1.77\) & \(72.93\pm 0.96\) \\ & REDRESS & \(54.07\pm 0.96\) & \(73.98\pm 0.70\) \\ & REDRESS + MLP & \(53.06\pm 1.04\) & \(73.58\pm 1.80\) \\ & GFairHint & \(62.76\pm 2.74\) & \(75.44\pm 0.71\) \\ & GFairHint + REDRESS & \(\mathbf{63.61\pm 4.4}\) & \(\mathbf{75.54\pm 0.90}\) \\ \hline \multirow{8}{*}{**SAGE**} & Vanilla & \(62.09\pm 0.50\) & \(\mathbf{82.16\pm 0.33}\) \\ & PFR & \(56.75\pm 0.95\) & \(80.15\pm 0.67\) \\ & InFoRM & \(60.93\pm 2.59\) & \(79.05\pm 0.51\) \\ \cline{1-1} & REDRESS & \(61.46\pm 1.91\) & \(82.11\pm 0.52\) \\ \cline{1-1} & REDRESS + MLP & \(61.46\pm 1.36\) & \(81.35\pm 0.34\) \\ \cline{1-1} & GFairHint & \(62.26\pm 0.98\) & \(80.60\pm 0.98\) \\ \cline{1-1} & GFairHint + REDRESS & \(\mathbf{62.49\pm 4.86}\) & \(80.85\pm 1.21\) \\ \hline \multirow{8}{*}{**GAT**} & Vanilla & \(55.17\pm 0.81\) & \(73.68\pm 0.79\) \\ \cline{1-1} & PFR & \(54.06\pm 1.20\) & \(73.83\pm 0.90\) \\ \cline{1-1} & InFoRM & \(53.44\pm 2.24\) & \(71.38\pm 1.43\) \\ \cline{1-1} & REDRESS & \(53.55\pm 1.15\) & \(72.88\pm 0.74\) \\ \cline{1-1} & REDRESS + MLP & \(51.84\pm 0.42\) & \(72.08\pm 1.24\) \\ \cline{1-1} & GFairHint & \(64.04\pm 2.74\) & \(\mathbf{75.34\pm 0.74}\) \\ \cline{1-1} & GFairHint + REDRESS & \(\mathbf{65.30\pm 3.60}\) & \(74.94\pm 1.05\) \\ \hline \end{tabular} \end{table} Table 3. Node classification results on the Crime dataset. Consistency measures the fairness of the model. The number of layers and the hidden layer dimension of backbone GNN models are 2 and 16 respectively. 
All values are reported in percentage. Best results are in bold, and second-best results are underlined.

From Table 4, we observe that the magnitudes of the importance scores of the utility node embedding and the fairness hint are comparable in our framework, which indicates that the GNNs in our method indeed use the fairness hint to predict the node labels. Moreover, the values of \(\frac{\text{Score}(u)}{\text{Score}(v^{f})}\) for GFairHint + REDRESS are lower than the ones for GFairHint, suggesting that the fairness hint becomes more important when integrated with the fairness loss. This further demonstrates that GFairHint is a plug-and-play framework, as the fairness hint can be effectively utilized by the previous SOTA fairness promotion method with a fairness regularization loss.

### Trade-off between Fairness and Utility

GFairHint + REDRESS achieves the best fairness performance, where we integrate the fairness hint with the ranking-based loss. The value of the hyperparameter \(\gamma\) in Equation 9 controls the strength of the fairness constraint, and there is a trade-off between utility and fairness when adjusting the \(\gamma\) value (Beng et al., 2015). To demonstrate the effectiveness of GFairHint, we perform experiments with multiple values of \(\gamma\) for the REDRESS and GFairHint + REDRESS models on the ArXiv dataset with GCN as the backbone GNN model. Figure 2 shows the trade-off between accuracy and fairness (NDCG@10) with varying values of \(\gamma\) for the REDRESS and GFairHint + REDRESS methods. The curves in the figure are generated by varying the fairness coefficient \(\gamma\) over a range from 0.01 to 100 and computing the Pareto frontiers. We also visualize the accuracy and NDCG@10 values for the Vanilla and GFairHint models as two data points for reference. When the value of \(\gamma\) is small (e.g., 0.001), the REDRESS and GFairHint + REDRESS models behave similarly to the Vanilla and GFairHint models respectively, as expected. When increasing the value of \(\gamma\), we observe fairness improvements for both REDRESS and GFairHint + REDRESS, and this improvement is more significant for GFairHint + REDRESS. We conjecture that the small improvement for the REDRESS model is due to the vanishing gradient problem of deep GNN models (Chen et al., 2016), which may reduce the impact of the fairness loss. The GFairHint + REDRESS model is less affected because it also directly learns from the fairness hint that is incorporated into the final GNN layers. We observe that at the same accuracy level, our proposed GFairHint + REDRESS model achieves a higher NDCG@10 value than the REDRESS model, demonstrating the effectiveness of the fairness hint. We expect that adjusting the trade-off between fairness and utility can provide more flexibility in practical applications; for example, some tasks may pay more attention to fairness than to utility.

### Efficiency Evaluation

In addition to fairness and utility results, we compare the efficiency of the GFairHint models with other baseline models in terms of time complexity and empirical training time. The time complexity of training an \(l\)-layer GCN model is \(\mathcal{O}(ln+l|e|)\), where \(|e|\) is the number of edges and \(n\) is the number of nodes (Beng et al., 2015). During the training of the GNN model, the ranking-based loss in REDRESS requires finding a list containing the top-k similar nodes for each node and ranking the list. 
The additional complexity of REDRESS is \(\mathcal{O}(n\cdot\log(n)\cdot k)\) (Beng et al., 2015), while our additional time complexity is \(\mathcal{O}(n)\), since we only add two MLP layers when training the GNN models. The additional cost introduced by the auxiliary link prediction task is small because its complexity is the same as that of the original node classification task, and the learned fairness hint can be reused for other backbone GNN models on the same dataset, which reduces the marginal cost of this auxiliary task. Moreover, even if the fairness hint is only used once, the time required for training a link prediction GNN and a node classification GNN is significantly smaller than using the REDRESS framework. We empirically show the difference in training time in Figure 3, with details in Appendix E. The results show that the additional cost of GFairHint is negligible compared to the Vanilla model. Therefore, the proposed GFairHint model is more scalable in practice when applied to large graph datasets.

## 6. Limitations and Future Work

One crucial question for individual fairness is the source of similarity measures. External annotation is often impractical, subjective, and potentially biased, and the input features can be an incomplete and imperfect source. It requires comprehensive domain knowledge to develop the fairness similarity measure for a specific real-world application.

\begin{table} \begin{tabular}{l l c c c} \hline \hline **Dataset** & **Method** & **Score(\(v^{f}\))** & **Score(\(u\))** & \(\frac{\text{Score}(u)}{\text{Score}(v^{f})}\) \\ \hline \multirow{2}{*}{Arxiv} & GFairHint & 0.044 & 0.041 & 0.923 \\ & GFairHint+REDRESS & 0.051 & 0.034 & 0.677 \\ \hline \multirow{2}{*}{ACM} & GFairHint & 0.158 & 0.056 & 0.354 \\ & GFairHint+REDRESS & 0.245 & 0.045 & 0.184 \\ \hline \multirow{2}{*}{CS} & GFairHint & 0.157 & 0.164 & 1.050 \\ & GFairHint+REDRESS & 0.109 & 0.092 & 0.848 \\ \hline \multirow{2}{*}{Phy} & GFairHint & 0.169 & 0.182 & 1.073 \\ & GFairHint+REDRESS & 0.225 & 0.166 & 0.738 \\ \hline \multirow{2}{*}{Crime} & GFairHint & 0.065 & 0.228 & 3.536 \\ & GFairHint+REDRESS & 0.022 & 0.057 & 2.594 \\ \hline \hline \end{tabular} \end{table} Table 4. Average importance scores of the utility node embedding and the fairness hint for each dataset with the GAT backbone model. We also report the ratio of the average importance scores of the two types of node embeddings.

Figure 2. Pareto frontiers of various methods on the ArXiv dataset. The upper-right corner (high accuracy, high NDCG@10) is preferable. Our method outperforms baseline methods significantly in the trade-off between fairness and utility.

We note that our framework is compatible with different types of similarity measures as long as the construction of a fairness graph is viable. Moreover, there may be bias in the original input features, and we can get a sense of such bias by looking at the fairness evaluations of the vanilla models. However, it is unavoidable to use the original information, as it is the only source of feature information available for making predictions. The intuition for promoting individual fairness is to use the oracle individual fairness similarity to guide the model to utilize such fairness information, whereas in this paper we rely on the learned fairness hint. 
We acknowledge that there are works that debias the original information directly as pre-processing (Koshelev et al., 2019; Zhang et al., 2020), but they are not as empirically effective as the proposed method or the in-processing method (Koshelev et al., 2019) when applied alone. Our method is also orthogonal to these pre-processing methods and can thus be integrated with them for better performance. Lastly, in this paper, we show the effectiveness of the fairness hint with a simple concatenation strategy; it is possible to develop more complex and better integration methods to utilize the fairness hint, especially in compliance with the tasks and data formats at hand. We leave these directions for future work.

## 7. Conclusions

In this work, we propose **GFairHint**, a plug-and-play framework for promoting individual fairness in GNNs via a fairness hint. Our method learns the fairness hint through an auxiliary link prediction task on a constructed fairness graph. The fairness graph can be derived from both continuous and binary oracle similarity matrices, corresponding to two ways of obtaining similarity for individual fairness, i.e., from the input feature space and from external human annotations. We also integrate GFairHint with another complementary individual fairness promotion method, REDRESS. We conduct extensive empirical evaluations on node classification tasks and show that our proposed method achieves a good balance between utility and fairness with much less computational cost.
2303.11912
Deephys: Deep Electrophysiology, Debugging Neural Networks under Distribution Shifts
Deep Neural Networks (DNNs) often fail in out-of-distribution scenarios. In this paper, we introduce a tool to visualize and understand such failures. We draw inspiration from concepts from neural electrophysiology, which are based on inspecting the internal functioning of a neural network by analyzing the feature tuning and invariances of individual units. Deep Electrophysiology, in short Deephys, provides insights into the DNN's failures in out-of-distribution scenarios by comparative visualization of the neural activity in in-distribution and out-of-distribution datasets. Deephys provides seamless analyses of individual neurons, individual images, and sets of images from a category, and it is capable of revealing failures due to the presence of spurious features and novel features. We substantiate the validity of the qualitative visualizations of Deephys through quantitative analyses using convolutional and transformer architectures, on several datasets and distribution shifts (namely, colored MNIST, CIFAR-10 and ImageNet).
Anirban Sarkar, Matthew Groth, Ian Mason, Tomotake Sasaki, Xavier Boix
2023-03-17T21:13:41Z
http://arxiv.org/abs/2303.11912v1
# Deephys: Deep Electrophysiology

###### Abstract

Deep Neural Networks (DNNs) often fail in out-of-distribution scenarios. In this paper, we introduce a tool to visualize and understand such failures. We draw inspiration from concepts from neural electrophysiology, which are based on inspecting the internal functioning of a neural network by analyzing the feature tuning and invariances of individual units. Deep Electrophysiology, in short Deephys, provides insights into the DNN's failures in out-of-distribution scenarios by comparative visualization of the neural activity in in-distribution and out-of-distribution datasets. Deephys provides seamless analyses of individual neurons, individual images, and sets of images from a category, and it is capable of revealing failures due to the presence of spurious features and novel features. We substantiate the validity of the qualitative visualizations of Deephys through quantitative analyses using convolutional and transformer architectures, on several datasets and distribution shifts (namely, colored MNIST, CIFAR-10 and ImageNet).

## 1 Introduction

Deep Neural Networks (DNNs) do not generalize well to distributions different from the training data distribution. The internal DNN mechanisms that lead to out-of-distribution (OOD) failures remain largely unknown. Understanding such mechanisms is a key question for the accountability, safety and fairness of intelligent systems.

A promising strand of research for understanding the internal decision-making mechanisms of DNNs, which has yet to be applied to OOD failures, is based on interpreting the neural activity of individual DNN neurons. It is well known that neurons are tuned to detect features present in the training data (Zeiler and Fergus, 2014; Bau et al., 2020; Elhage et al., 2022). That is, neurons are selective to a feature: the output value of the neuron is only high when the feature is present in the image. Also, such selectivity is invariant to some nuisance factors in the data, and invariance is at the core of generalization. The emergence of selective and invariant neural activity is a well-known phenomenon for both biological and artificial neurons and has already been shown to play a key role in OOD generalization (Anselmi et al., 2016; Sinha and Poggio, 1996; Ullman, 2000).

Previous works studied individual neurons in great detail (Zeiler and Fergus, 2014; Bau et al., 2020), but they are mostly targeted at understanding model behaviors in-distribution (InD). We extend this effort to examine neurons, for InD as well as OOD, through a neural-tuning-based explainability technique for visualization, developed over previous efforts (Koh et al., 2020; Sarkar et al., 2022), which can generate global explanations and is effective for OOD detection (Choi et al., 2022). When presented with out-of-distribution (OOD) input, neurons may exhibit atypical behaviours which correspond to the model's degraded performance. Therefore, insights can be gained at the neural level into the causes, and possible solutions, for failures under distribution shift. Following this hypothesis, we resort to interpreting model behaviours through understanding patterns in the neural tuning for different data distributions, and investigating the functional disparity of the neurons to gain insights about biases present in the datasets. 
While the prior works, conducted with a controlled set of features or dataset biases, establish new phenomena, they are not applicable to natural datasets due to the unavailability of a priori bias factors. Such features can be visible yet not easily quantifiable, in contrast to simulated datasets, where the images are generated with proper calculation of the biases. Here we concentrate on revealing these biases by visualizing the difference in the behavioural response of individual neurons for different data distributions. Such exploration provides enlightening insights about how the tuning of every neuron changes corresponding to the change in biases from InD to OOD.

In this paper, we focus on explaining distribution shifts by proposing multiple logically connected strategies. They provide meaningful interpretations of inconsistent model behaviors through a new visualization technique that can conceptualize the distinguishing factors of out-of-distribution (OOD) data compared to in-distribution (InD) data. Such differentiating traits are also captured by our proposed novel quantitative metrics, which corroborate our study and complement our findings about the model. Borrowing the terminology '_Electrophysiology_' from neuroscience, which is the study of electrical signals in biological cells and tissues, here we propose a toolkit to study neuron activation signals of deep networks under distribution shift. The idea to look at the response patterns of a single neuron as closely as possible inspired the name '_Deep Electrophysiology_', in short '_Deephys_'.

## 2 Results

Here we present detailed insights about each step of our proposed methodology and their contributions towards achieving the broader goal of demystifying model failures under distribution shift. The steps include the analysis of individual neurons to understand feature selectivity and invariance, and the investigation of activity per image as well as for a set of specifically selected images. We also explain how _Deephys_ can leverage such analyses, generated both for InD and OOD, seamlessly through a concept-based visualization technique. We consider examples from Colored MNIST (a separate color added to each MNIST category) and ImageNet, with their OOD versions (Permuted Colored MNIST and ImageNet Sketch respectively), to explain the different steps of our approach, which are shown in Fig.1 and 2. Please refer to Sec.4 for explanations about the datasets and the networks used for our experimentation. We conclude this section with specific examples from all datasets (including OOD versions) used in our experimentation to show the effectiveness of our proposed approach. It is worth mentioning that investigating any layer of a model can provide important insights about its behavior, and this can be performed through _Deephys_. Yet, we only consider the penultimate layer for this study due to its close proximity to the decision layer, which may reveal the most differentiating traits at the neuron level.

### Visualizing Feature Selectivity by exhibiting most activated images for neurons

Previous studies have shown that neurons can be considered as feature detectors (Elhage et al., 2022), where a feature can be any concept or artifact of the data, i.e., color, shape, texture etc., with a specific human understanding. Each image possibly consists of many such concepts, and visualizing the top activated images for a neuron provides a sense of the feature selectivity of the neuron. 
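As a concrete reference, selecting the top activated images per neuron reduces to a top-k over an activation matrix. The sketch below assumes activations of shape (num_images, num_neurons) from the penultimate layer; function and variable names are ours.

```python
import torch

def top_activating_images(activations, k=9):
    """activations: (num_images, num_neurons) penultimate-layer responses.
    Returns, for every neuron, the indices of the k images that activate
    it most strongly -- the per-neuron visualization described above."""
    acts = activations.clamp(min=0)              # keep positive responses only
    top = acts.topk(k, dim=0).indices            # (k, num_neurons)
    return top.t()                               # (num_neurons, k) image indices
```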
It is also possible for some neurons to learn superimposed features, leading to interference among them, which makes them less interpretable or completely uninterpretable for a human observer. Our visualizations also suggest similar behavior, as there are some neurons for which the top activated images may not seemingly share any particular feature. Moreover, while some features of a complex natural dataset cannot be easily named, they may be intuitively picked up by looking at the top activated images. Here we allow the human observer to visualize the neuron's response patterns and interpret which patterns require a closer examination.

The 'InD' heads of Fig.1(a) show the most activated images for 3 sample neurons, from the penultimate layer of a ResNet18 model, for **InD** (i.e. ImageNet). They demonstrate the selectivity of the neurons to features such as animals or birds in water (neuron 7), ancient buildings or architecture (neuron 15), and hourglasses or similar looking objects (neuron 30). We add other examples with Colored MNIST (separate colors for each MNIST category) as **InD** and Permuted Colored MNIST (same colors, but with the color-category association changed by a random permutation of colors) as **OOD** for better explanation. More details about the different versions of Colored MNIST are provided in Sec.4. The 'InD' heads of Fig.1(b) exhibit that the neurons are tuned to a pink '5' (neurons 2, 34) or a green '0' (neuron 4).

### Visualizing Invariance by comparing most activated images for InD and OOD

While visualizing neural responses to InD data is beneficial towards understanding feature selectivity, following the same procedure for OOD can enlighten us about invariance, i.e., the artifacts that do not affect the neurons' responses. Therefore, generating comparative visualizations of the neurons, both for InD and OOD, based on the same concept-based explainability technique can lead to an in-depth exploration of invariance through the different bias factors of the datasets. Our proposed method can produce such synchronous visualization for any neuron of the model and allows the observer to investigate its behavioral change and capture a broad understanding of its invariance.

As the neurons are tuned to patterns originating from InD data, they respond on encountering similar patterns in OOD data. The activation ratio of a neuron for OOD data compared to InD data (considering only the highest activation scores for both datasets) can reflect the similarity of features. Also, some neurons can be tuned to spurious features in InD data. When the spurious features are associated with different categories in OOD data, the neurons become selective to these categories. These phenomena can be observed with our idea of comparative visualization of neurons.

Fig.1 exhibits the most activated images, for the same neurons as presented in the last section, for **OOD** (i.e. ImageNet Sketch). This shows that the neurons are selective to very similar artifacts in OOD data as in InD data, i.e., birds in water (neuron 7), ancient buildings (neuron 15), and hourglasses or similar objects or animal body parts (neuron 30), but are invariant to colors. The neural activation ratio for each neuron (considering only the highest activations of that neuron for OOD compared to InD) represents the existence of a feature in OOD data with respect to InD data. Here, neurons 15 and 30 show high feature similarity (86.6% and 81.6% respectively), but neuron 7 shows less similarity (61.4%) between OOD and InD. 
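The activation ratio used above can be computed per neuron from the InD and OOD activation matrices; a minimal sketch (variable names are ours) follows.

```python
import torch

def activation_ratio(acts_ind, acts_ood):
    """Per-neuron ratio of the highest OOD activation to the highest InD
    activation (both post-ReLU). Values near 1 suggest the tuned feature
    also exists in OOD; low values suggest a novel feature."""
    max_ind = acts_ind.clamp(min=0).amax(dim=0)  # (num_neurons,)
    max_ood = acts_ood.clamp(min=0).amax(dim=0)
    return max_ood / max_ind.clamp(min=1e-8)     # avoid division by zero
```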
Also, Fig.1 shows that neuron 2 is selective to the shape of the digit with a 71.3% activation ratio, whereas neurons 4 and 34 are tuned to the spurious feature 'color' and selective to different categories with even higher activation ratios, i.e., 91% and 83.5% respectively.

Figure 1: **Feature selectivity and Invariance per neuron.** (a) The **'Neuron'** view allows visualizing the neural tuning for every neuron along with the highest activation ratio for OOD. Three random neurons (7, 15 and 30) from the penultimate layer are visualized here, indicating that they fire for animals or birds in water, ancient buildings or architectures, and hourglasses or similar objects, but are invariant to colors. The firing intensities of these neurons are shown by the activation ratios 61.4%, 86.6% and 81.6% for OOD compared to InD. (b) Considering random neurons for a different model with Colored MNIST (InD) and Permuted Colored MNIST (OOD), we observe tuning to the shape of the digit with a 71.3% activation ratio (neuron 2), whereas neurons 4 and 34 are tuned to the spurious feature 'color' and selective to different categories with even higher activation ratios, i.e., 91% and 83.5% respectively.

### Analyzing Activity per Image

In addition to investigating the behaviour of individual neurons, it is also essential to understand their responses for individual images in different datasets. The neural responses are normalized for every neuron for InD (i.e., dividing the activations of all the images by the highest score for that neuron), which brings all the normalized activations between 0 and 1. Please note that we consider only positive neural activations (by applying _ReLU_) before applying the normalization. For a selected image, the most activated neurons are identified and listed by descending normalized activation. These normalized scores are attached to the neurons, reflecting their individual importance for the image. For complete understanding, we follow the same concept-based approach of explaining a single neuron as in the previous steps, and equip every neuron in this 'per **image**' view with such visualizations. Such visualizations can provide insights about the model's perception of an image in terms of the individual neurons that are important for the image. Generally, one or a few of the top most activated neurons are expected to be similarly tuned to the features of the selected image. These top neurons can be individually tuned to different patterns of the image, depending on the complexity and coexistence of dissimilar features in the image.

As the purpose of this study is to compare model behavior for OOD to InD, we follow the same normalization procedure here by dividing all the activations of OOD images of a neuron by the highest score of that neuron for InD (i.e., the neural response ratio). For a selected InD image, we include visualizations of the same set of neurons for OOD, with the neural response ratio and the top activated images from OOD for every neuron. Conversely, it is also possible to select an image from OOD and generate a similar comparative visualization. We always calculate the normalization with respect to InD, but here the highest activated neurons are identified for the OOD image and accompanied by the concept-based visualization, and the same order of neurons is followed for the visualizations with InD. 
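The per-image view just described boils down to normalizing activations by each neuron's InD maximum and ranking the neurons; a minimal sketch under the same assumptions as before:

```python
import torch

def rank_neurons_for_image(acts_ind, act_image, top=5):
    """Normalize one image's activations by each neuron's InD maximum and
    return the most activated neurons with their normalized scores, i.e.,
    the per-image view described above (names are ours)."""
    max_ind = acts_ind.clamp(min=0).amax(dim=0).clamp(min=1e-8)
    scores = act_image.clamp(min=0) / max_ind    # normalized activations
    vals, idx = scores.topk(top)
    return idx, vals                             # neuron ids, importance scores
```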
Such visualization demonstrates the overall behaviour of a model by explaining individual images from OOD and helps to identify inconsistent performance by explaining the change in response of the most important neurons for the selected image. Part (a) of Fig.2 exhibits the capability of _Deephys_ to select an image from OOD and visualize the corresponding top activated neurons while comparing the behavior of the same set of neurons on InD. Following the interest in understanding failure cases from OOD, two sample images from ImageNet Sketch (i.e. OOD) are considered for analysis. For the 'Brain coral' image (id 5583), predicted as 'Wool' by the model, the most activated neurons (i.e. 185, 477, 86, ...) are selective to features seemingly similar to 'Wool', as evident from the visualizations of the same neurons for InD. Similarly, for the 'Soccer ball' image (id 40983), predicted as 'Honeycomb', the most activated neurons (i.e. 307, 243 and 300) are tuned to features like honeycombs.

Figure 2: **(a) Activity per Image.** Any image, either from InD or OOD, can be examined in the '**Image**' view through a comparative visualization of the most activated neurons for the selected image. Specifically, this helps explain failure cases from OOD, such as a 'Brain coral' image (id 5583) predicted as 'Wool', where the most activated neurons (185, 477, 86) for this image fire for 'Wool' category images, as evident from the neural behaviour for InD. Similar explanations for the second, 'Soccer ball' image (id 40983) are shown with neurons 307, 243 and 300. **(b) Activity per Category Confusion.** Images from a specific category, or from a pair of categories where images of one category are predicted as the other, are investigated in the '**Category**' view through a similar visualization as (a), but by picking the most activated neurons for the set of images under consideration. We show examples of 'Maze' confused with 'Doormat' with top neurons (320, 97, 64) and vice versa (120, 300, 320).

### Analyzing Category-wise Activity

Explaining the unreliable behaviour of the model against OOD data may also require interpreting model responses for a specific set of images. We consider either all the images of a category, or the set of images of a category that confuse the model into predicting another category, to be of great interest towards our goal. Such visualization is generated through a similar calculation as in the previous step, but the top neurons are computed considering all the images of the selected set. Here, the neuron activations are averaged over all the images and presented in descending order. Part (b) of Fig.2 shows a similar visualization, with a set of images considered instead of a single image from OOD. More specifically, the set of images is the confusion set for a pair of categories, which is of particular interest towards understanding overall model failure. In order for an image to qualify as being part of the confusion between two categories, it must be part of at least one instance of a false positive or false negative misclassification between the two categories. Here, analyzing the top neurons for the set of images with ground truth _Maze_ and model prediction _Doormat_, taken together, can explain the reason behind such actions by the model, as these neurons are mostly tuned to 'Doormat' or objects with very similar attributes. 
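The selection rule for the category confusion view can be sketched as follows; the function names are our own illustration of the rule described above.

```python
import torch

def confusion_set(y_true, y_pred, cat_a, cat_b):
    """Images participating in the confusion between two categories:
    ground truth of one category predicted as the other, in either
    direction (the 'Category' view's selection rule)."""
    a_as_b = (y_true == cat_a) & (y_pred == cat_b)
    b_as_a = (y_true == cat_b) & (y_pred == cat_a)
    return torch.nonzero(a_as_b | b_as_a).squeeze(1)

def top_neurons_for_set(activations, image_ids, top=5):
    """Average (post-ReLU) activations over the selected images and
    return the most activated neurons for the whole set."""
    mean_acts = activations[image_ids].clamp(min=0).mean(dim=0)
    return mean_acts.topk(top).indices
```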
### Designing Deephys Software

As a single biological neuron's firing properties in response to different stimuli are rigorously compared in an electrophysiological study, _Deephys_' comparative analyses of artificial neural responses can shed light on the internal mechanisms of the black box of deep learning and generate insights into how or why a model has dysfunction in OOD cases.

Figure 3: Analysis of (a) new features and (c) spurious features in different OOD versions of 'Colored MNIST' (InD), i.e., 'Permuted Colored MNIST', 'Arbitrary Colored MNIST' and 'Noisy Colored MNIST'. One example is considered representing each neural property in OOD, i.e., new features ((b) neuron 21) and spurious features ((d) neuron 39). Failure cases from OOD versions are investigated with comparative visualization of the most activated neurons of OOD as well as InD ((e) neurons 30, 12, 36, and (f) neurons 11, 14, 21).

_Deephys_ is an application designed to provide concrete guidance, through its four main steps, for understanding the behavioral change of any deep model under distribution shift. Even though we have experimented with object recognition models in this work, _Deephys_ can be applied to models specialized for other types of tasks, e.g., object detection or segmentation. A user only needs to provide the neural activations from any layer of a deep architecture, generated for the different datasets according to the user's requirements, and the final layer logits (i.e. the classification layer) to _Deephys_ for the analysis. These should be accompanied by the images from all the selected datasets, with their ground truth information as well as the model predictions, for complete visualizations. The visualizations span multiple non-sequential steps consisting of individual neurons from the selected deep model, and any image or set of images from the datasets related to a specific category or to a pair of categories confusing the model. Users are expected to switch back and forth between the steps for a complete understanding of the model's functioning and its inconsistent performance. The steps of _Deephys_ are designed to provide easy navigation among them and efficient inspection of informative points for the users.

_Deephys_ provides concept-based explanations of neural responses by generating visualizations containing the top images from a dataset that maximally activate each neuron. Such comparative visualizations can be efficiently generated through _Deephys_, with the flexibility to increase the number of top images for every neuron to suit a user's preference. Such visualization provides enriched insights about the feature selectivity of the neurons and the model's perception of a dataset as a whole. We also add the possibility of exploration for every image through _Deephys_ in a way that is meaningful for human observers. Please note that we consider only positive neural activations (by applying _ReLU_) before transferring them to _Deephys_. Continuing the theme of comparative understanding between InD and OOD from the last step, we provide the same 'per **image**' visualization for OOD in _Deephys_. Please note that _Deephys_ allows a user to generate such comparative visualization, including the activation normalization and activation ratio calculation, rather than interpreting a single dataset, based on the requirements of the final goal. 
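Collecting the inputs that _Deephys_ expects (activations of a chosen layer plus final-layer logits) can be done with a standard forward-hook pattern in PyTorch. The sketch below is a generic illustration of that pattern, not the _Deephys_ package API; it assumes the data loader yields (image, label) batches.

```python
import torch

def extract_for_deephys(model, layer, loader, device="cpu"):
    """Collect activations of `layer` and final-layer logits for every
    image via a forward hook (generic pattern, not the Deephys API)."""
    acts, logits = [], []
    handle = layer.register_forward_hook(
        lambda module, inputs, output: acts.append(output.flatten(1).detach().cpu()))
    model.eval()
    with torch.no_grad():
        for images, _labels in loader:       # assumes (image, label) batches
            logits.append(model(images.to(device)).cpu())
    handle.remove()
    return torch.cat(acts), torch.cat(logits)
```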
_Deephys_ also allows the human observer to investigate a set of images for any category, or for any pair of categories, following the explanation generation techniques shown in the last section. All these investigations are supported by quantitative metrics to measure feature novelty and spuriousness.

### Analyzing Colored MNIST

With the purpose of investigating neural selectivity and invariance to bias factors, we design a dataset by associating a color to every MNIST category. We are also interested in analyzing the behavioral change of the neurons when we alter this association in meaningful ways, along with the corresponding effects on model performance. Colors are added to each MNIST category such that all the images of a category are given one color and a different color is given to every category. We train a deep model with this dataset and hence consider this (Colored MNIST) as InD. We designed OOD versions by permuting the original colors over the categories (Permuted Colored MNIST), adding a new set of arbitrary colors (Arbitrary Colored MNIST), and adding a constant shift to all the original colors (Drifted Colored MNIST). Please refer to Sec 4 for details on the different versions of Colored MNIST.

Fig.3(a) and (b) represent feature novelty in the OOD versions compared to InD, revealing that the neural tuning is mostly governed by the colors rather than the shapes of the digits, and that the selectivity of a neuron changes based on the colors. Adding a constant shift to the colors shows very small novelty as well as spuriousness, but arbitrary colors manifest high novelty and medium spuriousness. On the other hand, permuting the colors does not change the set of colors as a whole in OOD, and the neurons still fire for them while they are associated with different categories, showing less novelty and high spuriousness. Fig.3(b) and (d) show similar traits for two sample neurons with top images from all the datasets. Neuron 21 fires for the same 'dark yellow' color with less novelty (maximum activation ratio 83.6%) compared to higher novelty (maximum activation ratio 23.4%) for a new color (from the arbitrary set of colors). Neuron 39, on the other hand, is more tuned to the 'dark green' color and selective to different categories, both for Permuted and Arbitrary Colored MNIST (color similarity of 'blue' to 'green' and shape similarity of specific types of '0' and '5' in Fig.3(d)), causing high spuriousness. Both neurons show low novelty and spuriousness for Drifted Colored MNIST.

Figure 4: Analysis of (a) new features and (c) spurious features in the OOD version of 'CIFAR10' (InD), i.e., 'CIFAR10.2'. One example is considered representing each neural property in OOD, i.e., new features ((b) neuron 25) and spurious features ((d) neuron 1). (e) Confusing images are observed for the categories 'Plane' and 'Dog', both for InD and OOD, to discover multiple cases of mislabelling and difficult images in OOD compared to InD.

We present two sample examples of mispredictions by the model, one each from Permuted and Drifted Colored MNIST, with visualizations of the most activated neurons from the same dataset accompanied by visualizations of the same neurons from InD. This explains the behavior of the model on wrong predictions: a '0' from Permuted Colored MNIST is predicted as '2' due to neural tuning to the 'orange' color, but the similarity of specific '2' shapes adds a small edge (57.3% normalized activation of neuron 11) over the similarity 
‘ImageNetV2’, ‘ImageNet Sketch’ and ‘Stylized ImageNet’. One example is considered representing each neural property in OOD i.e. new features ((b) neuron 33) and spurious features ((d) neuron 177). One failure case from ‘Stylized ImageNet’ is investigated with comparative visualization of most activated neurons of OOD as well as InD ((e) neurons 243, 63, 206). All these visualizations are generated with a ResNet18 model trained on ImageNet (InD). to specific '7' (52.1% normalized activation of neuron 21). It is also noteworthy that the confusion of the model regarding shape is observed through neuron 14 which is the second most activated neuron (52.7% normalized activation) and is tuned to specific shape related to '2' or '7'. ### Analyzing CIFAR10 CIFAR10 contains natural images of 10 categories. Similar strategies were adopted to collect images for CIFAR10.2 consisting of the same categories. Please refer Sec 4 for details on these versions of CIFAR10. Even through the images are visibly similar, CIFAR10.2 achieves 10% less performance compared to CIFAR10, which is very hard to explain. Given that the main purpose of our proposed methodology is to explain natural datasets, we are more interested on the outcome when investigated with CIFAR10 versions. Fig4(a) and (c) clearly explains that there is little novelty as well as the neurons are selective to very similar categories for both the datasets. Considering specific neuron examples show similar trend, i.e. neurons respond to almost identical features such as 'dog' (neuron 25) or 'plane' (neuron 1), which are shown in Fig4(b) and (d). But analysing with the category confusions exhibits the important aspect of many mislabelled and difficult images in CIFAR10.2 compared to CIFAR10. Fig4(e) represents confusing images for 'plane' and 'dog' from both the datasets. Considering nearly similar scenarios for other pair of categories, CIFAR10.2 fails to reach similar performance compared to CIFAR10. ### Analyzing ImageNet Here, we select ResNet18 architecture and OOD versions of ImageNet (InD) with similar categories, but lesser number of images (ImageNetV2), sketchy images (ImageNet Sketch) or images generated with added painting styles (Stylized ImageNet). Please refer Sec 4 for details on different versions of ImageNet. We call different versions of ImageNetV2, ImageNet Sketch and stylized ImageNet as V2, sketch and style in short. While V2 contains natural images and is expected to exhibit similar behavior as InD, it has less novelty and spuriousness as well. Sketch is missing color features, but contains shape and texture features in most of the images, which can be misleading for the model and lead to misprediction. This is reflected in medium novelty and high spuriousness. On the other hand, style has additional painting information that leads to high novelty, but little less spuriousness. This analysis can be seen in Fig5(a) and (c). We consider sample neurons to visually explain the plots in Fig5(b) and (d). Neuron 33 shows high activation ratio for V2 and sketch representing high feature similarity or less novelty (110% and 87.8% respectively), but less similarity or high novelty for style (67% ratio). Top images of neuron 177 show neural selectivity for very similar categories for V2 and little bit of style, but mostly different categories for sketch. 
We show one example of a misprediction by the model, from style, with visualizations of the most activated neurons from the same dataset, accompanied by visualizations of the same neurons from InD, in Fig.5(e). The most activated neuron for the selected 'Bottlecap' image (neuron 243, with 65% normalized activation) is tuned to images that are mostly from the 'Honeycomb' category in InD. As the other neurons show comparatively lower importance for the selected image (i.e., 55.3% and 54.8% normalized activation for neurons 63 and 206, respectively), the model decision, i.e. 'Honeycomb', is mostly driven by neuron 243.

A very similar phenomenon for the novelty and spurious scores is observed for ResNet50, as presented in Fig.6(a) and (b). We also experimented with the same network, but trained on the style training data, which is then considered as InD. Similar plots for this experimental setting are presented in Fig.6(c) and (d), which show lower novelty and spuriousness for style, as expected. Even though V2 has slightly more novelty than style, its selectivity to different categories reflects more spuriousness. Sketch exhibits the highest novelty and spuriousness, which indicates the existence of very different features and associations to different categories compared to style.

Figure 6: Analysis of (a) new features and (b) spurious features in different OOD versions of 'ImageNet' (InD), i.e. 'ImageNetV2', 'ImageNet Sketch' and 'Stylized ImageNet', where the model is ResNet50. (c) and (d) show the same analysis where the ResNet50 network is trained on 'Stylized ImageNet'.

We also experiment with a convolutional vision transformer (Wu et al., 2021) to understand whether its neurons show meaningful traits similar to those of convolutional networks, and whether analogous explanations can be generated for mispredictions in this case. Fig.7(a) and (c) show very similar traits for the novelty and spurious scores as for the ResNet18 and ResNet50 models shown before. Here, experimenting with the neuron visualizations indicates that every neuron is tuned to more than one artifact. Fig.7(b) visualizes sample neuron 31, which has the highest novelty for style, but less novelty for V2 and sketch. Besides, the most activated images for neuron 6 in Fig.7(d) show high selectivity to similar categories for V2, compared to sketch and style. Fig.7(e) shows an 'Electric locomotive' image, predicted by the model as 'Passenger car', as the most activated neuron responds to 'Passenger car' images from InD with high normalized activation (91.3%) compared to the subsequent neurons (67 and 233, with normalized activations of 60.4% and 57.3%, respectively).

## 3 Discussion

The ubiquitous phenomenon of well-performing models failing under distribution shifts is well known to the community and has been an important research direction for many years now (Hendrycks and Dietterich, 2019; Szegedy et al., 2013; Recht et al., 2019). Multiple factors can trigger a distribution shift, such as different data collection or preprocessing techniques, a change in data source or environment (Koh et al., 2021), or noise and small corruptions infused in the input data (Geirhos et al., 2018).
Prior works mostly target detecting distribution shifts (Rabanser et al., 2019; Quinonero-Candela et al., 2008; Kulinski et al., 2020), designing customized training techniques for improved generalization (Wang et al., 2020; Sun et al., 2020), or applying data augmentation to the input data (Cubuk et al., 2018; Hendrycks and Dietterich, 2019; Hendrycks et al., 2019; Rusak et al., 2020; Schneider et al., 2020; Lopes et al., 2019). Here, we focus on explaining distribution shifts by investigating the change in neural responses under such shifts, with the purpose of making meaningful interpretations about inconsistent model behavior.

Figure 7: Analysis of (a) new features and (c) spurious features in different OOD versions of 'ImageNet' (InD), i.e. 'ImageNetV2', 'ImageNet Sketch' and 'Stylized ImageNet'. One example is considered representing each neural property in OOD, i.e. new features ((b) neuron 31) and spurious features ((d) neuron 6). One failure case from 'Stylized ImageNet' is investigated with comparative visualization of the most activated neurons of OOD as well as InD ((e) neurons 51, 67, 233). All these visualizations are generated with a Convolutional Vision Transformer (CVT) model trained on ImageNet (InD).

Selectivity to a category and invariance to some features are two common properties, observed both for biological and artificial neurons, that have been shown to be connected to OOD generalization (Anselmi et al., 2016; Sinha and Poggio, 1996; Ullman, 2000). While the factors of dataset bias considered are mostly a constrained set of orientations and illumination conditions (Alcorn et al., 2019), deep networks have been shown to overcome such biases by transferring knowledge from a richer set of conditions to biased conditions for an object (Madan et al., 2022; Zaidi et al., 2020). A recent study showed that encouraging invariant neural representations can lead to improved OOD generalization for deep models (Sakai et al., 2022). The inability of all these approaches to be applied to natural datasets, due to the absence of explicit bias factors, motivated us to propose a method that observes such biases by visualizing the change in individual neural responses. Human intuition can play a critical role in identifying dataset biases that are otherwise not obviously quantifiable. It is possible to quantify such biases once diligent attention is given to them, but without the original intuition they may never have been found in the first place.

Despite being popular, feature attribution and gradient-based approaches (Sundararajan et al., 2017; Ribeiro et al., 2016), which are mainly local explanation generation methods, can be useful in interpreting failures on InD, whereas the same for OOD may not provide the most intuitive form of explanation for humans. Also, the generated explanations may not efficiently capture visual disparities between InD and OOD inputs (Adebayo et al., 2020). For this work, we consider concept-based explanations to interpret model behaviors in terms of concepts or artifacts that are more aligned with human understanding. Recent concept-based explainability techniques mainly focus on tweaking object recognition models to generate better explanations (Koh et al., 2020; Sarkar et al., 2022), or on explaining the decisions of OOD detectors (Choi et al., 2022). Some previous efforts tried to improve model generalization capability by minimizing the dependency on spurious features or improving the effect of other, non-spurious features.
(Arjovsky et al., 2019) proposed a new model training approach that reduces the statistical association with spurious features by encouraging model invariance to them. (Eastwood et al., 2021) showed that restoring important features from InD and extracting features with the same semantics from OOD can improve model performance on OOD. Our approach, on the other hand, follows concept-based visualizations to identify the features learnt by individual neurons, both for InD and OOD datasets. We leverage such comparative visualizations to investigate the spuriousness of the features at the neuron level and elucidate the causes of abnormal model behaviors.

## 4 Methods

### Datasets

A specific set of datasets is considered for the experiments to exhibit the effectiveness of Deephys. Even though we claim that Deephys can be applied to datasets with natural images, we are also interested in datasets with manually added bias factors. We present the datasets below and show how Deephys can generate insights about the failure cases of a model.

#### 4.1.1 Colored MNIST

MNIST is a handwritten digit dataset comprising single-channel images from 10 categories, 0 to 9. Different colors are added to each MNIST category such that all the images of a category are given a color and a different color is given to every category. We train a deep model with this dataset and hence consider this (Colored MNIST) as InD. A similar idea is followed for the OOD versions, but they differ in the set of colors that are selected for each category. We generate three versions by permuting the original colors over the categories (Permuted Colored MNIST), adding a new set of arbitrary colors (Arbitrary Colored MNIST), and adding a constant shift to all the original colors (Drifted Colored MNIST). Please refer to part (a) of Fig.8 for example images from the InD and all the OOD versions of Colored MNIST.

#### 4.1.2 CIFAR-10 Versions

CIFAR10 contains natural images of 10 categories. Similar strategies were adopted to collect images for CIFAR10.2, consisting of the same categories. Sample images from all the categories of both datasets are shown in part (b) of Fig.8.

#### 4.1.3 ImageNet Versions

ImageNet is a larger and more complex dataset, compared to CIFAR10, containing natural images from 1000 categories. All the OOD versions of ImageNet have the same categories, but contain fewer images (ImageNetV2), sketch images (ImageNet Sketch) or images generated with added painting styles (Stylized ImageNet). Sample images from ImageNet and all the OOD versions are presented in part (c) of Fig.8.

### Network Architectures

We considered a small 3-layer convolutional network for the experiments with Colored MNIST and studied the penultimate layer, with 50 neurons, for our analyses. For CIFAR10, we considered a ResNet18 architecture, which has 512 neurons in the penultimate layer. With the purpose of generating more human-understandable explanations for a dataset containing only 10 categories, we added a fully connected layer, with 50 neurons, between the penultimate layer and the classification layer of the ResNet18 architecture, as sketched below. ImageNet-pretrained ResNet18 and ResNet50 architectures are considered for all our experiments related to ImageNet and its OOD versions.
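As a minimal sketch of the CIFAR10 architecture modification just described — assuming the standard torchvision implementation of ResNet18, with the layer sizes being the only quantities taken from the text:

```python
import torch.nn as nn
from torchvision.models import resnet18

# Minimal sketch, assuming a standard torchvision ResNet18: insert a
# 50-neuron fully connected layer between the 512-dimensional penultimate
# layer and the 10-way classification layer, so that the 50 neurons
# analyzed in the paper sit just before the classifier.
model = resnet18(num_classes=10)
model.fc = nn.Sequential(
    nn.Linear(512, 50),  # the 50 neurons whose activations are exported
    nn.ReLU(),           # only positive activations are passed to Deephys
    nn.Linear(50, 10),   # classification layer producing the final logits
)
```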
We also studied model behaviour with a convolutional vision transformer (Wu et al., 2021) to check whether similar traits are observed there. We consider the output of the last convolutional transformer block (before the classification layer) and average over the spatial dimensions to obtain an activation score for every neuron.

Figure 8: Examples of all the InD and OOD datasets considered for the experiments. (a) Different colors are added to each MNIST category and then used to train a model. The OOD versions are generated by permuting the colors (Permuted Colored MNIST), adding arbitrary colors (Arbitrary Colored MNIST), and adding a constant shift to all the original colors (Drifted Colored MNIST). (b) CIFAR10 contains natural images of 10 categories. Similar strategies were adopted to collect images for CIFAR10.2, consisting of the same categories. (c) ImageNet is a comparatively much larger and more complex dataset with natural images from 1000 categories. All the OOD versions of ImageNet have the same categories, but contain fewer images (ImageNetV2), sketch images (ImageNet Sketch) or images generated with added painting styles (Stylized ImageNet).

### Analysing the OOD Failures through Neural Activity Comparison

The different visualization-based toolsets of Deephys, which are a way to reveal the behavioral disparities of a model between InD and OOD, are mainly based on the neural activations of the model captured on all the datasets. Considering that the neurons of a deep model respond to specific artifacts from InD, their responses change based on the similarity of the features in OOD. We are not able to quantify such similarity of features across datasets directly, but we can make estimations based on the neural responses to them. Here, we analyze the activations through different quantitative measures, which are governed by our hypothesis that there are mainly two possible scenarios in which the model neurons can manifest different behaviours. These scenarios can trigger a change in the selectivity or invariance of the neurons and, in turn, degrade model performance under distribution shift. Such quantitative analyses provide a global picture of the neural response comparison between InD and OOD, which can complement the understanding of these datasets acquired through Deephys.

### Novel features in OOD

As feature detectors, the neurons respond to patterns originating from InD. They also respond when they encounter similar patterns in OOD, but the response depends on the degree of feature similarity, which can be measured by the difference in the firing rates for InD and OOD. We present a metric, '**novelty score**', that can indicate the originality found by the neurons in OOD compared to InD. We calculate the average category-wise activations for every neuron and select the highest average score, i.e., the highest average activation of the neuron for any category. Please note that the identity of the category is not important here; rather, the highest response for any category is studied. The score for OOD is then subtracted from the score for InD for every neuron, which provides the novelty score. We only consider the neurons whose subtracted score is positive, which signifies some novel information found by a neuron in OOD that is not present in InD. A density plot is drawn with these final values and presented as the novelty score for OOD with respect to InD.
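As a minimal sketch of the novelty score just described — together with the spurious score defined in the next subsection — assuming activations are available as `(num_images, num_neurons)` matrices with per-image labels, and with all function names being illustrative, the two metrics might be computed as follows:

```python
import numpy as np
from scipy.stats import spearmanr

def category_means(acts, labels, n_cat):
    """Average activation of every neuron per category: shape (n_cat, n_neurons)."""
    return np.stack([acts[labels == c].mean(axis=0) for c in range(n_cat)])

def novelty_scores(means_ind, means_ood):
    """Per-neuron novelty: highest InD category-average activation minus the
    highest OOD one; only positive values (novel information in OOD) are kept
    for the density plot."""
    diff = means_ind.max(axis=0) - means_ood.max(axis=0)
    return diff[diff > 0]

def spurious_scores(means_ind, means_ood):
    """Per-neuron spuriousness: one minus the absolute Spearman correlation
    between the InD and OOD category-wise activation profiles of the neuron."""
    return np.array([1.0 - abs(spearmanr(means_ind[:, n], means_ood[:, n])[0])
                     for n in range(means_ind.shape[1])])
```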
### Spurious features in InD

Neurons that are tuned to a spurious feature can still be activated by the same feature in OOD. When the spurious feature is associated with different categories in OOD, the neuron shows a high response to the spurious feature but also becomes selective to the other categories. We devise a metric, '**spurious score**', to capture this effect. Here we apply the Spearman correlation between the average category-wise activations of OOD and InD, calculated in the same way as for the 'novelty score'. The absolute values of the correlation scores are subtracted from unity and plotted in a density plot to obtain the 'spurious score'. The rationale of this calculation is that a lower score represents less spuriousness, i.e., neurons fire for similar categories in OOD as in InD, and the corresponding explanation can be presented for a high score.

## Author contributions

AS and XB designed research with contributions of IM; AS performed experiments with contributions of XB; MG developed the app; MG and XB prepared the documentation and the website, with contributions of AS; AS wrote the paper with contributions of all the rest; XB and TS supervised the research.

## Data availability

The code to generate Colored MNIST is contained in the 'tutorials' sub-folder of the following GitHub repository: [https://github.com/mjgroth/deephys-aio](https://github.com/mjgroth/deephys-aio). All the other datasets (both InD and OOD versions) are publicly available.

## Code availability

The website of the project is [https://deephys.org](https://deephys.org). The source code of the Deephys application is publicly available in the same GitHub repository. The code to reproduce the results reported in this paper is contained in the 'tutorials' sub-folder of the repository.

## Acknowledgements

We are grateful to Tomaso Poggio, Pawan Sinha, Serban Georgescu and Hisanao Akima for their insightful advice and warm encouragement. This work was supported by Fujitsu Limited (Contract No. 40009568).

## Conflicts of Interests Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Fujitsu Limited funded this study (Contract No. 40009568) and also participated in the study through TS. All authors declare no other competing interests.
2305.11322
Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics. The energy consumption of an SNN depends on the number of spikes exchanged between neurons over the course of the input presentation. Typically, decisions are produced after the entire input sequence has been processed. This results in latency and energy consumption levels that are fairly uniform across inputs. However, as explored in recent work, SNNs can produce an early decision when the SNN model is sufficiently "confident", adapting delay and energy consumption to the difficulty of each example. Existing techniques are based on heuristic measures of confidence that do not provide reliability guarantees, potentially exiting too early. In this paper, we introduce a novel delay-adaptive SNN-based inference methodology that, wrapping around any pre-trained SNN classifier, provides guaranteed reliability for the decisions produced at input-dependent stopping times. The approach, dubbed SpikeCP, leverages tools from conformal prediction (CP). It entails minimal complexity increase as compared to the underlying SNN, requiring only additional thresholding and counting operations at run time. SpikeCP is also extended to integrate a CP-aware training phase that targets delay performance. Variants of CP based on alternative confidence correction schemes, from Bonferroni to Simes, are explored, and extensive experiments are described using the MNIST-DVS data set, DVS128 Gesture dataset, and CIFAR-10 dataset.
Jiechen Chen, Sangwoo Park, Osvaldo Simeone
2023-05-18T22:11:04Z
http://arxiv.org/abs/2305.11322v4
# Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees

###### Abstract

Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics whose energy consumption depends on the number of spikes exchanged between neurons over the course of the input presentation. Typically, decisions are produced after the entire input sequence has been processed, resulting in latency and energy consumption levels that are fairly uniform across inputs. However, as explored in recent work, SNNs can produce an early decision when the SNN model is sufficiently "confident", adapting delay and energy consumption to the difficulty of each example. Existing techniques are based on heuristic measures of confidence that do not provide reliability guarantees, potentially exiting too early. In this paper, we introduce a novel delay-adaptive SNN-based inference methodology that, wrapping around any pre-trained SNN classifier, provides guaranteed reliability for the decisions produced at input-dependent stopping times. The approach, dubbed _SpikeCP_, leverages tools from conformal prediction (CP), and it entails a minimal complexity increase as compared to the underlying SNN, requiring only additional thresholding and counting operations at run time. SpikeCP is also extended to integrate a CP-aware training phase that targets delay performance. Variants of CP based on alternative confidence correction schemes, from Bonferroni to Simes, are explored, and extensive experiments are described using the MNIST-DVS data set.

_Index terms:_ Spiking neural networks, conformal prediction, delay adaptivity, reliability, neuromorphic computing.

## I Introduction

### _Motivation_

Spiking neural networks (SNNs) have emerged as efficient models for the processing of time series data, particularly in settings characterized by sparse inputs [1]. SNNs implement recurrent, event-driven, neural dynamics whose energy consumption depends on the number of spikes exchanged between neurons over the course of the input presentation. As shown in Fig. 1(a), an SNN-based classifier processes input time series to produce spiking signals - one for each possible class - with the spiking rate of each output signal typically quantifying the _confidence_ the model has in the corresponding label. Typically, decisions are produced after the entire input sequence has been processed, resulting in latency and energy consumption levels that are fairly uniform across inputs. The online operation of SNNs, along with their in-built adaptive measures of confidence derived from the output spikes, suggests an alternative operating principle, whereby inference latency and energy consumption are tailored to the difficulty of each example. Specifically, as proposed in [2, 3], _delay-adaptive_ SNN classifiers produce an _early decision_ when the SNN model is sufficiently confident. In practice, however, the confidence levels output by an SNN, even when adjusted with limited data as in [4], are not well _calibrated_, in the sense that they do not precisely reflect the underlying accuracy of the corresponding decisions (see Fig. 1). As a result, relying on its output confidence signals may cause the SNN to stop prematurely, failing to meet target accuracy levels. To illustrate this problem, Fig. 1(b) shows the test accuracy and confidence level (averaged over test inputs) that are produced by a pre-trained SNN for an image classification task (on the MNIST-DVS dataset [5]) as a function of time \(t\).
It is observed that the SNN's classification decisions tend to be first _under-confident_ and then _over-confident_ with respect to the decision's ground-truth, unknown, test accuracy. Therefore, using the SNN's confidence levels to decide when to make a decision generally causes a _reliability gap_ between the true test accuracy and the target accuracy. This problem can be mitigated by relying on _calibration data_ to re-calibrate the SNN's confidence level, but only if one has enough calibration data [4] (see Sec. VI for experimental evidence, e.g., in Fig. 4).

### _SpikeCP_

In this paper, we introduce a novel delay-adaptive SNN solution that (_i_) provides _guaranteed_ reliability - and hence a zero (or non-positive) reliability gap - while (_ii_) supporting a tunable trade-off between latency and inference energy, on the one hand, and informativeness of the decision, on the other hand. The proposed method, referred to as _SpikeCP_, builds on _conformal prediction_ (CP), a statistical framework for calibration that is currently experiencing a surge of interest in the machine learning community [6, 7]. SpikeCP uses local or global information produced by the output layer of the SNN model (see Fig. 1(a)), along with _calibration data_, to produce at each time \(t\) a _subset of labels_ as its decision. By the properties of CP, the predictive set produced by SpikeCP includes the ground-truth label with probability no lower than a target accuracy level at any stopping time. A stopping decision is then made by SpikeCP not based on a reliability requirement - which is always satisfied - but rather based on the desired size of the predicted set. As illustrated in Fig. 1, the desired set size provides a novel degree of freedom that can be used to control the trade-off between latency, or energy consumption, and _informativeness_ of the decision, as measured by the set size. SpikeCP wraps around any pre-trained SNN classifier, providing guaranteed reliability for the decisions produced at input-dependent stopping times. It does so with a minimal complexity increase as compared to the underlying SNN, requiring only additional thresholding and counting operations at run time. At a technical level, SpikeCP applies a Bonferroni correction of the target accuracy that scales with the number of possible stopping times in order to ensure a zero (or non-positive) reliability gap. Heuristics based on Simes correction [8, 9] are also explored via numerical results. The approach is finally extended to integrate a CP-aware training phase that targets minimization of the delay via a reduction of the average predicted set size. Unlike conventional training methods for SNNs [10], the proposed method adds an explicit regularizer that controls the average number of labels included in the predicted set.

### _Related Work_

_Training SNNs._ Typical training algorithms for SNNs are based on direct conversions from trained artificial neural networks [2], on heuristic local rules such as spike-timing-dependent plasticity (STDP) [11], or on approximations of backpropagation-through-time that simplify credit assignment and address the non-differentiability of the spiking mechanism [1, 12]. Another approach that targets the direct training of SNNs is based on modelling the spiking mechanism as a stochastic process, which enables the use of likelihood-based methods [13], as well as of Bayesian rules [14]. SpikeCP works as a wrapper around any training scheme.
_Calibration and delay-adaptivity for SNNs._ Calibration is a subject of extensive research for artificial neural networks [15, 16] but is still an underexplored subject for SNNs. SNN calibration is carried out by leveraging a pre-trained ANN in [4], while [14] applies Bayesian learning to reduce the calibration error. As discussed in the previous sections, adaptivity for rate decoding was studied in [2, 3]. Other forms of adaptivity may leverage _temporal decoding_, whereby, for instance, a decision is made as soon as one output neuron spikes [17].

_Early exit in conventional deep learning._ The idea of delay-adaptivity in SNNs is related to that of _early-exit_ decisions in feedforward neural networks. In neural networks with an early-exit option, confidence levels are evaluated at intermediate layers, and a decision is made when the confidence level passes a threshold [18, 19, 20]. The role of calibration for early-exit neural networks was studied in [21].

_Prediction cascades._ Another related concept is that of prediction cascades, which apply a sequence of classifiers, ranging from light-weight to computationally expensive [22], to a static input. The goal is to apply the more expensive classifiers only when the difficulty of the input requires it. The application of CP to prediction cascades was investigated in [8].

Fig. 1: (a) SNN \(C\)-class classification model: At time \(t\), real-valued discrete-time time-series data \(\mathbf{x}^{t}\) are fed to the input neurons of an SNN and processed by internal spiking neurons, whose spikes feed \(C\) readout neurons. Each output neuron \(c\in\{1,...,C\}\) evaluates the _local spike count_ variable \(r_{c}(\mathbf{x}^{t})\) by accumulating the number of spikes it produces. The spike rates may be aggregated across all output neurons to produce the _predictive probability vector_ \(\{p_{c}(\mathbf{x}^{t})\}_{c=1}^{C}\). (b) Evolution of confidence and accuracy as a function of time \(t\) for a conventional pre-trained SNN. As illustrated, SNN classifiers tend to be first under-confident and then over-confident with respect to the true accuracy, which may cause a positive _reliability gap_, i.e., a shortfall in accuracy, when the confidence level is used as an inference-stopping criterion. (c) Evolution of the (test-averaged) predicted set size (normalized by the number of classes \(C=10\)) and of the set accuracy as a function of time \(t\) for the same pre-trained SNN when used in conjunction with the proposed SpikeCP method. The set accuracy is the probability that the true label lies inside the predicted set. It is observed that, irrespective of the stopping time, the set accuracy is always guaranteed to exceed the target accuracy level. Therefore, the inference-stopping criterion can be designed to control the trade-off between latency, and hence also energy consumption, and the size of the predicted set.

_CP-aware training._ CP provides a general methodology to turn a pre-trained probabilistic predictor into a reliable set predictor [7, 23]. Applications of CP range from healthcare [24] to control [25], large language models [26], and wireless systems [23]. References [27, 28, 29] have observed that the efficiency of the set-valued predictions produced via CP can be improved by training the underlying predictor in a _CP-aware_ manner that directly targets the predicted set size. Specifically, the authors of [27] propose to minimize a loss function that penalizes large prediction set sizes when used in conjunction with CP.
That work explored strategies to differentiate through CP during training, with the goal of training the model end-to-end together with the conformal wrapper. Related work in [29] has leveraged differentiation through CP to design meta-learning strategies targeting the predictive set size (see also [30]). To the best of our knowledge, no prior work has applied the idea of CP-aware training to the design of delay-adaptive classifiers.

### _Main Contributions and Paper Organization_

The main contributions of this paper are summarized as follows.

\(\bullet\) We introduce _SpikeCP_, a novel inference framework that turns any pre-trained SNN into a reliable and delay-adaptive set predictor, irrespective of the quality of the pre-trained SNN and of the number of calibration points. The performance of the pre-trained SNN determines the achievable trade-off curve between latency and energy efficiency, on the one hand, and informativeness of the decision, as measured by the set size, on the other. SpikeCP requires minimal changes to the underlying SNN, adding only counting and thresholding operations. Furthermore, it can be implemented using different measures of confidence at the output of the SNN, such as spiking rates and softmax-modulated signals.

\(\bullet\) _Theoretical guarantees_ are proved by leveraging a modification of the confidence levels based on Bonferroni correction [31]. Heuristic alternatives based on Simes correction are also considered [32].

\(\bullet\) In order to improve the performance in terms of attainable trade-offs between delay/energy consumption and predictive set sizes, we introduce a _SpikeCP-aware training_ strategy that directly targets the performance of the SNN when used in conjunction with SpikeCP. The approach is based on regularizing the classical cross-entropy loss [33, 34] with a differentiable approximation of the predicted set size.

\(\bullet\) Extensive numerical results are provided that demonstrate the advantages of the proposed SpikeCP algorithms over conventional point predictors in terms of reliability, latency, and energy consumption metrics.

The remainder of the paper is organized as follows. Section II presents the multi-class classification problem via SNNs. Adaptive point classification schemes are reviewed for reference in Section III. The SpikeCP algorithm is proposed in Section IV, while Section V presents a training strategy that directly targets the performance of the SNN when used in conjunction with SpikeCP. Experimental settings and results are described in Section VI. Finally, Section VII concludes the paper.

## II Problem Definition

In this paper, we consider the problem of efficiently and reliably classifying time series data via SNNs by integrating adaptive-latency decision rules [2, 3] with CP [6, 7]. The proposed scheme, SpikeCP, produces _adaptive SNN-based set classifiers_ with _formal reliability guarantees_. In this section, we start by defining the problem under study, along with the main performance metrics of interest, namely reliability, latency, and inference energy. We also review the conventional model of SNNs adopted in this study, which is based on leaky integrate-and-fire (LIF) neurons [35].

### _Multi-Class Time Series Classification_

We focus on the problem of classifying real-valued vector time series data \(\mathbf{x}=(\mathbf{x}_{1},...,\mathbf{x}_{T})\), with \(N\times 1\) vector samples \(\mathbf{x}_{t}\) over time index \(t=1,...,T\), into \(C\) classes, using _dynamic_ classifiers implemented via SNNs.
As illustrated in Fig. 1(a), the SNN model has \(N\) input neurons, an arbitrary number of internal spiking neurons, and \(C\) output neurons in the readout layer. Each output neuron is associated with one of the \(C\) class labels in the set \(\mathcal{C}=\{1,...,C\}\). At each time \(t\), the SNN takes as input the real-valued vector \(\mathbf{x}_{t}\), and produces sequentially the binary, "spiking", output vector \(\mathbf{y}_{t}=[y_{t,1},...,y_{t,C}]\) of size \(C\), with \(y_{t,c}\in\{0,1\}\), as a function of the samples \[\mathbf{x}^{t}=(\mathbf{x}_{1},...,\mathbf{x}_{t}), \tag{1}\] observed so far. Accordingly, if \(y_{t,c}=1\), output neuron \(c\in\mathcal{C}\) emits a spike, while, if \(y_{t,c}=0\), output neuron \(c\) is silent. Using conventional _rate decoding_, each output neuron \(c\in\mathcal{C}\) maintains the sum of the spikes evaluated so far, i.e., \[r_{c}(\mathbf{x}^{t})=\sum_{t^{\prime}=1}^{t}y_{t^{\prime},c}, \tag{2}\] along the time axis \(t=1,...,T\). Each _spike count_ variable \(r_{c}(\mathbf{x}^{t})\) may be used as an estimate of the degree of confidence of the SNN in class \(c\) being the correct one. In order to obtain predictive probabilities, the spike count vector \(\mathbf{r}(\mathbf{x}^{t})=[r_{1}(\mathbf{x}^{t}),...,r_{C}(\mathbf{x}^{t})]\) can be passed through a softmax function to yield a probability for class \(c\) as \(p_{c}(\mathbf{x}^{t})=e^{r_{c}(\mathbf{x}^{t})}/\sum_{c^{\prime}=1}^{C}e^{r_{c^{\prime}}(\mathbf{x}^{t})}\) (see Fig. 1(a)). The resulting _predictive probability vector_ \[\mathbf{p}(\mathbf{x}^{t})=[p_{1}(\mathbf{x}^{t}),...,p_{C}(\mathbf{x}^{t})], \tag{3}\] quantifies the _normalized_ confidence levels of the classifier in each class \(c\) given the observations up to time \(t\). We emphasize that evaluating the vector (3) requires coordination among all output neurons, since each probability value \(p_{c}(\mathbf{x}^{t})\) depends on the spike counts of all output spiking neurons. A classifier is said to be _well calibrated_ if the confidence vector \(\mathbf{p}(\mathbf{x}^{t})\) provides a close approximation of the true, test, accuracy of each decision \(c\in\mathcal{C}\). Machine learning models based on deep learning are well known to be typically _over-confident_, resulting in confidence vectors \(\mathbf{p}(\mathbf{x}^{t})\) that are excessively skewed towards a single class \(c\), dependent on the input \(\mathbf{x}^{t}\) [15, 36]. As discussed in Sec. I, SNN models also tend to provide over-confident decisions as time \(t\) increases. Following the conventional supervised learning formulation of the problem, multi-class time series classification data consist of pairs \((\mathbf{x},c)\) of an input sequence \(\mathbf{x}\) and a true class index \(c\in\mathcal{C}\). All data points are generated from a _ground-truth distribution_ \(p(\mathbf{x},c)\) in an independent and identically distributed (i.i.d.) manner. We focus on _pre-trained_ SNN classification models, on which we make no assumptions in terms of accuracy or calibration. Furthermore, we assume the availability of a, typically small, _calibration data set_ \[\mathcal{D}^{\mathrm{cal}}=\{\mathbf{z}[i]=(\mathbf{x}[i],c[i])\}_{i=1}^{|\mathcal{D}^{\mathrm{cal}}|}. \tag{4}\] In practice, a new calibration data set may be produced periodically at test time to be reused across multiple test points \((\mathbf{x},c)\) [2, 6, 7].
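As a minimal sketch of the rate-decoding read-out in (2)-(3) — assuming the binary output spikes have been collected as a \(T\times C\) array, with the function name being illustrative:

```python
import numpy as np

def rate_decode(y: np.ndarray):
    """Rate decoding for an SNN classifier, following (2)-(3).

    y: binary output spikes of shape (T, C), with y[t, c] = 1 if output
       neuron c spikes at time step t + 1.
    Returns the spike counts r(x^t) and predictive probabilities p(x^t)
    for every prefix length t = 1, ..., T, each of shape (T, C).
    """
    r = np.cumsum(y, axis=0)              # spike counts r_c(x^t), eq. (2)
    z = r - r.max(axis=1, keepdims=True)  # numerically stabilized logits
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax, eq. (3)
    return r, p
```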
### _Taxonomy of SNN Classifiers_

As detailed in Table I and Fig. 2, we distinguish SNN classifiers along two axes, namely adaptivity and decision type.

_Adaptivity_: As shown in Fig. 2(a) and Fig. 2(c), a _non-adaptive_ classifier, having observed all the \(T\) samples of the input sequence \(\mathbf{x}\), makes a decision on the basis of the spike count vector \(\mathbf{r}(\mathbf{x}^{T})=\mathbf{r}(\mathbf{x})\) or of the predictive probability vector \(\mathbf{p}(\mathbf{x}^{T})=\mathbf{p}(\mathbf{x})\). In contrast, as seen in Fig. 2(b) and Fig. 2(d), an _adaptive_ classifier allows the time \(T_{s}(\mathbf{x})\) at which a classification decision is produced to be adapted to the difficulty of the input \(\mathbf{x}\). For any given input \(\mathbf{x}\), the _stopping time_ \(T_{s}(\mathbf{x})\) and the final decision produced at time \(T_{s}(\mathbf{x})\) depend either on the spike count vector \(\mathbf{r}(\mathbf{x}^{t})\) or on the predictive distribution vector \(\mathbf{p}(\mathbf{x}^{t})\) produced by the SNN classifier after having observed the first \(t=T_{s}(\mathbf{x})\) input samples \(\mathbf{x}^{t}\) in (1).

_Decision type_: As illustrated in Fig. 2(a) and Fig. 2(b), for any given input \(\mathbf{x}\), a conventional _point classifier_ produces as output a single estimate \(\hat{c}(\mathbf{x})\) of the label \(c\) in a non-adaptive (Fig. 2(a)) or adaptive (Fig. 2(b)) way. In contrast, as seen in Fig. 2(c) and Fig. 2(d), a _set classifier_ outputs a decision in the form of a _subset_ \(\Gamma(\mathbf{x})\subseteq\mathcal{C}\) of the \(C\) classes [6, 7], with the decision being non-adaptive (Fig. 2(c)) or adaptive (Fig. 2(d)). The _predicted set_ \(\Gamma(\mathbf{x})\) describes the classifier's estimate of the most likely candidate labels for input \(\mathbf{x}\). Accordingly, a predicted set \(\Gamma(\mathbf{x})\) with a larger cardinality \(|\Gamma(\mathbf{x})|\) is less _informative_ than one with a smaller (but non-zero) cardinality.

### _Reliability, Latency, and Inference Energy_

In this work, we study the performance of adaptive classifiers on the basis of the following metrics.

_Reliability_: Given a _target accuracy level_ \(p_{\mathrm{targ}}\in(0,1)\), an adaptive _point_ classifier is said to be _reliable_ if the accuracy of its decision is no smaller than the target level \(p_{\mathrm{targ}}\). This condition is stated as \[\Pr\bigl{(}c=\hat{c}(\mathbf{x})\bigr{)}\geq p_{\mathrm{targ}},\] \[\text{i.e., }\Delta R=p_{\mathrm{targ}}-\Pr\bigl{(}c=\hat{c}(\mathbf{x})\bigr{)}\leq 0, \tag{5}\] where \(\hat{c}(\mathbf{x})\) is the decision made by the adaptive point classifier at time \(T_{s}(\mathbf{x})\) (see Fig. 2(b)). In (5), we have defined the _reliability gap_ \(\Delta R\), which is positive for _unreliable_ classifiers and non-positive for _reliable_ ones (see Fig. 1(b)). In a similar manner, an adaptive _set_ predictor \(\Gamma(\mathbf{x})\) is reliable at the target accuracy level \(p_{\mathrm{targ}}\) if the true class \(c\) is included in the predicted set \(\Gamma(\mathbf{x})\), produced at the stopping time \(T_{s}(\mathbf{x})\), with probability no smaller than the desired accuracy level \(p_{\mathrm{targ}}\). This is written as \[\Pr\bigl{(}c\in\Gamma(\mathbf{x})\bigr{)}\geq p_{\mathrm{targ}},\] \[\text{i.e., }\Delta R=p_{\mathrm{targ}}-\Pr\bigl{(}c\in\Gamma(\mathbf{x})\bigr{)}\leq 0, \tag{6}\] where \(\Gamma(\mathbf{x})\) is the decision made by the adaptive set classifier at time \(T_{s}(\mathbf{x})\) (see Fig. 2(d)). The probabilities in (5) and (6) are taken over the distribution of the test data point \((\mathbf{x},c)\) and of the calibration data (4).

_Latency_: Latency is defined as the average stopping time \(\mathbb{E}[T_{s}(\mathbf{x})]\), where the expectation is taken over the same distribution as for (5) and (6).

_Inference energy_: As a proxy for the energy consumption of the SNN classifier at inference time, we follow the standard approach also adopted in, e.g., [33, 38], of counting the average number of spikes, denoted as \(\mathbb{E}[S(\mathbf{x})]\), that are produced internally by the SNN classifier prior to producing a decision.
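As a minimal sketch of how these three metrics might be estimated empirically over a held-out test set — all array names here are illustrative assumptions:

```python
import numpy as np

def evaluate_adaptive_classifier(hits, stop_times, spike_counts, p_targ=0.9):
    """Empirical estimates of the reliability gap in (5)-(6), of the latency,
    and of the inference energy, from per-test-example results.

    hits:         boolean array, True if the decision was correct (point
                  classifier) or if the true label was in the set (set classifier)
    stop_times:   stopping time T_s(x) for each test example
    spike_counts: number of internal spikes S(x) emitted before stopping
    """
    reliability_gap = p_targ - hits.mean()  # Delta R; non-positive when reliable
    latency = stop_times.mean()             # estimate of E[T_s(x)]
    energy = spike_counts.mean()            # estimate of E[S(x)]
    return reliability_gap, latency, energy
```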
### _Spiking Neural Network Model_

In this work, we adopt the standard LIF neural model known as the _spike response model_ (SRM) [34]. Consider a set of spiking neurons indexed via integers in the set \(\mathcal{K}\). Each spiking neuron \(k\in\mathcal{K}\) outputs a binary signal \(b_{k,t}\in\{0,1\}\) at time \(t=1,...,T\), with \(b_{k,t}=1\) representing the firing of a spike and \(b_{k,t}=0\) an idle neuron at time \(t\). It receives inputs from a subset of neurons \(\mathcal{N}_{k}\) through directed links, known as _synapses_. Accordingly, the neurons in set \(\mathcal{N}_{k}\) are referred to as _pre-synaptic_ with respect to neuron \(k\), while neuron \(k\) is said to be _post-synaptic_ for any neuron \(j\in\mathcal{N}_{k}\). For a fully-connected layered SNN, as assumed in the experiments of this paper, the set of pre-synaptic neurons, \(\mathcal{N}_{k}\), for a neuron \(k\) in a given layer consists of the entire set of indices of the neurons in the previous layer. Following the SRM, each neuron \(k\) maintains an internal analog state variable \(o_{k,t}\), known as the _membrane potential_, over time \(t\). The membrane potential \(o_{k,t}\) evolves as the sum of the responses of the synapses to the incoming spikes produced by the pre-synaptic neurons, as well as of the response of the neuron itself to the spikes it produces. Mathematically, the evolution of the membrane potential is given as \[o_{k,t}=\sum_{j\in\mathcal{N}_{k}}w_{k,j}\cdot(\alpha_{t}*b_{j,t})+\beta_{t}*b_{k,t}, \tag{7}\] where \(w_{k,j}\) is a learnable synaptic weight between neuron \(j\in\mathcal{N}_{k}\) and neuron \(k\); \(\alpha_{t}\) represents a filter applied to the spiking signals produced by each pre-synaptic neuron; \(\beta_{t}\) is the filter applied to the neuron's own spiking output; and "\(*\)" denotes the convolution operator. Typical choices for the filters include the first-order feedback filter \(\beta_{t}=\exp(-t/\tau_{\mathrm{ref}})\), and the second-order synaptic filter \(\alpha_{t}=\exp(-t/\tau_{\mathrm{mem}})-\exp(-t/\tau_{\mathrm{syn}})\), for \(t=1,2,...\), with finite positive constants \(\tau_{\mathrm{ref}}\), \(\tau_{\mathrm{mem}}\), and \(\tau_{\mathrm{syn}}\) [12]. Each neuron \(k\) outputs a spike at time step \(t\) whenever its membrane potential crosses a fixed threshold \(\vartheta\), i.e., \[b_{k,t}=\Theta(o_{k,t}-\vartheta), \tag{8}\] where \(\Theta(\cdot)\) is the Heaviside step function. The synaptic weights \(w_{k,j}\) in (7) between any neuron \(k\in\mathcal{K}\) and the corresponding pre-synaptic neurons \(j\in\mathcal{N}_{k}\) constitute the model parameters to be optimized during training. Accordingly, we write as \(\mathbf{\theta}=\{\{w_{k,j}\}_{j\in\mathcal{N}_{k}}\}_{k\in\mathcal{K}}\) the vector of model parameters of the SNN.
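As a minimal sketch of the SRM dynamics in (7)-(8) for a single fully connected layer — following the equations literally (including the sign of the feedback term as written in (7)), with a direct \(O(T^{2})\) implementation of the causal convolutions and with illustrative time constants:

```python
import numpy as np

def simulate_srm_layer(w, spikes_in, theta=1.0,
                       tau_mem=10.0, tau_syn=5.0, tau_ref=10.0):
    """Minimal sketch of one fully connected SRM layer per (7)-(8).

    w:         synaptic weights w_{k,j}, shape (n_out, n_in)
    spikes_in: binary pre-synaptic spikes b_{j,t}, shape (T, n_in)
    Returns binary output spikes b_{k,t}, shape (T, n_out).
    """
    T, n_in = spikes_in.shape
    n_out = w.shape[0]
    t_ax = np.arange(1, T + 1)
    alpha = np.exp(-t_ax / tau_mem) - np.exp(-t_ax / tau_syn)  # synaptic filter
    beta = np.exp(-t_ax / tau_ref)                             # feedback filter
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        syn = np.zeros(n_in)
        ref = np.zeros(n_out)
        for d in range(t):                 # causal convolutions in eq. (7)
            syn += alpha[d] * spikes_in[t - 1 - d]
            ref += beta[d] * spikes_out[t - 1 - d]
        o = w @ syn + ref                  # membrane potential o_{k,t}, eq. (7)
        spikes_out[t] = (o >= theta).astype(float)  # threshold crossing, eq. (8)
    return spikes_out
```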
## III Adaptive Point Classification

In this section, we review, for reference, the adaptive point classifiers introduced in [2] and [3], which are referred to as _dynamic-confidence SNN_ (DC-SNN) and _stopping-policy SNN_ (SP-SNN), respectively.

_DC-SNN_ [2]: As illustrated in Fig. 2(b), DC-SNN produces a decision at the first time \(t\) for which the maximum confidence level across all possible classes is larger than a fixed _target confidence level_ \(p_{\text{th}}\in(0,1)\). Accordingly, the stopping time is given by \[T_{s}(\mathbf{x})=\min_{t\in\{1,...,T\}}t\ \ \text{s.t.}\ \ \max_{c\in\mathcal{C}}p_{c}(\mathbf{x}^{t})\geq p_{\text{th}}, \tag{9}\] if there is a time \(t<T\) that satisfies the constraint; and \(T_{s}(\mathbf{x})=T\) otherwise. The rationale for this approach is that, by (9), if \(T_{s}(\mathbf{x})<T\), the classifier has a confidence level no smaller than \(p_{\text{th}}\) on the decision \[\hat{c}(\mathbf{x})=\arg\max_{c\in\mathcal{C}}p_{c}(\mathbf{x}^{T_{s}(\mathbf{x})}). \tag{10}\] If the SNN classifier is _well calibrated_, the confidence level coincides with the true accuracy of the decision given by the class \(\arg\max_{c\in\mathcal{C}}p_{c}(\mathbf{x}^{t})\) at all times \(t\). Therefore, setting the target confidence level \(p_{\text{th}}\) to be equal to the target accuracy \(p_{\mathrm{targ}}\), i.e., \(p_{\text{th}}=p_{\mathrm{targ}}\), guarantees a zero, or negative, reliability gap for the adaptive decision (10) when \(T_{s}(\mathbf{x})<T\). However, as discussed in Sec. I, the assumption of calibration is typically not valid (see Fig. 1(b)). To address this problem, reference [2] introduced a solution based on the use of a calibration data set. Specifically, DC-SNN evaluates the empirical accuracy of the decision (10), i.e., \(\hat{\mathcal{A}}^{\text{cal}}(p_{\text{th}})=|\mathcal{D}^{\text{cal}}|^{-1}\sum_{i=1}^{|\mathcal{D}^{\text{cal}}|}\mathbb{1}\left(\hat{c}(\mathbf{x}[i])=c[i]\right)\), where \(\mathbb{1}(\cdot)\) is the indicator function, for a grid of possible values of the target confidence level \(p_{\text{th}}\). Then, it chooses the minimum value \(p_{\text{th}}\) that ensures the inequality \(\hat{\mathcal{A}}^{\text{cal}}(p_{\text{th}})\geq p_{\mathrm{targ}}\), so that the calibration accuracy exceeds the target accuracy level \(p_{\mathrm{targ}}\); or the smallest value \(p_{\text{th}}\) that maximizes \(\hat{\mathcal{A}}^{\text{cal}}(p_{\text{th}})\) if the constraint \(\hat{\mathcal{A}}^{\text{cal}}(p_{\text{th}})\geq p_{\mathrm{targ}}\) cannot be met.

_SP-SNN_ [3]: SP-SNN defines a parameterized _policy_ \(\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})\), implemented using a separate artificial neural network (ANN), that maps the input sequence \(\mathbf{x}\) to a probability distribution \(\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})=[\pi_{1}(\mathbf{x}|\mathbf{\phi}),...,\pi_{T}(\mathbf{x}|\mathbf{\phi})]\) over the \(T\) time steps, where \(\mathbf{\phi}\) is the trainable parameter vector of the ANN. Accordingly, given an input \(\mathbf{x}\), the stopping time is drawn using the policy \(\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})\) as \(T_{s}(\mathbf{x})\sim\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})\). Unlike DC-SNN, which uses a pre-trained SNN, the policy in SP-SNN is optimized jointly with the SNN based on an available training data set \[\mathcal{D}^{\text{tr}}=\{(\mathbf{x}^{\text{tr}}[i],c^{\text{tr}}[i])\}_{i=1}^{|\mathcal{D}^{\text{tr}}|} \tag{11}\] of \(|\mathcal{D}^{\text{tr}}|\) examples, whose data points are i.i.d. as for the calibration data set (4) and for the test data. Furthermore, unlike DC-SNN, SP-SNN does not make use of calibration data. Optimization in SP-SNN targets an objective function that depends on a combination of latency and accuracy.
Fig. 2: (a) A _non-adaptive point_ classifier outputs a point decision \(\hat{c}(\mathbf{x})\) after having observed the entire time series \(\mathbf{x}\). (b) An _adaptive point_ classifier stops when the confidence level of the classifier passes a given threshold \(p_{\text{th}}\), producing a classification decision at an input-dependent time \(T_{s}(\mathbf{x})\). (c) A _non-adaptive set_ classifier produces a predicted set \(\Gamma(\mathbf{x})\) consisting of a subset of the class labels after having observed the entire time series \(\mathbf{x}\). (d) The _adaptive set_ classifiers presented in this work stop at the earliest time \(T_{s}(\mathbf{x})\) when the predicted set \(\Gamma(\mathbf{x}^{T_{s}(\mathbf{x})})\) is sufficiently informative, in the sense that its cardinality is below a given threshold \(I_{\mathrm{th}}\) (in the figure we set \(I_{\mathrm{th}}=2\)). The proposed SpikeCP method can guarantee that the predicted set \(\Gamma(\mathbf{x})=\Gamma(\mathbf{x}^{T_{s}(\mathbf{x})})\) at the stopping time \(T_{s}(\mathbf{x})\) includes the true label with probability no smaller than the target probability \(p_{\mathrm{targ}}\).

To be specific, given a training example \((\mathbf{x},c)\in\mathcal{D}^{\mathrm{tr}}\), SP-SNN takes an action \(T_{s}(\mathbf{x})\) derived from the policy, from which a _reward_ \[R\big{(}T_{s}(\mathbf{x})\big{)}=\begin{cases}1/2^{T_{s}(\mathbf{x})},&\hat{c}(\mathbf{x})=c,\\ -\zeta,&\text{otherwise},\end{cases} \tag{12}\] is provided to SP-SNN to optimize the policy ANN, where \(\zeta\) is a positive constant. Accordingly, if the prediction is correct, i.e., if \(\hat{c}(\mathbf{x})=c\), the reward (12) favors lower latencies by assigning a larger reward to a policy that produces a decision at an earlier time \(T_{s}(\mathbf{x})\). Conversely, if the prediction is wrong, a penalty \(\zeta\) is applied. Accuracy in SP-SNN is accounted for via the standard _cross-entropy loss_. For an example \((\mathbf{x},c)\) at stopping time \(T_{s}(\mathbf{x})\), this is defined as \[L(\mathbf{x}^{T_{s}(\mathbf{x})})=-\log p_{c}(\mathbf{x}^{T_{s}(\mathbf{x})}), \tag{13}\] where the probability \(p_{c}(\mathbf{x}^{T_{s}(\mathbf{x})})\) is defined in (3). Accordingly, SP-SNN jointly optimizes the SNN parameters \(\mathbf{\theta}\) (see Sec. VI) and the policy network parameters \(\mathbf{\phi}\) by addressing the problem \[\min_{\mathbf{\theta},\mathbf{\phi}}\sum_{(\mathbf{x},c)\in\mathcal{D}^{\mathrm{tr}}}\mathbb{E}[-R\big{(}T_{s}(\mathbf{x})\big{)}+L(\mathbf{x}^{T_{s}(\mathbf{x})})], \tag{14}\] where the expectation is taken over the probability distribution \(\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})\). The problem is tackled via an alternating application of reinforcement learning for the optimization of the parameters \(\mathbf{\phi}\) and of supervised learning for the optimization of the parameters \(\mathbf{\theta}\).

## IV SpikeCP: Reliable Adaptive Set Classification

The adaptive point classifiers reviewed in the previous section are generally characterized by a positive reliability gap (see Fig. 1(b)), unless the underlying SNN classifier is well calibrated or unless the calibration data set is large enough to ensure a reliable estimate of the true accuracy. In this section, we introduce _SpikeCP_, a novel inference methodology for adaptive classification that wraps around any pre-trained SNN model, guaranteeing the reliability requirement (6) - and hence a zero, or negative, reliability gap - irrespective of the quality of the SNN classifier and of the amount of calibration data.
In the next section we discuss how to potentially improve the performance of SpikeCP by training tailored SNN models.

### _Stopping Time_

SpikeCP pre-determines a subset of possible stopping times, referred to as _checkpoints_, in a set \(\mathcal{T}_{s}\subseteq\{1,...,T\}\). The set \(\mathcal{T}_{s}\) always includes the last time \(T\), and adaptivity is only possible if the cardinality of the set \(\mathcal{T}_{s}\) is strictly larger than one. At each time \(t\in\mathcal{T}_{s}\), using the local spike count variables \(\mathbf{r}(\mathbf{x}^{t})\) or the global predictive probabilities \(\mathbf{p}(\mathbf{x}^{t})\), SpikeCP produces a candidate predicted set \(\Gamma(\mathbf{x}^{t})\subseteq\mathcal{C}\). Then, as illustrated in Fig. 2(d), the cardinality \(|\Gamma(\mathbf{x}^{t})|\) of the candidate predicted set \(\Gamma(\mathbf{x}^{t})\) is compared with a threshold \(I_{\mathrm{th}}\). If we have the inequality \[|\Gamma(\mathbf{x}^{t})|\leq I_{\mathrm{th}}, \tag{15}\] the predicted set is deemed to be sufficiently _informative_, and SpikeCP stops processing the input to produce the set \(\Gamma(\mathbf{x}^{t})\) as the final decision \(\Gamma(\mathbf{x})\). As we detail next, and as illustrated in Fig. 1(c), the candidate predicted sets \(\Gamma(\mathbf{x}^{t})\) are constructed in such a way as to ensure a non-positive reliability gap simultaneously for _all_ checkpoints, and hence also at the stopping time. The overall procedure of SpikeCP is summarized in Algorithm 1. To construct the candidate predicted set \(\Gamma(\mathbf{x}^{t})\) at a checkpoint \(t\in\mathcal{T}_{s}\), SpikeCP follows the _split_, or _validation-based_, CP procedure proposed in [6] and reviewed in [7, 39]. Accordingly, using the local rate counts \(\mathbf{r}(\mathbf{x}^{t})\) or the global probabilities \(\mathbf{p}(\mathbf{x}^{t})\), SpikeCP produces a so-called _nonconformity (NC) score_ vector \(\mathbf{s}(\mathbf{x}^{t})=[s_{1}(\mathbf{x}^{t}),...,s_{C}(\mathbf{x}^{t})]\). Each entry \(s_{c}(\mathbf{x}^{t})\) of this vector is a measure of the _lack of confidence_ of the SNN classifier in label \(c\) given the input \(\mathbf{x}^{t}\). The candidate predicted set \(\Gamma(\mathbf{x}^{t})\) is then obtained by including all labels \(c\in\mathcal{C}\) whose NC score \(s_{c}(\mathbf{x}^{t})\) is no larger than a threshold \(s_{\mathrm{th}}^{t}\), i.e., \[\Gamma(\mathbf{x}^{t})=\{c\in\mathcal{C}:s_{c}(\mathbf{x}^{t})\leq s_{\mathrm{th}}^{t}\}. \tag{16}\] As described in Sec. IV-B, the threshold \(s_{\mathrm{th}}^{t}\) is evaluated as a function of the target accuracy level \(p_{\mathrm{targ}}\), of the calibration set \(\mathcal{D}^{\mathrm{cal}}\), and of the number of checkpoints \(|\mathcal{T}_{s}|\). We consider two NC scores, one locally computable at the output neurons and one requiring coordination among the output neurons. The _local NC score_ is defined as \[s_{c}(\mathbf{x}^{t})=t-r_{c}(\mathbf{x}^{t}). \tag{17}\] Intuitively, class \(c\) is assigned a lower NC score (17) - and hence a higher degree of confidence - if the spike count variable \(r_{c}(\mathbf{x}^{t})\) is larger. In contrast, the _global NC score_ is given by the standard _log-loss_ \[s_{c}(\mathbf{x}^{t})=-\log p_{c}(\mathbf{x}^{t}). \tag{18}\]
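As a minimal sketch of the run-time operation of SpikeCP — the NC scores (17)-(18), the set construction (16), and the stopping rule (15) — assuming the per-checkpoint thresholds have already been calibrated as described in the next subsection, and with all names being illustrative:

```python
import numpy as np

def nc_scores_local(r_t: np.ndarray, t: int) -> np.ndarray:
    """Local NC scores (17): s_c = t - r_c(x^t), from the spike counts."""
    return t - r_t

def nc_scores_global(p_t: np.ndarray) -> np.ndarray:
    """Global NC scores (18): the log-loss s_c = -log p_c(x^t)."""
    return -np.log(p_t)

def spikecp_run(scores_per_checkpoint, thresholds, checkpoints, I_th):
    """Build the predicted set (16) at each checkpoint and stop as soon as it
    is sufficiently informative per (15); otherwise stop at the final one."""
    for t, s_t, s_th in zip(checkpoints, scores_per_checkpoint, thresholds):
        gamma = np.flatnonzero(s_t <= s_th)  # predicted set Gamma(x^t), eq. (16)
        if len(gamma) <= I_th:               # informativeness test, eq. (15)
            return t, gamma
    return checkpoints[-1], gamma            # fall back to the last checkpoint
```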
### _Evaluation of the Threshold_

As we detail in this subsection, the threshold \(s_{\mathrm{th}}^{t}\) in (16) is evaluated based on the calibration data set \(\mathcal{D}^{\mathrm{cal}}\) with the goal of ensuring the reliability condition (6) for a target accuracy level \(p_{\mathrm{targ}}\). The general methodology follows CP, with the important caveat that, in order to ensure a non-positive reliability gap simultaneously at all checkpoints, a form of _Bonferroni correction_ is applied. Alternative, heuristic, corrections are also described at the end of this section. Let us define as \(1-\alpha\), with \(\alpha\in(0,1)\), an auxiliary _per-checkpoint accuracy level_. Suppose that we can guarantee the _per-checkpoint reliability condition_ \[\Pr(c\in\Gamma(\mathbf{x}^{t}))\geq 1-\alpha \tag{19}\] for all checkpoints \(t\in\mathcal{T}_{s}\). In (19), the probability is taken over the distribution of the test and calibration data. We will see below that this condition can be guaranteed by leveraging the toolbox of CP. Then, by the union bound, we also have the reliability condition \[\Pr(c\in\Gamma(\mathbf{x}^{t})\text{ for all }t\in\mathcal{T}_{s})\geq 1-|\mathcal{T}_{s}|\alpha, \tag{20}\] which applies simultaneously across all checkpoints. This inequality implies that we can guarantee the condition (6) by setting \(\alpha=(1-p_{\mathrm{targ}})/|\mathcal{T}_{s}|\), since the stopping point \(T_{s}(\mathbf{x})\) is in the set \(\mathcal{T}_{s}\) by construction. This is a form of _Bonferroni correction_, whereby the target accuracy for the test carried out at each checkpoint is increased in order to ensure reliability simultaneously for the tests at all checkpoints [32]. This increase is linear in the number of checkpoints \(|\mathcal{T}_{s}|\), and it guarantees the desired reliability condition irrespective of the underlying distribution of the data, as long as the per-checkpoint inequality (19) is satisfied. The remaining open question is how to ensure the per-checkpoint reliability condition (19). To address this goal, we follow the standard CP procedure. Accordingly, during an _offline_ phase, for each calibration data point \((\mathbf{x}[i],c[i])\), with \(i=1,...,|\mathcal{D}^{\mathrm{cal}}|\), SpikeCP computes the NC score \(s^{t}[i]=s_{c[i]}(\mathbf{x}^{t}[i])\) at each checkpoint \(t\in\mathcal{T}_{s}\). The calibration NC scores \(\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\mathrm{cal}}|}\) are ordered from smallest to largest, with ties broken arbitrarily, separately for each checkpoint \(t\). Finally, the threshold \(s_{\mathrm{th}}^{t}\) is selected to be approximately equal to the smallest value that is larger than a fraction \((1-\alpha)\) of the calibration NC scores (see Fig. 3). More precisely, assuming \(\alpha\geq 1/(|\mathcal{D}^{\mathrm{cal}}|+1)\), we set [6, 7] \[s_{\mathrm{th}}^{t}=\lceil(1-\alpha)(|\mathcal{D}^{\mathrm{cal}}|+1)\rceil\text{-th smallest value in the set }\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\mathrm{cal}}|}, \tag{21}\] while for \(\alpha<1/(|\mathcal{D}^{\mathrm{cal}}|+1)\) we set \(s_{\mathrm{th}}^{t}=\infty\). This is illustrated in Fig. 3.

Fig. 3: CP meets condition (19) by choosing the threshold \(s_{\mathrm{th}}^{t}\) in (16) as the \(\lceil(1-\alpha)(|\mathcal{D}^{\mathrm{cal}}|+1)\rceil\)-th smallest value among the NC scores evaluated in the calibration set.
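As a minimal sketch of the offline calibration step (21) with the Bonferroni-corrected level — assuming the calibration NC scores have been collected per checkpoint, and with all names being illustrative:

```python
import numpy as np

def calibrate_thresholds(cal_scores, p_targ, n_checkpoints):
    """Per-checkpoint thresholds s_th^t, per eq. (21) with Bonferroni correction.

    cal_scores: list with one array of calibration NC scores s^t[i] per
                checkpoint t, each of length |D_cal|
    """
    alpha = (1.0 - p_targ) / n_checkpoints   # Bonferroni-corrected level
    thresholds = []
    for s in cal_scores:
        n = len(s)
        if alpha < 1.0 / (n + 1):            # too little calibration data
            thresholds.append(np.inf)
        else:
            k = int(np.ceil((1 - alpha) * (n + 1)))  # rank in eq. (21)
            thresholds.append(np.sort(s)[k - 1])     # k-th smallest score
    return thresholds
```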
### _Reliability Guarantees of SpikeCP_

In this subsection, we show that SpikeCP, as summarized in Algorithm 1, satisfies the reliability condition (6).

**Theorem 1** (Reliability of SpikeCP).: _The adaptive decision \(\Gamma(\mathbf{x})=\Gamma(\mathbf{x}^{T_{s}(\mathbf{x})})\) produced by SpikeCP, as described in Algorithm 1, satisfies the reliability condition (6), and hence has a non-positive reliability gap, i.e., \(\triangle R\leq 0\)._

Proof.: By the properties of CP, the threshold (21) ensures the per-checkpoint reliability condition (19) (see, e.g., [40, Theorem 1] and [6, 41, 42]). As proved in the last subsection, this is sufficient to conclude that the reliability condition (6) is satisfied. We refer to the Appendix for further details.

### _An Alternative Heuristic Threshold Selection_

The theoretical guarantees of SpikeCP in Theorem 1 rely on the Bonferroni correction that sets the per-checkpoint target accuracy level to \(1-\alpha=1-(1-p_{\text{targ}})/|\mathcal{T}_{s}|\). This requirement becomes increasingly stricter, and hence harder to satisfy, as the number of checkpoints \(|\mathcal{T}_{s}|\) increases. However, having a large number of checkpoints may be advantageous by enhancing the granularity of delay adaptivity. In this subsection, we introduce an alternative, heuristic, choice for the per-checkpoint reliability condition based on Simes correction [9]. The approach sets a different target \(1-\alpha_{t}\) for each checkpoint \(t\in\mathcal{T}_{s}\), by imposing the constraint

\[\Pr(c\in\Gamma(\mathbf{x}^{t}))\geq 1-\alpha_{t} \tag{22}\]

in lieu of the constant-target condition (19). For each time step \(t\in\mathcal{T}_{s}\), let us define \(i_{t}\) as the index of checkpoint \(t\), i.e., \(i_{t}=\sum_{t^{\prime}\in\mathcal{T}_{s}}\mathbb{1}(t^{\prime}\leq t)\). Then, the target reliability for the checkpoint at time \(t\in\mathcal{T}_{s}\) is set to \(1-\alpha_{t}\) with

\[\alpha_{t}=i_{t}\cdot\frac{(1-p_{\text{targ}})}{|\mathcal{T}_{s}|}. \tag{23}\]

Accordingly, for the first checkpoint \(t\), with \(i_{t}=1\), the target coincides with that obtained from the Bonferroni correction, i.e., \(\alpha_{t}=(1-p_{\text{targ}})/|\mathcal{T}_{s}|\); while for the last checkpoint, with \(i_{t}=|\mathcal{T}_{s}|\), it corresponds to the target accuracy level, i.e., \(\alpha_{t}=1-p_{\text{targ}}\). Using Simes correction (23) in step 3 of Algorithm 1 in lieu of \(\alpha=(1-p_{\text{targ}})/|\mathcal{T}_{s}|\) yields an alternative version of SpikeCP that is guaranteed to meet the reliability condition (5) only under additional assumptions that are hard to verify in practice (see Appendix). One of such assumptions is that the accuracy of the SNN never decreases with increased time steps, as posited, e.g., in [3, Assumption 3.1]. Given this limitation, we propose Simes correction here merely as a heuristic, which may yield some practical gains as demonstrated in Sec. VI-C (see Fig. 8).
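The two per-checkpoint error schedules can be compared side by side in the short sketch below, which implements the constant Bonferroni level and the Simes schedule (23); the checkpoint values match those used later in the experiments.

```python
def bonferroni_alphas(p_targ, checkpoints):
    """Constant per-checkpoint error level: alpha = (1 - p_targ)/|T_s|."""
    a = (1 - p_targ) / len(checkpoints)
    return {t: a for t in checkpoints}

def simes_alphas(p_targ, checkpoints):
    """Simes schedule (23): alpha_t = i_t * (1 - p_targ)/|T_s|, i_t the index of t."""
    n = len(checkpoints)
    return {t: (i + 1) * (1 - p_targ) / n
            for i, t in enumerate(sorted(checkpoints))}

# For T_s = {20, 40, 60, 80} and p_targ = 0.9:
# Bonferroni gives 0.025 at every checkpoint; Simes gives 0.025, 0.05, 0.075, 0.1.
print(bonferroni_alphas(0.9, [20, 40, 60, 80]))
print(simes_alphas(0.9, [20, 40, 60, 80]))
```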
## V SpikeCP-Based Training

While SpikeCP provides guarantees on the reliability of its set-valued decisions irrespective of the quality of the pre-trained SNN (see Theorem 1), the achievable trade-offs between average delay and energy consumption, on the one hand, and informativeness of the set predictor, on the other, generally depend on the performance of the underlying SNN-based classifier. In this section, we introduce a training strategy - referred to as _SpikeCP-based training_ - that, unlike conventional learning algorithms for SNNs (see, e.g., [10, 34]), directly targets the performance of the SNN when used in conjunction with SpikeCP.

### _Training Objective_

In order to describe the training objective of SpikeCP-based training, we start by recalling from Sec. IV-A that the stopping time of SpikeCP is determined by the size \(|\Gamma(\mathbf{x}^{t})|\) of the predicted set \(\Gamma(\mathbf{x}^{t})\) for input \(\mathbf{x}\) as per the threshold rule (15) with target set size \(I_{\rm th}\). Therefore, to reduce the average latency, one can train the SNN with the aim of minimizing the sizes \(|\Gamma(\mathbf{x}^{t})|\) of the predicted sets \(\Gamma(\mathbf{x}^{t})\) in (16) produced by SpikeCP over the time instants \(t\) in the set \(\mathcal{T}_{s}\) of candidate checkpoints.

To this end, the model parameters \(\mathbf{\theta}\) are optimized on the basis of the training set (11). Specifically, in order to mimic the test-time distinction between calibration and test data leveraged by SpikeCP, we randomly partition the training set \(\mathcal{D}^{\rm tr}\) into two disjoint subsets \(\mathcal{D}^{\rm tr,cal}\) and \(\mathcal{D}^{\rm tr,te}\) with \(\mathcal{D}^{\rm tr,cal}\cap\mathcal{D}^{\rm tr,te}=\emptyset\) and \(\mathcal{D}^{\rm tr,cal}\cup\mathcal{D}^{\rm tr,te}=\mathcal{D}^{\rm tr}\). Given a data set split \((\mathcal{D}^{\rm tr,cal},\mathcal{D}^{\rm tr,te})\), we run SpikeCP (Algorithm 1) with \(\mathcal{D}^{\rm tr,cal}\) in lieu of the calibration data \(\mathcal{D}^{\rm cal}\), and with the input parts of the data points in the set \(\mathcal{D}^{\rm tr,te}\) as the test inputs \(\mathbf{x}\). For each such test input \(\mathbf{x}\) in \(\mathcal{D}^{\rm tr,te}\), SpikeCP returns the predictive set \(\Gamma(\mathbf{x}^{t})\) for all times \(t\in\mathcal{T}_{s}\). In line with the motivation explained in the previous paragraph, we consider the set sizes \(|\Gamma(\mathbf{x}^{t})|\) for all times \(t\) in the checkpoint set \(\mathcal{T}_{s}\) as the target of the training process. To quantify the mentioned predictive set sizes using training data, we define the _efficiency training loss_

\[\mathcal{L}^{E}(\mathbf{\theta})=\sum_{\begin{subarray}{c}\mathcal{D}^{\rm tr,cal}\subset\mathcal{D}^{\rm tr}\\ \mathcal{D}^{\rm tr,te}=\mathcal{D}^{\rm tr}\setminus\mathcal{D}^{\rm tr,cal}\end{subarray}}\sum_{(\mathbf{x},c)\in\mathcal{D}^{\rm tr,te}}\sum_{t\in\mathcal{T}_{s}}|\Gamma(\mathbf{x}^{t})|. \tag{24}\]

The outer sum in (24) is over a number of splits realized by randomly sampling the subset \(\mathcal{D}^{\rm tr,cal}\subset\mathcal{D}^{\rm tr}\) for a fixed given number of calibration data points \(|\mathcal{D}^{\rm tr,cal}|<|\mathcal{D}^{\rm tr}|\); the middle sum is over the test data points in set \(\mathcal{D}^{\rm tr,te}=\mathcal{D}^{\rm tr}\setminus\mathcal{D}^{\rm tr,cal}\); and the inner sum is over the time instants in the checkpoint set \(\mathcal{T}_{s}\). The efficiency training loss \(\mathcal{L}^{E}(\mathbf{\theta})\) in (24) does not make use of the labels of the test data sets, and it does not directly target the accuracy of the SNN classifier.
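A possible Monte-Carlo implementation of the efficiency training loss (24) is sketched below; `run_spikecp_sizes` is a hypothetical helper, assumed to run Algorithm 1 with the given calibration split and to return the set sizes \(|\Gamma(\mathbf{x}^{t})|\) at all checkpoints, and the half-half split size is an illustrative choice.

```python
import random

def efficiency_loss(train_data, checkpoints, alpha, num_splits, run_spikecp_sizes):
    """Monte-Carlo estimate of the efficiency loss (24).

    `run_spikecp_sizes(cal, x, checkpoints, alpha)` is a hypothetical helper
    returning a dict {t: |Gamma(x^t)|} for all t in `checkpoints`.
    """
    total = 0.0
    data = list(train_data)
    for _ in range(num_splits):               # outer sum: random splits
        random.shuffle(data)
        n_cal = len(data) // 2                # fixed calibration split size (assumed)
        cal, test = data[:n_cal], data[n_cal:]
        for x, _c in test:                    # middle sum: held-out training inputs
            sizes = run_spikecp_sizes(cal, x, checkpoints, alpha)
            total += sum(sizes[t] for t in checkpoints)  # inner sum: checkpoints
    return total
```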
In a manner somewhat similar to the criterion (14) used by SP-SNN, we hence propose to complement the efficiency training loss with the standard _cross-entropy training loss_ as

\[\mathcal{L}^{C}(\mathbf{\theta})=-\sum_{\begin{subarray}{c}\mathcal{D}^{\rm tr,cal}\subset\mathcal{D}^{\rm tr}\\ \mathcal{D}^{\rm tr,te}=\mathcal{D}^{\rm tr}\setminus\mathcal{D}^{\rm tr,cal}\end{subarray}}\sum_{(\mathbf{x},c)\in\mathcal{D}^{\rm tr,te}}\sum_{t\in\mathcal{T}_{s}}\log p_{c}(\mathbf{x}^{t}), \tag{25}\]

where \(p_{c}(\mathbf{x}^{t})\) is the probability value assigned by the model to input \(\mathbf{x}^{t}\) for class \(c\) using (3). The sums in (25) are evaluated as for the efficiency training loss (24). Overall, we propose to optimize the parameter vector \(\mathbf{\theta}\) of the SNN by addressing the problem

\[\min_{\mathbf{\theta}}\mathcal{L}^{C}(\mathbf{\theta})+\lambda\mathcal{L}^{E}(\mathbf{\theta}) \tag{26}\]

for a hyperparameter \(\lambda\geq 0\) that dictates the trade-off between cross-entropy and efficiency criteria. With \(\lambda=0\) and \(\mathcal{T}_{s}=\{T\}\), this training objective recovers the conventional cross-entropy evaluated at the last time instant adopted in most of the literature on SNN-based classification (see, e.g., [10, 38]).

### _Training_

The gradient of the standard cross-entropy objective \(\mathcal{L}^{C}(\mathbf{\theta})\) can be approximated via well-established surrogate gradient methods that apply the straight-through estimator of the gradient [34, 35]. Accordingly, when applying backpropagation, while the forward pass uses the actual non-differentiable activation model (8) of the SRM neurons, the backward pass replaces the non-differentiable spiking threshold function (8) with a smooth sigmoidal function [34, 35]. For each neuron \(k\) at time \(t\), this yields the differentiable activation

\[\hat{b}_{k,t}=\sigma(o_{k,t}-\vartheta), \tag{27}\]

where the Heaviside step function \(\Theta(\cdot)\) in (8) is replaced by the sigmoid function \(\sigma(x)=1/(1+e^{-x})\).
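The straight-through estimator described above can be sketched in PyTorch as follows: the forward pass applies the hard threshold of (8), while the backward pass differentiates the sigmoid surrogate (27). This is a generic sketch under these assumptions, not the authors' exact implementation.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Straight-through spike: Heaviside forward, sigmoid surrogate (27) backward."""

    @staticmethod
    def forward(ctx, membrane, threshold):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()       # b_{k,t} = Theta(o_{k,t} - theta)

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        s = torch.sigmoid(membrane - ctx.threshold)  # hat{b}_{k,t} in (27)
        return grad_output * s * (1 - s), None       # d sigma(x)/dx = sigma(1 - sigma)

spike = SurrogateSpike.apply   # usage: spikes = spike(membrane_potentials, 1.0)
```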
Given the availability of surrogate gradient methods, the main new challenge in tackling problem (26) lies in the evaluation of the gradient of the criterion \(\mathcal{L}^{E}(\mathbf{\theta})\). The rest of this section focuses on this problem. The efficiency training loss \(\mathcal{L}^{E}(\mathbf{\theta})\) in (24) depends on the cardinality \(|\Gamma(\mathbf{x}^{t})|\), which is a non-differentiable function of the model parameters \(\mathbf{\theta}\), even when considering the surrogate SNN model with activation function in (27). In fact, the NC scores \(s_{c}(\mathbf{x}^{t})\) in (17) or (18) are differentiable in \(\mathbf{\theta}\) under the surrogate model (27), but this is not the case for the cardinality \(|\Gamma(\mathbf{x}^{t})|\) of the predicted set.

To see this, observe that the cardinality \(|\Gamma(\mathbf{x}^{t})|\) is obtained via a cascade of two non-differentiable functions of the scores \(s^{t}[i]\), \(i=1,...,|\mathcal{D}^{\rm cal}|\): _(i) Sorting_: By Algorithm 1, SpikeCP sorts the calibration scores \(s^{t}[i]\), \(i=1,...,|\mathcal{D}^{\rm cal}|\), to obtain the threshold \(s^{t}_{\text{th}}\) via (21) at each checkpoint time \(t\in\mathcal{T}_{s}\); _(ii) Counting_: The cardinality \(|\Gamma(\mathbf{x}^{t})|\) of the set predictor is obtained by counting the number of labels \(c\) whose score \(s_{c}(\mathbf{x}^{t})\) is no larger than the threshold \(s^{t}_{\text{th}}\), i.e., \(|\Gamma(\mathbf{x}^{t})|=\sum_{c=1}^{C}\mathbb{1}\left(s_{c}(\mathbf{x}^{t})\leq s^{t}_{\text{th}}\right)\).

In the next subsection, we introduce a differentiable approximation \(|\hat{\Gamma}(\mathbf{x}^{t})|\) of the cardinality function \(|\Gamma(\mathbf{x}^{t})|\) under the smooth activation (27). The approach follows prior art on CP-aware training [27, 28, 29].

### _Differentiable Threshold and Set Cardinality_

The threshold \(s_{\text{th}}^{t}\) in (21) amounts to the \((1-\alpha)\)-empirical quantile of the calibration scores \(s^{t}[i]\), \(i=1,...,|\mathcal{D}^{\text{tr,cal}}|\). Given \(\mathcal{D}^{\text{tr,cal}}\), this can be obtained as the solution of the problem

\[s_{\text{th}}^{t}=\arg\min_{s\in\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\text{tr,cal}}|}}\rho_{1-\alpha}\big{(}s\,\big{|}\,\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\text{tr,cal}}|}\cup\{\infty\}\big{)}, \tag{28}\]

where we have defined the _pinball loss_ as

\[\rho_{1-\alpha}(a|\{a[i]\}_{i=1}^{M})=\alpha\sum_{i=1}^{M}\text{ReLU}(a-a[i])+(1-\alpha)\sum_{i=1}^{M}\text{ReLU}(a[i]-a), \tag{29}\]

for \(M\) real numbers \(\{a[i]\}_{i=1}^{M}\) with \(\text{ReLU}(a)=\max(0,a)\). The solution of problem (28) can be approximated by replacing the minimum with a _soft minimum_ function \(\delta(x_{i})=e^{-x_{i}}/\sum_{j}e^{-x_{j}}\). Accordingly, a differentiable estimate of the threshold \(s_{\text{th}}^{t}\) can be written as [29]

\[\hat{s}_{\text{th}}^{t}=\sum_{i=1}^{|\mathcal{D}^{\text{tr,cal}}|+1}s^{t}[i]\,\delta\Big{(}\frac{\rho_{1-\alpha}(s^{t}[i]\,|\,\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\text{tr,cal}}|+1})}{c_{Q}}\Big{)}, \tag{30}\]

where we have defined \(s^{t}[|\mathcal{D}^{\text{tr,cal}}|+1]=\max(\{s^{t}[i]\}_{i=1}^{|\mathcal{D}^{\text{tr,cal}}|})+\beta\) for some sufficiently large parameter \(\beta>0\). In (30), the hyperparameter \(c_{Q}>0\) dictates the trade-off between smoothness and accuracy of the approximation. With small enough \(c_{Q}\), the smoothed threshold \(\hat{s}_{\text{th}}^{t}\) recovers the original value \(s_{\text{th}}^{t}\), in the sense that we have the limit \(\lim_{c_{Q}\to 0}\hat{s}_{\text{th}}^{t}=s_{\text{th}}^{t}\).

Based on the differentiable approximation \(\hat{s}_{\text{th}}^{t}\) introduced above, we can approximate the cardinality \(|\Gamma(\mathbf{x}^{t})|\) as the sum \(\sum_{c=1}^{C}\mathbb{1}\left(s_{c}(\mathbf{x}^{t})\leq\hat{s}_{\text{th}}^{t}\right)\). Since the indicator function \(\mathbb{1}(\cdot)\) is also not differentiable, we replace the indicator function with the sigmoid function \(\sigma(x)\) to obtain the following final differentiable approximation of the size of the set predictor

\[|\hat{\Gamma}(\mathbf{x}^{t})|=\sum_{c=1}^{C}\sigma\big{(}\hat{s}_{\text{th}}^{t}-s_{c}(\mathbf{x}^{t})\big{)}. \tag{31}\]
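The differentiable threshold (30) and set size (31) can be sketched as follows, with the pinball loss (29) and the soft-minimum weighting; the hyperparameter defaults mirror the values \(c_{Q}=10^{-3}\) and \(\beta=1\) used in Sec. VI.

```python
import torch

def pinball(a, samples, alpha):
    """Pinball loss (29) of candidate `a` against the scores in `samples`."""
    diff = a - samples
    return (alpha * torch.relu(diff) + (1 - alpha) * torch.relu(-diff)).sum()

def soft_threshold(cal_scores, alpha, c_q=1e-3, beta=1.0):
    """Differentiable threshold (30): soft-min of the pinball loss over candidates."""
    aug = torch.cat([cal_scores, cal_scores.max().reshape(1) + beta])  # add s[n+1]
    losses = torch.stack([pinball(a, aug, alpha) for a in aug])
    weights = torch.softmax(-losses / c_q, dim=0)    # soft minimum delta(.)
    return (weights * aug).sum()

def soft_set_size(nc_scores, s_th):
    """Differentiable set size (31): sum_c sigmoid(s_th - s_c)."""
    return torch.sigmoid(s_th - nc_scores).sum()
```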
## VI Experiments

In this section, we provide experimental results to compare the performance of the adaptive point classifier DC-SNN [2], described in Sec. III, and of the proposed set classifier SpikeCP. We also provide insights into the trade-off between delay/energy and informativeness enabled by SpikeCP, as well as into the benefits of SpikeCP-based training. Finally, we offer a numerical comparison between the performance levels obtained by SpikeCP with Bonferroni and Simes corrections. All the experiments were run on a GPU server with a single NVIDIA A100 card.

### _Setting_

We consider the MNIST-DVS dataset [5], which contains labelled \(26\times 26\) spiking signals of duration \(T=80\) samples. Each data point contains \(26\times 26=676\) spiking signals, which are recorded by a DVS camera that is shown moving handwritten digits from "0" to "9" on a screen. The data set contains \(8,000\) training examples, as well as \(2,000\) examples used for calibration and testing. We adopt a fully connected SNN with one hidden layer having \(1,000\) neurons, which is trained via the surrogate gradient method as in [37]. Except for SpikeCP-based training, all the results reported in this section adopt a pre-trained SNN that is trained by assuming \(\lambda=0\) and \(\mathcal{T}_{s}=\{T\}\), as discussed in Sec. V, given a training data set \(\mathcal{D}^{\text{tr}}\) that consists of \(8,000\) examples, with each class containing \(800\) examples. The calibration data set \(\mathcal{D}^{\text{cal}}\) is obtained by randomly sampling \(|\mathcal{D}^{\text{cal}}|\) examples from the \(2,000\) data points allocated for calibration and testing, with the rest used for testing (see, e.g., [25]). We average the performance measures introduced in Sec. II-C over \(50\) different realizations of the calibration and test data sets.

For SpikeCP, we take the set of possible checkpoints as \(\mathcal{T}_{s}=\{20,40,60,80\}\), use the global NC score (18), and set the target set size to \(I_{\text{th}}=5\), unless specified otherwise. In this work, we implement the policy network of SP-SNN as a recurrent neural network (RNN) with one hidden layer having \(500\) hidden neurons equipped with Tanh activation, followed by \(T=80\) output neurons with a softmax activation function. The RNN takes the time series data \(\mathbf{x}=\{\mathbf{x}_{t}\}_{t=1}^{T}\) as input, and outputs a probability vector \(\mathbf{\pi}(\mathbf{x}|\mathbf{\phi})\). The stopping time is chosen as \(T_{s}(\mathbf{x})=\arg\max_{t\in\{1,...,T\}}\pi_{t}(\mathbf{x}|\mathbf{\phi})\) during the testing phase. The choice of a light-weight RNN architecture for the policy network is dictated by the principle of ensuring that the size of the additional ANN is comparable to that of the SNN classifier [3]. For SpikeCP-based training, we split the data by considering the actual number of calibration data points, i.e., \(|\mathcal{D}^{\text{tr,cal}}|=\min\{|\mathcal{D}^{\text{cal}}|,|\mathcal{D}^{\text{tr}}|/2\}\), which also ensures a non-empty set \(\mathcal{D}^{\text{tr,te}}\). The hyperparameters \(c_{Q}\) and \(\beta\) are set to \(0.001\) and \(1\), respectively. The weight factor \(\lambda\) is set to \(0.01\), and the target accuracy level is set to \(0.9\), i.e., \(\alpha=0.1\).

### _Performance Analysis of SpikeCP with a Pre-Trained SNN_

We start by evaluating the performance with the same pre-trained SNN model for all schemes.
Fig. 4 reports the accuracy - \(\Pr(c=\hat{c}(\mathbf{x}))\) for DC-SNN and \(\Pr(c\in\Gamma(\mathbf{x}))\) for SpikeCP - and the normalized latency \(\mathbb{E}[T_{s}(\mathbf{x})]/T\) as a function of the target accuracy \(p_{\text{targ}}\) for different sizes \(|\mathcal{D}^{\text{cal}}|\) of the calibration data set. The accuracy plots highlight the regime in which we have a positive reliability gap \(\Delta R\) in (5) and (6), which corresponds to _unreliable_ decisions. For reference, in Fig. 4(a), we show the performance obtained by setting the threshold \(p_{\text{th}}\) in (9) to the accuracy target \(p_{\text{targ}}\). Following the results reported in Fig. 1(b), this approach yields unreliable decisions as soon as the target accuracy level is sufficiently large, here larger than \(0.7\). By leveraging calibration data, DC-SNN can address this problem, suitably increasing the decision latency as \(p_{\text{targ}}\) increases. However, reliability - i.e., a non-positive reliability gap - is only approximately guaranteed when the number of calibration data points is sufficiently large, here \(|\mathcal{D}^{\text{cal}}|=100\). In contrast, as shown in Fig. 4(b) and proved in Theorem 1, SpikeCP is always reliable, achieving a non-positive reliability gap irrespective of the number of calibration data points. With a fixed threshold \(I_{\text{th}}\), as in this example, increasing the size \(|\mathcal{D}^{\text{cal}}|\) of the calibration data set has the effect of significantly reducing the average latency.

The trade-off supported by SpikeCP between latency and energy, on the one hand, and informativeness, i.e., set size, on the other hand, is investigated in Fig. 5 by varying the target set size \(I_{\text{th}}\), with target accuracy level \(p_{\text{targ}}=0.9\) and \(|\mathcal{D}^{\text{cal}}|=200\) calibration examples. Note that the reliability gap is always negative as in Fig. 4(b), and it is hence omitted in the figure to avoid clutter. Increasing the target set size \(I_{\text{th}}\) causes the final predicted set size, shown in the figure normalized by the number of classes \(C=10\), to increase, yielding less informative decisions. On the flip side, sacrificing informativeness entails a lower (normalized) latency, as well as, correspondingly, a lower inference energy, with the latter shown in the figure as the average number of spikes per sample and per hidden neuron, \(\mathbb{E}[S(\mathbf{x})]/(1000T)\).

In Fig. 6, we show the performance of SpikeCP when using either local NC scores (17) or global NC scores (18) (see Sec. IV), as well as the performance of the DC-SNN and SP-SNN point predictors, as a function of the number of checkpoints \(|\mathcal{T}_{s}|\), for \(p_{\text{targ}}=0.9\), \(|\mathcal{D}^{\text{cal}}|=200\), and \(I_{\text{th}}=5\). The checkpoints are equally spaced among the \(T\) time steps, and hence the checkpoint set is \(\mathcal{T}_{s}=\{T/|\mathcal{T}_{s}|,2T/|\mathcal{T}_{s}|,...,T\}\). The metrics displayed in the four panels are the accuracy - probability \(\Pr(c=\hat{c}(\mathbf{x}))\) for point predictors and probability \(\Pr(c\in\Gamma(\mathbf{x}))\) for set predictors - along with the normalized latency \(\mathbb{E}[T_{s}(\mathbf{x})]/T\) and the normalized, per-neuron and per-time step, inference energy \(\mathbb{E}[S(\mathbf{x})]/(1000T)\). Note that the operation of SP-SNN and DC-SNN does not depend on the number of checkpoints, and hence the performance of these schemes is presented as a constant function.
By Theorem 1, SpikeCP always achieves a non-positive reliability gap, while SP-SNN and DC-SNN fall short of the target reliability \(p_{\text{targ}}\) in this example, and hence the performance of SP-SNN and DC-SNN is comparable. Using global NC scores with SpikeCP yields better performance in terms of informativeness, i.e., set size, as well as latency and inference energy. The performance gap between the two choices of NC scores increases with the number of checkpoints, demonstrating that local NC scores are more sensitive to the Bonferroni correction applied by SpikeCP (see Sec. IV). This is due to the lower discriminative power of local confidence levels, which yield less informative NC scores (see, e.g., [8]). That said, moderate values of latency and inference energy can also be obtained with local NC scores, without requiring any coordination among the readout neurons. This can be considered to be one of the advantages of the calibration afforded by the use of SpikeCP.

With global NC scores, the number of checkpoints \(|\mathcal{T}_{\mathrm{s}}|\) is seen to control the trade-off between latency and informativeness for SpikeCP. In fact, a larger number of checkpoints improves the resolution of the stopping times, while at the same time yielding more conservative set-valued decisions at each time step due to the mentioned Bonferroni correction.

In Fig. 7, we show the performance of SpikeCP with global NC scores, DC-SNN, and SP-SNN as a function of the number, \(|\mathcal{D}^{\mathrm{cal}}|\), of calibration data points, with \(p_{\mathrm{targ}}=0.9\), \(|\mathcal{T}_{\mathrm{s}}|=4\), and \(I_{\mathrm{th}}=5\). The general conclusions around the comparisons among the different schemes are aligned with those presented above for Fig. 6. The figure also reveals that SP-SNN outperforms DC-SNN when the calibration data set is small, while DC-SNN is preferable in the presence of a sufficiently large data set. Finally, with a larger calibration data set, SpikeCP is able to increase the informativeness of the predicted set, while also decreasing latency and inference energy.

Fig. 4: (a) Accuracy \(\Pr(c=\hat{c}(\mathbf{x}))\) and normalized latency \(\mathbb{E}[T_{s}(\mathbf{x})]/T\) for the DC-SNN point classifier [2]. (b) Accuracy \(\Pr(c\in\Gamma(\mathbf{x}))\) and normalized latency \(\mathbb{E}[T_{s}(\mathbf{x})]/T\) for the proposed SpikeCP set predictor given the target set size \(I_{\text{th}}=5\). The shaded error bars correspond to intervals covering \(95\%\) of the realized values, obtained from \(50\) different draws of calibration data.

Fig. 5: Normalized latency, inference energy, and set size (informativeness) as a function of target set size \(I_{\text{th}}\) for SpikeCP, assuming \(p_{\text{targ}}=0.9\) and \(|\mathcal{D}^{\text{cal}}|=200\) under the same conditions as Fig. 4.

Fig. 6: Accuracy, normalized latency, normalized set size (informativeness), and normalized inference energy as a function of the number of checkpoints \(|\mathcal{T}_{s}|\) for SpikeCP with local and global scores, as well as for DC-SNN and SP-SNN point classifiers, with \(p_{\text{targ}}=0.9\), \(|\mathcal{D}^{\text{cal}}|=200\), and \(I_{\text{th}}=5\).

### _Comparing Bonferroni and Simes Corrections_

In Fig. 8, we study the performance of SpikeCP, which uses the Bonferroni correction (see Sec. IV-B), against a heuristic variant of SpikeCP that uses Simes correction (see Sec. IV-D), with \(p_{\mathrm{targ}}=0.8\) and \(p_{\mathrm{targ}}=0.9\).
Fig. 8 plots the accuracy and normalized latency as a function of the number of checkpoints. As discussed in Sec. IV-D, the Bonferroni correction applied by SpikeCP becomes increasingly strict as the number of checkpoints increases. Accordingly, alternative correction factors, such as Simes, may become advantageous in the regime of a large number of checkpoints. Confirming this argument, the figures show that Simes correction can indeed yield some advantage in terms of latency, while still satisfying, despite its lack of theoretical guarantees, the reliability requirement (5).

### _Performance Analysis of SpikeCP-based Training_

We finally turn to analyzing the potential benefits of SpikeCP-based training, as introduced in Sec. V. Accordingly, the SNN classifier is trained by minimizing the objective in (26), with the hyperparameter \(\lambda\) dictating the relative weight given to the prediction set efficiency over the conventional cross-entropy performance metric. With \(\lambda=0\), we recover the same SNN model assumed throughout the rest of the section, while larger values of \(\lambda>0\) ensure that the trained model is increasingly tailored to the use of SpikeCP during inference by targeting the predictive set inefficiency.

In order to elaborate on the choice of the hyperparameter \(\lambda\), in Fig. 9 we plot the normalized latency of SpikeCP as a function of \(\lambda\). For both target accuracy values \(p_{\mathrm{targ}}=0.8\) and \(p_{\mathrm{targ}}=0.9\), it is observed that there is an optimal value of \(\lambda\) that balances the inefficiency and accuracy (cross-entropy) criteria. Increasing \(\lambda\) is initially beneficial, yielding smaller predictive sets and hence smaller latencies. However, larger values of \(\lambda\) eventually downweigh the accuracy criterion excessively, producing worse performance. Furthermore, the optimal value of \(\lambda\) is seen to decrease with growing target reliability levels \(p_{\text{targ}}\), which call for more emphasis on the cross-entropy criterion.

Fig. 8: Accuracy and normalized latency as a function of the number of checkpoints \(|\mathcal{T}_{\mathrm{s}}|\) for SpikeCP, which uses the Bonferroni correction, as well as for a variant that applies Simes correction (see Sec. IV-D), with \(p_{\mathrm{targ}}=0.8\) and \(p_{\mathrm{targ}}=0.9\).

Fig. 7: Accuracy, normalized latency, normalized set size (informativeness), and normalized inference energy as a function of the number \(|\mathcal{D}^{\mathrm{cal}}|\) of calibration data points for SpikeCP with global NC scores, as well as for DC-SNN and SP-SNN point classifiers, with \(p_{\mathrm{targ}}=0.9\), \(|\mathcal{T}_{\mathrm{s}}|=4\), and \(I_{\mathrm{th}}=5\).

Fig. 9: Normalized latency as a function of the weight factor \(\lambda\) in the training objective (26) for training-based SpikeCP under target accuracy \(p_{\mathrm{targ}}=0.8\) and \(p_{\mathrm{targ}}=0.9\), assuming \(|\mathcal{D}^{\mathrm{cal}}|=200\) calibration data points with the same other conditions as in Fig. 4.

We now turn to comparing the performance of SpikeCP-based training with conventional SpikeCP (with \(\lambda=0\)), DC-SNN and SP-SNN. Specifically, Fig. 10 plots the accuracy and normalized latency as a function of the number of training data points \(|\mathcal{D}^{\text{tr}}|\). The point classifiers DC-SNN and SP-SNN exhibit an increasing accuracy level as the training data set size increases, while still failing to meet the reliability target \(p_{\text{targ}}=0.9\).
In contrast, SpikeCP schemes meet the reliability requirement for any number of training data points. More training data translate into a lower latency, with SpikeCP-based training, here run with \(\lambda=0.01\), providing an increasingly sizeable latency reduction.

## VII Conclusions

In this work, we have introduced SpikeCP, a delay-adaptive SNN set predictor with provable reliability guarantees. SpikeCP wraps around any pre-trained SNN classifier, producing a set classifier with a tunable trade-off between informativeness of the decision - i.e., size of the predicted set - and latency, or inference energy as measured by the number of spikes. Unlike prior art, the reliability guarantees of SpikeCP hold irrespective of the quality of the pre-trained SNN and of the number of calibration points, with minimal added complexity. SpikeCP was also integrated with a CP-aware training strategy that complements the conventional cross-entropy criterion with a regularizer accounting for the informativeness of the predicted set. Among directions for future work, we highlight extensions of SpikeCP that take into account time decoding or Bayesian learning [43] in order to further reduce the number of spikes and enhance the reliability of confidence estimates.

## Appendix: CP and Hypothesis Testing

As detailed in Sec. IV, SpikeCP relies on the use of the Bonferroni, or Simes, corrections, which are tools introduced in the literature on hypothesis testing [44]. In this appendix, we elaborate on the connection between CP and multiple-hypothesis testing. Conventional CP effectively applies a binary hypothesis test for each possible label \(c\), testing the null hypothesis that the label \(c\) is the correct one. With the notation of this paper, for any fixed time \(t\), CP considers the null hypothesis

\[\mathcal{H}_{t}(\mathbf{x}^{t},c):(\mathbf{x}^{t},c)\text{ and the calibration data }\mathcal{D}^{t,\text{cal}}\text{ are i.i.d.},\]

where we have defined \(\mathcal{D}^{t,\text{cal}}=\{(\mathbf{x}^{t}[i],c[i])\}_{i=1}^{|\mathcal{D}^{\text{cal}}|}\). In fact, if this hypothesis holds true, label \(c\) is the ground-truth label for input \(\mathbf{x}^{t}\). Suppose that we have a valid \(p\)-variable \(p_{t}(\mathbf{x}^{t},c)\) for this hypothesis, i.e., a random variable - which may also be a function of the calibration data - that satisfies the inequality \(\Pr(p_{t}(\mathbf{x}^{t},c)\leq\alpha^{\prime}|\mathcal{H}_{t}(\mathbf{x}^{t},c))\leq\alpha^{\prime}\) for all \(\alpha^{\prime}\in[0,1]\), where the probability is conditioned over the hypothesis being correct. Then, constructing the predictive set as \(\Gamma(\mathbf{x}^{t})=\{c\in\mathcal{C}:p_{t}(\mathbf{x}^{t},c)>\alpha^{\prime}\}\) would guarantee the reliability condition \(\Pr\left(c\in\Gamma(\mathbf{x}^{t})\right)\geq 1-\alpha^{\prime}\) for the given fixed time \(t\). The key underlying technical result in the theory of CP is that the variable

\[p_{t}(\mathbf{x}^{t},c)=\frac{1+\sum_{i=1}^{|\mathcal{D}^{\text{cal}}|}\mathbb{1}\left(s_{c}(\mathbf{x}^{t})\leq s_{c[i]}(\mathbf{x}^{t}[i])\right)}{|\mathcal{D}^{\text{cal}}|+1} \tag{32}\]

is a valid \(p\)-variable for time \(t\), where \(s_{c}(\mathbf{x}^{t})\) is an NC score.
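For concreteness, the \(p\)-variable (32) and the two composite \(p\)-values discussed below can be computed as in the following sketch.

```python
import numpy as np

def cp_p_value(test_score, cal_scores):
    """Valid p-variable (32) for the hypothesis that label c is correct."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

def bonferroni_p(p_values):
    """Composite p-value p^B = min_t { |T_s| * p_t }."""
    k = len(p_values)
    return min(k * p for p in p_values)

def simes_p(p_values):
    """Composite p-value p^S = min_t { |T_s| * p_t / r(t) }, r(t) = rank of p_t."""
    k = len(p_values)
    ranked = sorted(p_values)
    return min(k * p / (r + 1) for r, p in enumerate(ranked))
```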
The predictive set constructed via the \(p\)-value, \(\Gamma(\mathbf{x}^{t})=\{c\in\mathcal{C}:p_{t}(\mathbf{x}^{t},c)>\alpha\}\), is equivalent to the expression in (16), since _excluding_ the \(\alpha\)-fraction (\(p_{t}(\mathbf{x}^{t},c)>\alpha\)) is equivalent to _including_ the \((1-\alpha)\)-fraction (\(s_{c}(\mathbf{x}^{t})\leq s_{\text{th}}^{t}\)).

In SpikeCP, the time \(t=T_{s}(\mathbf{x})\) at which a decision is made depends on the input \(\mathbf{x}\), and hence the reliability guarantees described above do not apply directly. What is needed, instead, are _corrected_ \(p\)-variables \(\tilde{p}_{t}(\mathbf{x}^{t},c)\) satisfying the property \(\Pr\left(\tilde{p}_{t}(\mathbf{x}^{t},c)>\alpha^{\prime}\text{ for all }t\in\mathcal{T}_{s}|\mathcal{H}(\mathbf{x},c)\right)\geq 1-\alpha^{\prime}\) for all \(\alpha^{\prime}\in[0,1]\), where, under the composite null hypothesis \(\mathcal{H}(\mathbf{x},c)\), the pair \((\mathbf{x},c)\) and the calibration data \(\mathcal{D}^{\text{cal}}\) are i.i.d. Note that the hypothesis \(\mathcal{H}(\mathbf{x},c)\) implies all hypotheses \(\mathcal{H}_{t}(\mathbf{x}^{t},c)\) for \(t\in\mathcal{T}_{s}\). To find such corrected \(p\)-variables, it is sufficient to identify a valid \(p\)-variable \(p(\mathbf{x},c)\) for the composite hypothesis \(\mathcal{H}(\mathbf{x},c)\) such that, with probability 1, we have the inequalities \(\tilde{p}_{t}(\mathbf{x}^{t},c)\geq p(\mathbf{x},c)\) for suitable functions \(\tilde{p}_{t}(\mathbf{x}^{t},c)\) of the original \(p\)-values \(p_{t}(\mathbf{x}^{t},c)\).

Bonferroni's method provides one such \(p\)-variable, namely \(p^{\text{B}}(\mathbf{x},c)=\min_{t\in\mathcal{T}_{s}}\{|\mathcal{T}_{s}|p_{t}(\mathbf{x}^{t},c)\}\) with corrected \(p\)-variables \(\tilde{p}_{t}(\mathbf{x}^{t},c)=|\mathcal{T}_{s}|p_{t}(\mathbf{x}^{t},c)\) [45, Appendix 2]. It can be checked that this selection yields the SpikeCP procedure in Algorithm 1. Alternatively, the composite \(p\)-value produced by Simes correction is \(p^{\text{S}}(\mathbf{x},c)=\min_{t\in\mathcal{T}_{s}}\{|\mathcal{T}_{s}|p_{t}(\mathbf{x}^{t},c)/r(t)\}\), where \(r(t)\) is the ranking of \(p_{t}(\mathbf{x}^{t},c)\) among \(\{p_{t}(\mathbf{x}^{t},c)\}_{t\in\mathcal{T}_{s}}\). This yields corrected \(p\)-variables \(\tilde{p}_{t}(\mathbf{x}^{t},c)=|\mathcal{T}_{s}|p_{t}(\mathbf{x}^{t},c)/r(t)\). Reference [9] proved that this approach provides a valid \(p\)-value as long as the joint distribution over the \(|\mathcal{T}_{s}|\) \(p\)-values \(p_{t}(\mathbf{x}^{t},c)\) has the _multivariate totally positive of order_ \(2\) (MTP\({}_{2}\)) property as defined in [9]. Together with the assumption of increasing \(p\)-values, Simes corrected \(p\)-values yield the heuristic SpikeCP variant discussed in Sec. IV-D.

Fig. 10: Accuracy and normalized latency as a function of the number of training data points \(|\mathcal{D}^{\text{tr}}|\), assuming \(p_{\text{targ}}=0.9\) and \(|\mathcal{D}^{\text{cal}}|=100\) under the same conditions as Fig. 4.
2305.18460
Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation
The study of universal approximation properties (UAP) for neural networks (NN) has a long history. When the network width is unlimited, only a single hidden layer is sufficient for UAP. In contrast, when the depth is unlimited, the width for UAP needs to be not less than the critical width $w^*_{\min}=\max(d_x,d_y)$, where $d_x$ and $d_y$ are the dimensions of the input and output, respectively. Recently, \cite{cai2022achieve} shows that a leaky-ReLU NN with this critical width can achieve UAP for $L^p$ functions on a compact domain ${K}$, \emph{i.e.,} the UAP for $L^p({K},\mathbb{R}^{d_y})$. This paper examines a uniform UAP for the function class $C({K},\mathbb{R}^{d_y})$ and gives the exact minimum width of the leaky-ReLU NN as $w_{\min}=\max(d_x,d_y)+\Delta (d_x, d_y)$, where $\Delta (d_x, d_y)$ is the additional dimensions for approximating continuous functions with diffeomorphisms via embedding. To obtain this result, we propose a novel lift-flow-discretization approach that shows that the uniform UAP has a deep connection with topological theory.
Li'ang Li, Yifei Duan, Guanghua Ji, Yongqiang Cai
2023-05-29T06:51:16Z
http://arxiv.org/abs/2305.18460v3
# Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation

###### Abstract

The study of universal approximation properties (UAP) for neural networks (NN) has a long history. When the network width is unlimited, only a single hidden layer is sufficient for UAP. In contrast, when the depth is unlimited, the width for UAP needs to be not less than the critical width \(w_{\min}^{*}=\max(d_{x},d_{y})\), where \(d_{x}\) and \(d_{y}\) are the dimensions of the input and output, respectively. Recently, (Cai, 2022) shows that a leaky-ReLU NN with this critical width can achieve UAP for \(L^{p}\) functions on a compact domain \(\mathcal{K}\), _i.e.,_ the UAP for \(L^{p}(\mathcal{K},\mathbb{R}^{d_{y}})\). This paper examines a uniform UAP for the function class \(C(\mathcal{K},\mathbb{R}^{d_{y}})\) and gives the exact minimum width of the leaky-ReLU NN as \(w_{\min}=\max(d_{x}+1,d_{y})+1_{d_{y}=d_{x}+1}\), which involves the effects of the output dimensions. To obtain this result, we propose a novel lift-flow-discretization approach that shows that the uniform UAP has a deep connection with topological theory.

Machine Learning, Neural Networks, Uniform Universal Approximation

## 1 Introduction

The universal approximation theorem is important for the development of artificial neural networks. Artificial neural networks can approximate functions with arbitrary precision; this fact reveals the great potential of neural networks and provides important guarantees for their development. (Cybenko, 1989) produced the original universal approximation theorem, stating that an arbitrarily wide feedforward neural network with a single hidden layer and sigmoid activation function can approximate continuous functions with arbitrary precision. (Hornik, 1991) later demonstrated that the key to the universal approximation property lies in the multilayer and neuron architecture rather than the choice of an activation function. Then, (Leshno et al., 1993) showed that a continuous function \(f:X\to\mathbb{R}^{d_{y}}\) defined on a compact set \(X\subseteq\mathbb{R}^{d_{x}}\) can be approximated by a single-hidden-layer neural network if and only if the activation function is nonpolynomial.

After solving the activation function's theoretical problem, the field of vision naturally shifted to a consideration of the width and depth of the neural network. With the gradual development of deep neural networks, researchers have begun to pay attention to how to theoretically analyze the expressiveness of networks. (Daniely, 2017) simplifies the proof that the expressive ability of a three-layer neural network is superior to that of a two-layer neural network. For any positive integer \(k\), (Telgarsky, 2016) shows that there are neural networks with \(\Theta(k^{3})\) layers and fixed widths that cannot be approximated by networks with \(\mathcal{O}(k)\) layers unless they have \(\Omega(2^{k})\) nodes 1. The universal approximation theorem explains that depth-bounded neural networks with suitable activation functions are universal approximators. (Lu et al., 2017) explained that a neural network with a bounded width can also be a universal approximator, such as the width-\((d_{x}+4)\) ReLU networks, where \(d_{x}\) is the input dimension. (Lu et al., 2017) also shows that a ReLU network of width \(d_{x}\) cannot be used for universal approximation.
Footnote 1: \(\Theta(k^{3})\) means that it is bounded both above and below by \(k^{3}\) asymptotically; \(\mathcal{O}(k)\) means that it is bounded above by \(k\) asymptotically; \(\Omega(2^{k})\) means that it is bounded below by \(2^{k}\) asymptotically.

Many studies, such as (Beise & Da Cruz, 2020; Hanin & Sellke, 2018; Park et al., 2021), have shown that it is difficult for a narrow neural network (one whose width is not greater than the input dimension) to attain the UAP. (Nguyen et al., 2018) noted that deep neural networks with a specific type of activation function generally need to have a width larger than the input dimension to guarantee that the network can produce disconnected decision regions. For ReLU networks, (Park et al., 2021) proved that the minimum width for \(L^{p}\)-UAP is \(w_{\min}=\max(d_{x}+1,d_{y})\) and summarized the known upper/lower bounds on the minimum width for universal approximation. However, the minimum width for the UAP of continuous functions is not yet fully understood. (Park et al., 2021) and (Cai, 2022) determine the minimum width of some neural networks for \(C\)-UAP using noncontinuous activation functions. If only continuous monotonically increasing activation functions are used, the known minimum width is restricted to the ReLU NN for the function class \(\mathcal{C}([0,1],\mathbb{R}^{2})\), where the critical width is \(w_{\min}=3\). Table 1 provides a summary of the known minimum widths for UAP.

To determine the minimum width of uniform UAP on \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\), we introduce a novel scheme called the _lift-flow-discretization approach_. Based on the close relationship between uniform UAP and topology, the target functions are lifted to high-dimensional diffeomorphisms, and feedforward neural networks are used to approximate the corresponding flow maps. Finally, we determine the minimum width of leaky-ReLU neural networks for \(C\)-UAP on \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\) to be \(w_{\min}=\max(d_{x}+1,d_{y})+1_{d_{y}=d_{x}+1}\).

### Contributions

1. Theorem 2.2 states that the minimum width of leaky-ReLU networks for \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\) is exactly \(\max(d_{x}+1,d_{y})+1_{d_{y}=d_{x}+1}\). This is the first time that the minimum width for the universal approximation of leaky-ReLU networks is fully provided. It is worth mentioning that the previous results on the minimum width for uniform approximation are based on discontinuous activation functions. The conclusion of this paper is based on continuous activation functions such as the leaky-ReLU function.

2. Section 3 presents a novel approach for approximating continuous functions using a feedforward neural network from the perspective of topology. The lift-flow-discretization approach combining topology and neural network approximation is the key to the proof in this paper. Our approach is generic for strictly monotone continuous activations, as they all correspond to diffeomorphisms.

### Related work

**Width and depth bounds.** Theoretical analyses of the expressive power of neural networks have taken place over the years. (Cybenko, 1989) proposed a prototype of the early classic universal approximation theorem: continuous univariate functions over bounded domains can be fitted with arbitrary precision using the sigmoid activation function.
(Hornik et al., 1989; Leshno et al., 1993; Barron, 1994) obtained similar conclusions and extended them to a large class of activation functions, revealing the relationship between universal approximation and network structure. The effect of neural network width on expressiveness is an enduring question. (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008) and (Montufar, 2014) reveal the impact of depth and width, especially width, on the general approximation of belief networks: networks with too narrow a width cannot complete the approximation task. The width has important research value for many emerging networks and different activation functions. Conventional conclusions tell us that networks with appropriate activation functions under bounded depths are universal approximators. Correspondingly, (Lu et al., 2017) proposed a general approximation theorem for ReLU networks with bounded widths. (Hanin and Sellke, 2018) also studied ReLU networks with input dimension \(d_{x}\), hidden layer width at most \(w\), and unlimited depth, showing that to fit any continuous real-valued function, the minimum value of \(w\) is exactly \(d_{x}+1\). For a deep neural network whose activation function satisfies \(\sigma(\mathbb{R})=\mathbb{R}\), to learn disconnected regions, it is usually necessary to make the network width larger than the input dimension; if the network is narrow, the paths connecting the disconnected regions yield high-confidence predictions (Nguyen et al., 2018). (Chong, 2020) gives a direct algebraic proof of the universal approximation theorem, and (Beise et al., 2021) reveals the fundamental reason why the universal approximation of network functions with width \(w\leq d_{x}\) from \(\mathbb{R}^{d_{x}}\) to \(\mathbb{R}\) is impossible. (Park et al., 2021) gives the first definitive results for the critical width enabling the universal approximation of width-bounded networks. The minimum width for the \(L^{p}\) functions is \(\max(d_{x}+1,d_{y})\) using the ReLU activation functions.

\begin{table} \begin{tabular}{c c c c} \hline \hline References & Functions & Activation & Minimum width \\ \hline (Hanin and Sellke, 2018) & \(\mathcal{C}(\mathcal{K},\mathbb{R})\) & ReLU & \(w_{\min}=d_{x}+1\) \\ \hline (Park et al., 2021) & \(L^{p}(\mathbb{R}^{d_{x}},\mathbb{R}^{d_{y}})\) & ReLU & \(w_{\min}=\max(d_{x}+1,d_{y})\) \\ (Park et al., 2021) & \(\mathcal{C}([0,1],\mathbb{R}^{2})\) & ReLU & \(w_{\min}=3\) \\ (Park et al., 2021) & \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\) & ReLU + STEP & \(w_{\min}=\max(d_{x}+1,d_{y})\) \\ \hline (Cai, 2022) & \(D^{p}(\mathcal{K},\mathbb{R}^{d_{y}})\) & Leaky-ReLU & \(w_{\min}=\max(d_{x},d_{y},2)\) \\ (Cai, 2022) & \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\) & ReLU + FLOOR & \(w_{\min}=\max(d_{x},d_{y},2)\) \\ \hline Ours (Theorem 2.2) & \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{y}})\) & Leaky-ReLU & \(w_{\min}=\max(d_{x}+1,d_{y})+1_{d_{y}=d_{x}+1}\) \\ Ours (Lemma 2.4) & \(\mathcal{C}(\mathcal{K},\mathbb{R}^{d_{x}})\) & Leaky-ReLU & \(w_{\min}=d_{x}+1\) \\ \hline \hline \end{tabular} \(\dagger\)\(d_{x}\) and \(d_{y}\) are the input and output dimensions, respectively. \(\mathcal{K}\subset\mathbb{R}^{d_{x}}\) is a compact domain and \(p\in[1,\infty)\). \end{table} Table 1: A summary of the known minimum width of feed-forward neural networks for universal approximation. \({}^{\dagger}\)
(Park et al., 2021) also shows that this conclusion does not carry over to the uniform approximation with ReLU networks, but it still holds when using the ReLU+STEP activation functions. (Cai, 2022) shows that the minimum widths for \(C\)-UAP and \(L^{p}\)-UAP on compact domains have a universal lower bound \(w_{\min}=\max(d_{x},d_{y})\). (Cai, 2022) also gives the minimum width for the uniform approximation with some additional threshold activation functions.

**Homeomorphism properties of networks.** Residual networks (ResNets) are an advanced deep learning architecture for supervised learning problems. (Rousseau & Fablet, 2018) shows that a continuous flow of diffeomorphisms governed by ordinary differential equations can be numerically implemented using the mapping component of ResNets. Neural ordinary differential equations (neural ODEs) turn the neural network training problem into a problem of solving differential equations and can make the discrete ResNet continuous. As a deep learning method, (Teshima et al., 2020) shows the universality of discrete neural ODEs under the condition that the source vectors \(f_{i}(z)\in\mathcal{H}\), where \(\mathcal{H}\) is a universal approximator for the Lipschitz functions. (Ruiz-Balet & Zuazua, 2021) provide the \(L^{2}\)-UAP for the neural ODE \(\dot{x}=W\sigma(Ax+b)\). (Zhang et al., 2019) shows that neural ODEs with extra dimensions are universal approximators for homeomorphisms. Invertible neural networks have diffeomorphic properties, and many flow models can also be used as universal approximators. (Huang et al., 2018) shows that neural autoregressive flows are universal approximators for continuous probability distributions. (Teshima et al., 2020) indicates that normalizing flow models based on affine coupling also have UAP. (Kong & Chaudhuri, 2021) shows that residual flows are universal approximators in maximum mean discrepancies.

### Organization

We first define the necessary notation, state the main results, and give the proof ideas in Section 2. In Section 3, we present our _lift-flow-discretization approach_ demonstrating the minimum width to achieve \(C\)-UAP. The detailed proof process is given in Section 4. Considering the influence of the output dimension, the final proof is divided into four parts. In Section 5, we give an outlook on the direction of our current work. All formal proofs are provided in the appendix.

## 2 Main results

We consider the standard feedforward neural network with the same number of neurons at each hidden layer. We say a \(\sigma\)-NN with depth \(L\) is a function with inputs \(x\in\mathbb{R}^{d_{x}}\) and outputs \(y\in\mathbb{R}^{d_{y}}\) of the following form:

\[y=f_{NN,L}(x)=y_{L}=W_{L+1}\sigma\left(W_{L}\left(\cdots\sigma\left(W_{1}x+b_{1}\right)+\cdots\right)+b_{L}\right)+b_{L+1}, \tag{1}\]

where the \(b_{i}\) are vectors, the \(W_{i}\) are matrices and \(\sigma(\cdot)\) is the activation function. We mainly consider the case in which all layers have the same number of neurons \(N\). In this case, \(W_{i}\in\mathbb{R}^{N\times N},b_{i}\in\mathbb{R}^{N},i\in\{1,\cdots,L+1\}\), except that \(W_{1}\in\mathbb{R}^{N\times d_{x}}\), \(W_{L+1}\in\mathbb{R}^{d_{y}\times N}\) and \(b_{L+1}\in\mathbb{R}^{d_{y}}\). We denote the set of all networks of the form in Eq. (1) as \(\mathcal{N}_{N,L}(\sigma)\), and \(\mathcal{N}_{N}(\sigma)=\bigcup_{L}\mathcal{N}_{N,L}(\sigma)\).
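As an illustration of the architecture in Eq. (1), a width-\(N\) \(\sigma\)-NN with the leaky-ReLU activation used throughout this paper can be assembled as below; this is a generic PyTorch sketch, and the dimensions, depth, and slope are arbitrary example values.

```python
import torch
import torch.nn as nn

def sigma_nn(d_x, d_y, width, depth, alpha=0.1):
    """A sigma-NN of the form (1): depth `depth`, width `width`, leaky-ReLU sigma."""
    layers = [nn.Linear(d_x, width), nn.LeakyReLU(alpha)]    # sigma(W_1 x + b_1)
    for _ in range(depth - 1):                               # hidden layers 2..L
        layers += [nn.Linear(width, width), nn.LeakyReLU(alpha)]
    layers += [nn.Linear(width, d_y)]                        # W_{L+1}(.) + b_{L+1}
    return nn.Sequential(*layers)

f = sigma_nn(d_x=3, d_y=2, width=4, depth=8)   # a member of N_{4,8}(sigma)
y = f(torch.randn(16, 3))                      # batch of 16 inputs in R^3
```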
The activation function is crucial for the approximation power of the neural network. Our main results are for the following leaky-ReLU activation function with a fixed parameter \(\alpha\in\mathbb{R}^{+}\setminus\{1\}\),

\[\sigma(x)=\sigma_{\alpha}(x)=\begin{cases}x,&x>0,\\ \alpha x,&x\leq 0.\end{cases} \tag{2}\]

### Main theorem

Our main theorem is the following Theorem 2.2, which provides the exact minimum width of the leaky-ReLU networks that possess the uniform universal approximation property.

**Definition 2.1**.: We say the leaky-ReLU networks with width \(N\) have \(C\)-UAP or \(L^{p}\)-UAP if the set \(\mathcal{N}_{N}(\sigma)\) is dense in \(C(\mathcal{K},\mathbb{R}^{d_{y}})\) or \(L^{p}(\mathcal{K},\mathbb{R}^{d_{y}})\), respectively.

**Theorem 2.2**.: _Let \(\mathcal{K}\subset\mathbb{R}^{d_{x}}\) be a compact set; then, for the continuous function class \(C(\mathcal{K},\mathbb{R}^{d_{y}})\), the minimum width \(w_{\min}\) of leaky-ReLU neural networks having \(C\)-UAP is exactly \(w_{\min}=\max(d_{x}+1,d_{y})+1_{d_{y}=d_{x}+1}\). Thus, \(\mathcal{N}_{N}(\sigma)\) is dense in \(C(\mathcal{K},\mathbb{R}^{d_{y}})\) if and only if \(N\geq w_{\min}\)._

Before giving the proof, let us emphasize the main points of Theorem 2.2. First, if the width \(N\) of the leaky-ReLU networks is smaller than \(w_{\min}\), then there is a continuous function \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d_{y}})\) that cannot be well approximated, _i.e._, there is a positive constant \(\varepsilon>0\) such that \(\|f-f^{*}\|>\varepsilon\) for all \(f\in\mathcal{N}_{N}(\sigma)\). For the case of \(\mathcal{K}=[-1,1]^{d_{x}},d_{y}=1\), the function \(f^{*}\) can be chosen as \(f^{*}(x)=\|x\|^{2}\). The reason will be given in the next section; it is based on the topological properties of the level sets (Johnson, 2019).

Second, if \(N=w_{\min}\), then for any \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d_{y}})\) and any \(\varepsilon>0\), we can construct a leaky-ReLU network \(f_{L}\) with width \(N\) and depth \(L\) such that \(\|f_{L}-f^{*}\|<\varepsilon\). We will introduce the construction scheme later.

Lastly, the formula for \(w_{\min}\) includes a characteristic function \(1_{d_{y}=d_{x}+1}\),

\[1_{d_{y}=d_{x}+1}=\begin{cases}1,&d_{y}=d_{x}+1,\\ 0,&d_{y}\neq d_{x}+1,\end{cases}\]

which indicates that there is an obstacle of dimension. In fact, this is caused by the topology of the manifolds.

### Proof ideas

Now, we provide the proof scheme, while the details will be given in the next section. As illustrated in Figure 1, the result of Theorem 2.2 is split into four parts: Part 1 and Part 2 give a lower bound and an upper bound for the general dimensions, Part 3 considers the exceptional case of \(d_{y}=d_{x}+1\), and Part 4 considers the case of \(d_{y}\geq d_{x}+2\).

Part 1 is based on the following lemma, which results in a lower bound for leaky-ReLU networks of \(w_{\min}\geq\max(d_{x}+1,d_{y})\).

**Lemma 2.3**.: _For any compact domain \(\mathcal{K}\subset\mathbb{R}^{d_{x}}\), the leaky-ReLU networks with width \(N<\max(d_{x}+1,d_{y})\) do not have UAP for \(C(\mathcal{K},\mathbb{R}^{d_{y}})\), i.e., \(\mathcal{N}_{N}(\sigma)\) is not dense in \(C(\mathcal{K},\mathbb{R}^{d_{y}})\)._

The proof is based on the result of (Cai, 2022), which shows a universal lower bound \(w_{\min}\geq\max(d_{x},d_{y})\) for arbitrary activations, and on (Johnson, 2019), which shows that \(w_{\min}\geq d_{x}+1\) for monotone and continuous activations such as leaky-ReLU.

Part 2 is based on the following lemma, which considers the case of \(d_{x}=d_{y}=d\).
If \(d_{x}\) and \(d_{y}\) are not the same, we can lift them to dimension \(d=\max(d_{x},d_{y})\) by filling in zeros for the auxiliary dimensions.

**Lemma 2.4**.: _For any continuous function \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d})\) on a compact domain \(\mathcal{K}\subset\mathbb{R}^{d}\), and any \(\varepsilon>0\), there is a leaky-ReLU network \(f_{L}(x)\) with depth \(L\) and width \(d+1\) such that \(\|f_{L}(x)-f^{*}(x)\|\leq\varepsilon\) for all \(x\) in \(\mathcal{K}\)._

Lemma 2.4 is our main result for Part 2, which shows that a leaky-ReLU neural network with width \(d+1\) has enough expressive power to approximate continuous functions \(f^{*}\) with \(d_{x}=d_{y}=d\). The proof of Lemma 2.4 will be given in Section 3, as it is based on our lift-flow-discretization approach presented there.

The gap between the lower bound \(w_{\min}\geq\max(d_{x}+1,d_{y})\) and the upper bound \(w_{\min}\leq\max(d_{x}+1,d_{y}+1)\) is at most one. When \(d_{y}\leq d_{x}\), it directly implies that \(w_{\min}=\max(d_{x}+1,d_{y})\). Then, we consider the case of \(d_{y}\geq d_{x}+1\). In this case, the lower and upper bounds read \(d_{y}\leq w_{\min}\leq d_{y}+1\), and the question is whether width \(N=d_{y}\) is sufficient for \(C\)-UAP. Part 3 and Part 4 of our main result answer the question by showing that width \(d_{y}\) is enough for the case of \(d_{y}\geq d_{x}+2\) but not for the case of \(d_{y}=d_{x}+1\).

The two cases are heavily related to topology theory. Here, we give a short example to show this phenomenon. Let \(f^{*}\in C([0,1],\mathbb{R}^{2})\) be a parameterized curve shaped like '4', which has a self-intersection point. Then, \(f^{*}\) cannot be approximated by a curve homeomorphic to a line segment. However, if \(f^{*}\in C([0,1],\mathbb{R}^{3})\), the approximation is possible according to our lift-flow-discretization approach. We will show this example in detail in Section 4.

## 3 Lift-flow-discretization approach

Before presenting our proofs of Part 3 and Part 4 (see Figure 1), we first describe our key tool, which we call the lift-flow-discretization approach. We reformulate network (1) as follows:

\[f_{L}(x)=W_{L+1}\Phi_{L}(W_{1}x+b_{1})+b_{L+1}, \tag{3}\]

where \(\Phi_{L}\) is a map from \(\mathbb{R}^{N}\) to \(\mathbb{R}^{N}\), and \(W_{1}x+b_{1}\) and \(W_{L+1}\Phi+b_{L+1}\) are linear maps. Since the leaky-ReLU activation is a homeomorphism and the weight matrices in (1) can be assumed to be nonsingular, the map \(\Phi_{L}\) is a homeomorphism.

Figure 1: Minimum width of leaky-ReLU networks for universal approximation. (a) Example of a function from \(\mathcal{K}\subset\mathbb{R}^{d_{x}}\) to \(\mathbb{R}^{d_{y}}\). (b) Feedforward neural networks with depth \(L\) and width \(N\). (c) The minimum width of leaky-ReLU networks to reach UAP. (d) Proof parts of the main result.

Motivated by the recent work of (Duan et al., 2022), which shows that leaky-ReLU networks can approximate flow maps, we propose an approach to approximate functions \(f^{*}\) in \(C(\mathcal{K},\mathbb{R}^{d_{y}})\) by lifting them to a diffeomorphism \(\Phi\), which we then approximate by flow maps and neural networks. For any function \(f^{*}\) in \(C(\mathcal{K},\mathbb{R}^{d_{y}})\) and any \(\varepsilon>0\), our lift-flow-discretization approach includes three parts:
1. **(Lift)** A lift map \(\Phi\in C(\mathbb{R}^{N},\mathbb{R}^{N})\), which is an orientation preserving (OP) diffeomorphism such that \[\|f^{*}(x)-\beta\circ\Phi\circ\alpha(x)\|\leq\varepsilon/3,\quad\forall x\in\mathcal{K},\] (4) where \(\alpha\) and \(\beta\) are two linear maps. Without loss of generality, we can assume that the Lipschitz constants of \(\alpha\) and \(\beta\) are less than one. With this notation, we say the map \(\Phi\) is a lift of \(f^{*}\).

2. **(Flow)** A flow map \(\phi^{\tau}\in C(\mathbb{R}^{N},\mathbb{R}^{N})\) corresponding to a neural ODE \[z^{\prime}(t)=v(z(t),t),\ t\in(0,\tau),\quad z(0)=x,\] (5) which satisfies \(\|\Phi(x)-\phi^{\tau}(x)\|\leq\varepsilon/3\) for all \(x\) in \(\alpha(\mathcal{K})\).

3. **(Discretization)** A discretization map \(\psi\in C(\mathbb{R}^{N},\mathbb{R}^{N})\), which is a leaky-ReLU network in \(\mathcal{N}_{N}(\sigma)\) that approximates \(\phi^{\tau}\) such that \(\|\psi(x)-\phi^{\tau}(x)\|\leq\varepsilon/3\) for all \(x\) in \(\alpha(\mathcal{K})\).

As a result, the composition \(\beta\circ\psi\circ\alpha=:f_{L}\) is a leaky-ReLU network with width \(N\), which approximates the target function \(f^{*}\) such that

\[\|f^{*}(x)-\beta\circ\psi\circ\alpha(x)\|\leq\varepsilon,\quad\forall x\in\mathcal{K}. \tag{6}\]

### Theory of the lift-flow-discretization approach

Note that the existence of \(\phi^{\tau}\) and \(\psi\) is guaranteed by the following lemmas, which are based on the results of (Caponigro, 2011) and (Duan et al., 2022). The lift map \(\Phi\) remains to be constructed, which will be done case by case.

**Lemma 3.1**.: _Let \(\Phi\) be an orientation preserving diffeomorphism of \(\mathbb{R}^{N}\), \(\Omega\) be a compact set in \(\mathbb{R}^{N}\) and \(\varepsilon>0\). Then, there is an ODE with tanh neural fields, whose flow map is denoted by \(\phi^{\tau}(x_{0})=x(\tau)\),_

\[\dot{x}(t)=v(x(t),t)\equiv\sum_{i=1}^{M}a_{i}(t)\tanh(w_{i}(t)\cdot x(t)+b_{i}(t)),\ t\in[0,\tau],\quad x(0)=x_{0}\in\mathbb{R}^{N},\quad M\in\mathbb{Z}^{+}, \tag{7}\]

_where \(a_{i},w_{i}\in\mathbb{R}^{N}\) and \(b_{i}\in\mathbb{R}\) are piecewise constant functions of \(t\), such that \(\|\phi^{\tau}(x_{0})-\Phi(x_{0})\|<\varepsilon\) for all \(x_{0}\) in \(\Omega\)._

Lemma 3.1 ensures Step 2 ('flow') of our lift-flow-discretization approach, where we use the flow map of a neural ODE to approximate a given orientation preserving diffeomorphism. The formal proof of the lemma can be found in the appendix. Here, we provide the main idea of the proof. First, we refer to (Caponigro, 2011) to prove that for any \(\varepsilon>0\), there exists a flow map at the endpoint of time, \(\tilde{\phi}^{\tau}(x)\), of an ODE such that \(\|\tilde{\phi}^{\tau}(x)-\Phi(x)\|<\varepsilon/2\) for all \(x\in\alpha(\mathcal{K})\); then we use the neural ODE (7) to approximate \(\tilde{\phi}^{\tau}(x)\): there exist \((a,w,b)\) such that the flow map (denoted as \(\phi^{\tau}(x)\)) of Eq. (7) satisfies \(\|\phi^{\tau}(x)-\tilde{\phi}^{\tau}(x)\|<\varepsilon/2\), and thus \(\|\phi^{\tau}(x)-\Phi(x)\|<\varepsilon\).
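The flow map in Lemma 3.1 can be simulated numerically; the sketch below integrates (7) with a forward-Euler scheme and piecewise-constant parameters \((a_i, w_i, b_i)\), all randomly chosen here purely for illustration.

```python
import numpy as np

def neural_ode_flow(x0, params, dt):
    """Forward-Euler approximation of the flow map of (7).

    `params` is a list of piecewise-constant fields, one entry per time step:
    each entry is (a, w, b) with a, w of shape (M, N) and b of shape (M,).
    """
    x = np.asarray(x0, dtype=float)
    for a, w, b in params:                  # piecewise-constant in t
        v = a.T @ np.tanh(w @ x + b)        # v(x,t) = sum_i a_i tanh(w_i . x + b_i)
        x = x + dt * v                      # Euler step
    return x

# Toy flow in R^2 with M = 3 tanh fields per step, integrated over tau = 1.
rng = np.random.default_rng(1)
params = [(rng.normal(size=(3, 2)), rng.normal(size=(3, 2)), rng.normal(size=3))
          for _ in range(100)]
z = neural_ode_flow(np.array([0.5, -0.2]), params, dt=0.01)
```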
**Lemma 3.2**.: _Let \(\phi^{\tau}\in C(\mathbb{R}^{N},\mathbb{R}^{N})\) be the flow map in Lemma 3.1, \(\Omega\) be a compact set in \(\mathbb{R}^{N}\) and \(\varepsilon>0\). Then, there is a leaky-ReLU network \(\psi\in\mathcal{N}_{N}(\sigma)\) with width \(N\) and depth \(L\) such that \(\|\phi^{\tau}(x_{0})-\psi(x_{0})\|<\varepsilon\) for all \(x_{0}\) in \(\Omega\)._ This lemma ensures Step 3 (discretization) of our lift-flow-discretization approach, where we find a neural network to approximate \(\phi^{\tau}\) in Step 2. The formal proof, motivated by (Duan et al., 2022), can be seen in the appendix. The main idea is to solve the ODE (7) by a splitting method and then approximate each split step by leaky-ReLU networks. Consider the following splitting for \(v\) in (7), \(v(x,t)\equiv\sum_{i=1}^{N}\sum_{j=1}^{d}v_{ij}(x,t)e_{j}\) with \(v_{ij}(x,t)=a_{i}^{(j)}(t)\tanh(w_{i}(t)\cdot x+b_{i}(t))\in\mathbb{R}\). Then, the flow map can be approximated by an iteration with time step \(\Delta t=\tau/n\), \(n\in\mathbb{Z}^{+}\) large enough, \[\phi^{\tau}(x_{0})\approx x_{n} =T_{n}(x_{n-1})=T_{n}\circ\cdots\circ T_{1}(x_{0}),\] \[=T_{n}^{(N,d)}\circ\cdots\circ T_{n}^{(1,2)}\circ T_{n}^{(1,1)} \circ\cdots\circ\] \[T_{1}^{(N,d)}\circ\cdots\circ T_{1}^{(1,2)}\circ T_{1}^{(1,1)}(x_ {0}),\] where the \(k\)-th iteration is \(x_{k}=T_{k}x_{k-1}=T_{k}^{(N,d)}\circ\cdots\circ T_{k}^{(1,2)}\circ T_{k}^{(1,1)}(x_{k-1})\). The map \(T_{k}^{(i,j)}:x\to y\) in each split step is: \[T_{k}^{(i,j)}:\begin{cases}y^{(l)}=x^{(l)},l=1,2,..,j-1,j+1,...,d,\\ y^{(j)}=x^{(j)}+\Delta tv_{ij}(x,k\Delta t).\end{cases}\] Combining all the approximation networks, we have \(\|\phi^{\tau}(x_{0})-\psi(x_{0})\|<\varepsilon\) for all \(x_{0}\in\Omega\).
Figure 2: Sketch of the lift-flow-discretization approach. The target map \(\Phi(x)\) is approximated by a flow map \(\tilde{\phi}^{\tau}(x)\) of an ODE, which is further approximated by a flow map \(\phi^{\tau}(x)\) of a neural ODE (7).
Given the above lemmas, once the lift map \(\Phi\) in Step 1 (lift) is constructed, we obtain the following corollary. **Corollary 3.3**.: _Let \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d})(\mathcal{K}\subset\mathbb{R}^{d})\) and \(N\in\mathbb{Z}^{+}\). If for any \(\varepsilon>0\), there is an orientation preserving diffeomorphism \(\Phi\) of \(\mathbb{R}^{N}\) and two linear maps \(\alpha\) and \(\beta\) such that \(\|f^{*}(x)-\beta\circ\Phi\circ\alpha(x)\|<\varepsilon\) for all \(x\in\mathcal{K}\), then there is a leaky-ReLU network \(f_{L}\in\mathcal{N}_{N}(\sigma)\) with width \(N\) and depth \(L\) such that \(\|f_{L}(x)-f^{*}(x)\|<\varepsilon\) for all \(x\in\mathcal{K}\)._ This corollary shows the expressive power of the leaky-ReLU neural network, and it is the key to our main results for Part 3 and Part 4. In Section 4, we will show that its assumption holds for Part 4 (\(d_{y}\geq d_{x}+2\)) while failing for Part 3 (\(d_{y}=d_{x}+1\)). ### Proof idea of Lemma 2.4 We can prove Lemma 2.4 by designing a proper lift map. According to the lift-flow-discretization approach, we only need to construct two linear maps and an orientation preserving diffeomorphism with \(N=d+1\), which satisfies the condition of Corollary 3.3. For any continuous function \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d})\) on compact domain \(\mathcal{K}\subset\mathbb{R}^{d}\), and \(\varepsilon>0\), there is a locally Lipschitz smooth function \(p\) such that \(\|f^{*}(x)-p(x)\|<\varepsilon\) for all \(x\in\mathcal{K}\). The function \(p\) can be chosen as a polynomial according to the well-known Stone-Weierstrass theorem.
Then, the maps \(\alpha:\mathbb{R}^{d}\to\mathbb{R}^{d+1},\beta:\mathbb{R}^{d+1}\to\mathbb{R}^ {d}\) and \(\Phi:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}\) can be chosen as follows: \[\alpha(x)=\begin{pmatrix}I_{d}\\ \mathbf{1}^{T}\end{pmatrix}x,\quad\Phi\begin{pmatrix}x\\ x_{d+1}\end{pmatrix}=\begin{pmatrix}p(x)+\kappa x_{d+1}\mathbf{1}\\ x_{d+1}\end{pmatrix}, \tag{8}\] \[\beta\begin{pmatrix}x\\ x_{d+1}\end{pmatrix}=(I,-\kappa\mathbf{1})\begin{pmatrix}x\\ x_{d+1}\end{pmatrix}=x-\kappa\mathbf{1}x_{d+1}, \tag{9}\] where \(\mathbf{1}\in\mathbb{R}^{d}\) is a column vector with all elements being one, \(\kappa\) is a number larger than the Lipschitz constant of \(p\) on \(\mathcal{K}\), and \(x_{d+1}\) is the coordinate of the auxiliary dimension. It is obvious that the maps \(\alpha\) and \(\beta\) are linear and \(p(x)=\beta\circ\Phi\circ\alpha(x)\). Our proof is constructive, and the formal proof is in the appendix. Our 'lift-flow-discretization' approach deeply connects the minimal width to topology theory, provided that the activation is a one-dimensional diffeomorphism.
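As a numerical sanity check of this construction, the following sketch verifies the identity \(p=\beta\circ\Phi\circ\alpha\) for the maps in Eqs. (8)-(9), with the first component of \(\Phi\) read as \(p(x)+\kappa x_{d+1}\mathbf{1}\). The toy function \(p\) and the value of \(\kappa\) are illustrative assumptions, and only the algebraic identity is checked here, not the diffeomorphism property.

```python
import numpy as np

d = 3
kappa = 10.0                      # assumed larger than Lip(p) on the domain
p = lambda x: np.array([x[0]**2, x[1] * x[2], np.sin(x[0])])   # toy smooth p

ones = np.ones(d)
alpha = lambda x: np.append(x, ones @ x)             # Eq. (8): x -> (x, 1^T x)
Phi   = lambda y: np.append(p(y[:d]) + kappa * y[d] * ones, y[d])
beta  = lambda y: y[:d] - kappa * ones * y[d]        # Eq. (9)

x = np.random.default_rng(1).normal(size=d)
assert np.allclose(beta(Phi(alpha(x))), p(x))        # p = beta . Phi . alpha
print("identity verified")
```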
## 4 Effect of the output dimension Now we turn to Part 3 and Part 4 of the main results, which consider the case of \(d_{y}\geq d_{x}+1\). We examine the approximation power of leaky-ReLU networks with width \(N=d_{y}\). We emphasize the homeomorphism properties. In fact, leaky-ReLU, nonsingular linear transformations and their inverses are continuous and homeomorphic. Since compositions of homeomorphisms are also homeomorphisms, the input-output map is a homeomorphism. Note that a singular matrix can be approximated by nonsingular matrices; therefore, we can restrict the weight matrices in neural networks to be nonsingular. When \(d_{y}>d_{x}\), we can reformulate the leaky-ReLU network with width \(N=d_{y}\) as \(f_{L}(x)=\psi(W_{1}x+b_{1}),W_{1}\in\mathbb{R}^{d_{y}\times d_{x}},b_{1}\in\mathbb{R}^{d_{y}}\), where \(\psi(\cdot)\) is a homeomorphism in dimension \(d_{y}\). ### The particular dimension \(d_{y}=d_{x}+1\) The following lemma shows that width \(N=d_{y}\) is not enough for \(C\)-UAP, which implies that \(w_{\min}\geq d_{y}+1\) when \(d_{y}=d_{x}+1\). **Lemma 4.1**.: _If \(d_{y}=d_{x}+1\), there exists a continuous function \(f^{*}(x)\in C(\mathcal{K},\mathbb{R}^{d_{y}})\) which can NOT be uniformly approximated by functions like \(\psi(W_{1}x+b_{1})\) with homeomorphism maps \(\psi:\mathbb{R}^{d_{y}}\to\mathbb{R}^{d_{y}}\)._ The counterexample (Figure 4(a)) we constructed to prove this lemma seems very intuitive. In the counterexample, we need to prove that, given the target function \(g(t):\mathbb{R}\to\mathbb{R}^{2}\) and a sufficiently small \(\varepsilon>0\), any \(h\) satisfying \(\|g(t)-h(t)\|<\varepsilon\) for all \(t\in[0,1]\) has a self-intersection point. For this, we just need to prove that any curve from \((0,-1)\) to \((0,1)\) and any curve from \((-1,0)\) to \((1,0)\) within \([-1,1]\times[-1,1]\) have at least one intersection. This conclusion is so intuitive that even non-mathematicians can see at a glance that the two curves must intersect, yet the proof is somewhat involved because it requires knowledge of topology. Interested readers can refer to the proof in the appendix. ### The case of \(d_{y}\geq d_{x}+2\) Note that we only need to consider the case of \(d_{y}=d_{x}+2\). The reason is that when \(d_{y}>d_{x}+2\), we can increase \(d_{x}\) by adding some auxiliary dimensions to the input. Then, employing the lift-flow-discretization approach, we only need to show that any \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d_{x}+2})\) can be approximated by functions formulated as \(\psi(Wx)\), where \(\psi(\cdot)\) is an orientation preserving diffeomorphism in dimension \(d_{x}+2\). **Lemma 4.2**.: _For any \(f^{*}\in C(\mathcal{K},\mathbb{R}^{d_{x}+2})\) and \(\varepsilon>0\), there is a matrix \(W\in\mathbb{R}^{(d_{x}+2)\times d_{x}}\) and an OP diffeomorphism map \(\Phi\) such that \(\|\Phi(Wx)-f^{*}(x)\|<\varepsilon\) for all \(x\) in \(\mathcal{K}\)._ Here, we provide the main ideas. Let \(f^{*}:I^{d}:=[0,1]^{d}\rightarrow\mathbb{R}^{d+2}\) be a continuous map. Without loss of generality, we may assume that \(f^{*}\) is locally a smooth homeomorphism whose self-intersections are all transversal. Denote the set of self-intersection points of \(f^{*}\) as \(\mathcal{A}\). We can prove that \(\mathcal{A}\) is a compact closed subset. For every \(x\in\mathcal{A}\), there is an open neighborhood \(U\) of \(x\); then, we can perturb the map inside \(U\) to remove the intersection without creating new ones. Thus, we obtain a smooth approximation \(f\) of \(f^{*}\) without intersections, which is the desired diffeomorphism approximation. Because \(f\) is an embedding of \(I^{d}\) into \(\mathbb{R}^{d+2}\), it can be written as the composition of a linear map and a diffeomorphism, that is, \(f(x)=\Phi(Wx),W\in\mathbb{R}^{(d_{x}+2)\times d_{x}}\). According to the lift and flow steps, there exists a flow map \(\phi^{\tau}\in\mathcal{C}(\mathbb{R}^{d_{x}+2},\mathbb{R}^{d_{x}+2})\) satisfying \(\|\phi^{\tau}(x)-f(x)\|<\varepsilon/2\). According to the discretization step, there exists a leaky-ReLU network \(\psi\in\mathcal{C}(\mathbb{R}^{d_{x}+2},\mathbb{R}^{d_{x}+2})\) that satisfies \(\|\psi(x)-\phi^{\tau}(x)\|<\varepsilon/2\). By employing the lift-flow-discretization approach, we can arrive at the desired result. To understand the result, we show an example of the case of \(d_{y}\geq d_{x}+2\). We have shown that a continuous function \(f^{*}\) from \([0,1]^{d_{x}}\) to \(\mathbb{R}^{d_{x}+2}\) can be uniformly approximated by a diffeomorphism; as an example, we take a '4'-shape curve corresponding to a continuous function \(f^{*}(t)\) from \([0,1]\) to \(\mathbb{R}^{3}\). As in Figure 3, we place the four vertices of the '4'-shape curve at \((-1,0,0),(1,0,0),(0,-1,0),(0,1,0)\), and connect them in turn to form a polyline, which is our target function \(f^{*}(t),t\in[0,1]\). Then, we construct a curve without self-intersection points by changing one of the \(z\)-axis coordinates to \(\varepsilon\) (such as \(\varepsilon=0.1\)): the approximation \(\tilde{f}\) is the curve connecting the four vertices \((-1,0,\varepsilon),(1,0,0),(0,-1,0),(0,1,0)\) in sequence, shown in Figure 3(b). Now the approximation \(\tilde{f}(t)\) is a curve without self-intersecting points, corresponding to a homeomorphic mapping \(\Phi\) in \(\mathbb{R}^{3}\) with \(\tilde{f}(t)=\Phi(wt)\) for some \(w\in\mathbb{R}^{3}\). Employing the flow and discretization steps, we can approximate \(\Phi\) by leaky-ReLU networks. Consequently, we can conclude that the '4'-shape curve in Figure 3 can be approximated by leaky-ReLU networks.
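This lifting can be checked numerically. A minimal sketch, sampling points along the planar '4'-shape polyline and its lifted copy, confirms that the gap between non-adjacent segments is (nearly) zero in the plane but strictly positive after lifting; the sampling density is an arbitrary choice.

```python
import numpy as np

def sample_segments(vertices, n=400):
    """Densely sample each segment of the polyline through `vertices`."""
    ts = np.linspace(0.0, 1.0, n)
    return [np.outer(1 - ts, p0) + np.outer(ts, p1)
            for p0, p1 in zip(vertices[:-1], vertices[1:])]

def min_gap(segments):
    """Smallest distance between sampled points on non-adjacent segments."""
    gap = np.inf
    for i in range(len(segments)):
        for j in range(i + 2, len(segments)):   # skip segments sharing a vertex
            d = np.linalg.norm(segments[i][:, None] - segments[j][None], axis=-1)
            gap = min(gap, d.min())
    return gap

eps = 0.1
flat   = [np.array(v, float) for v in [(-1,0,0), (1,0,0), (0,-1,0), (0,1,0)]]
lifted = [np.array(v, float) for v in [(-1,0,eps), (1,0,0), (0,-1,0), (0,1,0)]]

print(min_gap(sample_segments(flat)))    # ~0: the planar '4' self-intersects
print(min_gap(sample_segments(lifted)))  # ~eps/2 > 0: the lifted curve is embedded
```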
## 5 Discussion **General Activation**. It should be noted that our 'lift-flow-discretization' approach is generic for strictly monotone continuous activations. For example, our results are valid for strictly monotone piecewise linear activations. We focus on leaky-ReLU networks mainly because 1) they are the simplest setting in which to demonstrate our concept, and 2) the results of (Caponigro, 2011) and (Duan et al., 2022) allow us to finish the 'flow' and 'discretization' steps easily. We also note that our result may not hold for ReLU networks, as the ReLU function is not invertible. ReLU networks can be regarded as the limits of leaky-ReLU networks with parameter \(\alpha\) tending to \(0\). However, in our construction, some weights of the network are \(O(1/\alpha)\), which tend to \(\infty\) as \(\alpha\to 0\). This suggests that narrow ReLU and leaky-ReLU networks are different. How to revisit these issues for ReLU networks, and how to exhibit the differences between the two settings more clearly, may be a very interesting topic. \(L^{p}\)**-UAP and \(C\)-UAP**. The leaky-ReLU activation has been studied by (Duan et al., 2022) and (Cai, 2022) to connect neural ODEs, flow maps, and the minimum width of neural networks. However, the previous results are for the UAP under the \(L^{p}\) norm, which simplifies the analysis because diffeomorphisms can approximate maps in the \(L^{p}\) sense (Brenier and Gangbo, 2003). Our 'lift-flow-discretization' approach deeply connects the minimal width to topology theory: it properly lifts the target function to higher dimensions and employs facts from topology theory to obtain sharp bounds on the width for the uniform/\(C\)-UAP. **Approximation rate**. Determining the number of weights or layers needed to achieve an \(\varepsilon\) approximation error is related to the approximation rate or error-bound problems. Estimating the error bound of all three steps in our lift-flow-discretization approach is challenging since the error in the 'flow' step is hard to estimate. This may require establishing new construction tools. We leave it as future work. ## Acknowledgments We are grateful to Mr. Ziyi Lei for the helpful discussion on the proof of Lemma 4.2. We are also grateful to the anonymous reviewers for their useful comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 12201053) and the National Natural Science Foundation of China (Grant No. 11871105). Figure 3: Example of \(d_{x}=1\). Approximate the ‘4’-shape curve (a) in \(\mathbb{R}^{2}\) by lifting it to the three-dimensional curve (b) in \(\mathbb{R}^{3}\).
2304.01230
SEENN: Towards Temporal Spiking Early-Exit Neural Networks
Spiking Neural Networks (SNNs) have recently become more popular as a biologically plausible substitute for traditional Artificial Neural Networks (ANNs). SNNs are cost-efficient and deployment-friendly because they process input in both a spatial and a temporal manner using binary spikes. However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff. In this work, we study a fine-grained adjustment of the number of timesteps in SNNs. Specifically, we treat the number of timesteps as a variable conditioned on different input samples to reduce redundant timesteps for certain data. We call our method Spiking Early-Exit Neural Networks (SEENNs). To determine the appropriate number of timesteps, we propose SEENN-I, which uses confidence score thresholding to filter out uncertain predictions, and SEENN-II, which determines the number of timesteps by reinforcement learning. Moreover, we demonstrate that SEENN is compatible with both directly trained SNNs and ANN-SNN conversion. By dynamically adjusting the number of timesteps, our SEENN achieves a remarkable reduction in the average number of timesteps during inference. For example, our SEENN-II ResNet-19 can achieve 96.1% accuracy with an average of 1.08 timesteps on the CIFAR-10 test dataset. Code is shared at https://github.com/Intelligent-Computing-Lab-Yale/SEENN.
Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda
2023-04-02T15:57:09Z
http://arxiv.org/abs/2304.01230v2
# SEENN: Towards Temporal Spiking Early-Exit Neural Networks ###### Abstract Spiking Neural Networks (SNNs) have recently become more popular as a biologically plausible substitute for traditional Artificial Neural Networks (ANNs). SNNs are cost-efficient and deployment-friendly because they process input in both spatial and temporal manners using binary spikes. However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff. In this work, we study a fine-grained adjustment of the number of timesteps in SNNs. Specifically, we treat the number of timesteps as a variable conditioned on different input samples to reduce redundant timesteps for certain data. We call our method **S**piking **E**arly-**E**xit **N**eural **N**etworks (**SEENNs**). To determine the appropriate number of timesteps, we propose SEENN-I which uses a confidence score thresholding to filter out the uncertain predictions, and SEENN-II which determines the number of timesteps by reinforcement learning. Moreover, we demonstrate that SEENN is compatible with both the directly trained SNN and the ANN-SNN conversion. By dynamically adjusting the number of timesteps, our SEENN achieves a remarkable reduction in the average number of timesteps during inference. For example, our SEENN-II ResNet-19 can achieve **96.1**% accuracy with an average of **1.08** timesteps on the CIFAR-10 test dataset. ## 1 Introduction Deep learning has revolutionized a range of computational tasks such as computer vision and natural language processing (LeCun et al., 2015) using Artificial Neural Networks (ANNs). These successes, however, have come at the cost of tremendous computational demands and high latency (Han et al., 2015). In recent years, Spiking Neural Networks (SNNs) have gained traction as an energy-efficient alternative to ANNs (Roy et al., 2019; Tavanaei et al., 2019). SNNs infer inputs across a number of timesteps, as opposed to ANNs, which infer over what is essentially a single timestep. Moreover, during each timestep, a neuron in an SNN either fires a spike or remains silent, making the output of the SNN neuron binary and sparse. Such spike-based computing replaces multiplications with additions. In the field of SNN research, there are two main approaches to obtaining an SNN: (1) directly training SNNs from scratch and (2) converting ANNs to SNNs. Direct training seeks to optimize an SNN using methods such as spike timing-based plasticity (Iyer and Chua, 2020) or surrogate gradient-based optimization (Wu et al., 2018; Shrestha and Orchard, 2018). In contrast, the ANN-SNN conversion approach (Diehl et al., 2015; Rueckauer et al., 2016; Sengupta et al., 2018; Han and Roy, 2020; Deng and Gu, 2021; Li et al., 2021) uses the feature representation of a pre-trained ANN and aims to replicate it in the corresponding SNN. Both methods have the potential to achieve high-performance SNNs when implemented correctly. Despite the different approaches, both training-based and conversion-based SNNs are limited by binary activations. As a result, the key factor that affects their information processing capacity is the **number of timesteps**. Expanding the number of timesteps enables SNNs to capture more features in the temporal dimension, which can improve their accuracy in conversion and training.
However, a larger number of timesteps increases latency and computational requirements and lowers the acceleration ratio, resulting in a tradeoff between accuracy and time. Therefore, current efforts to enhance SNN accuracy often involve finding ways to achieve it within the same number of timesteps. In this paper, we propose a novel approach to improve the tradeoff between accuracy and time in SNNs. Specifically, our method allows each input sample to have a varying number of timesteps during inference, increasing the number of timesteps only when the current sample is hard to classify, resulting in an early exit in the time dimension. We refer to this approach as **S**piking **E**arly-**E**xit **N**eural **N**etworks (**SEENNs**). To determine the optimal number of timesteps for each sample, we propose two methods: SEENN-I, which uses confidence score thresholding to output a confident prediction as fast as possible; and SEENN-II, which employs reinforcement learning to find the optimal policy for the number of timesteps. Our results show that SEENNs can be applied to both conversion-based and direct training-based approaches, achieving a new state-of-the-art performance for SNNs. In summary, our contributions are threefold: 1. We introduce a new direction to optimize SNN performance by treating the number of timesteps as a variable conditioned on input samples. 2. We propose Spiking Early-Exit Neural Networks (SEENNs) and use two methods to determine which timestep to exit at: confidence score thresholding and reinforcement learning optimized with policy gradients. 3. We evaluate our SEENNs on both conversion-based and training-based models with large-scale datasets like CIFAR and ImageNet. For example, our SEENNs can use \(\sim\)**1.1** timesteps to achieve similar performance to a 6-timestep model on the CIFAR-10 dataset. ## 2 Related Work ### Spiking Neural Networks #### 2.1.1 ANN-SNN Conversion Converting ANNs to SNNs utilizes the knowledge from pre-trained ANNs and replaces the ReLU activation in ANNs with a spike activation in SNNs (Rueckauer et al., 2016, 2017; Sengupta et al., 2019; Han and Roy, 2020). The conversion-based method, therefore, seeks to match the features in two different models. For example, Rueckauer et al. (2016, 2017) study how to select the firing threshold to cover all the features in an ANN. (Han et al., 2020) studies using a smaller threshold, and (Deng and Gu, 2021) proposes to use a bias shift to better match the activation. Based on the error analysis, (Li et al., 2021) utilizes a parameter calibration technique and (Bu et al., 2022) further changes the training scheme in the ANN. #### 2.1.2 Direct Training of SNN Direct training from scratch allows SNNs to operate within extremely few timesteps. In recent years, the number of timesteps used to train SNNs has been reduced from more than 100 (Rathi et al., 2019) to less than 5 (Zheng et al., 2020). The major success is based on spatial-temporal backpropagation (Wu et al., 2018; 2019) and surrogate gradient estimation of the firing function (Dong et al., 2017). Through gradient-based learning, recent works (Fang et al., 2021; Rathi and Roy, 2021; Kim and Panda, 2021) propose to optimize not only the parameters but also the firing threshold and leaky factor.
Moreover, the loss function (Deng et al., 2022), surrogate gradient estimation (Li et al., 2021), batch normalization (Duan et al., 2022), and activation distribution (Guo et al., 2022) are also factors that affect the learning behavior in direct training and have been investigated thoroughly. Our method, instead, focuses on the time dimension, which is fully complementary to both conversion and training. ### Conditional Computing Conditional computing models can boost the representation power by adapting the model architectures, parameters, or activations to different input samples (Han et al., 2021). BranchyNet (Teerapittayanon et al., 2016) and Conditional Deep Learning (Panda et al., 2016) add multiple classifiers in different layers to apply spatial early exit to ANNs. SkipNet (Wang et al., 2018) and BlockDrop (Wu et al., 2018) use a dynamic computation graph by skipping different blocks based on different samples. CondConv (Yang et al., 2019) and Dynamic Convolution (Han et al., 2021) use the attention mechanism to change the weight in convolutional layers according to input features. Squeeze-and-Excitation Networks (Hu et al., 2018) proposes to reweight different activation channels based on the global context of the input samples. To the best of our knowledge, our work is the first to incorporate conditional computing into SNNs. ## 3 Methodology In this section, we describe our overall methodology and algorithm details. We start by introducing background on fixed-timestep spiking neural networks and then, based on a strong rationale, we introduce our SEENN method. ### Spiking Neural Networks SNNs simulate biological neurons with Leaky Integrate-and-Fire (LIF) layers (Burkitt, 2006). In each timestep, the input current in the \(\ell\)-th layer charges the membrane potential \(\mathbf{u}\) in the LIF neurons. When the membrane potential exceeds a threshold, a spike \(\mathbf{s}\) will fire to the next layer, as given by: \[\mathbf{u}^{\ell}[t+1] =\tau\mathbf{u}^{\ell}[t]+\mathbf{W}^{\ell}\mathbf{s}^{\ell-1}[t], \tag{1}\] \[\mathbf{s}^{\ell}[t+1] =H(\mathbf{u}^{\ell}[t+1]-V), \tag{2}\] where \(\tau\in(0,1]\) is the leaky factor, mimicking natural potential decay. \(H(\cdot)\) is the Heaviside step function and \(V\) is the firing threshold. If a spike fires, the membrane potential will be reset to 0: \((\mathbf{u}[t+1]=\mathbf{u}[t+1]*(1-\mathbf{s}[t+1]))\). Following existing baselines, in direct training, we use \(\tau=0.5\), while in conversion we use \(\tau=1.0\), transforming to the Integrate-and-Fire (IF) model. Moreover, the reset in conversion is done by subtraction: \((\mathbf{u}[t+1]=\mathbf{u}[t+1]-V\mathbf{s}[t+1])\), as suggested by (Han and Roy, 2020). Now, denoting the overall spiking neural network as a function \(f_{T}(\mathbf{x})\), its forward propagation can be formulated as \[f_{T}(\mathbf{x})=\frac{1}{T}\sum_{t=1}^{T}h\circ g^{L}\circ g^{L-1}\circ g^{L -2}\circ\cdots g^{1}(\mathbf{x}), \tag{3}\] where \(h(\cdot)\) denotes the final linear classifier, and \(g^{\ell}(\cdot)\) denotes the \(\ell\)-th block of the backbone network. \(L\) represents the total number of blocks in the network. A block contains a convolutional layer to compute the input current (\(\mathbf{W}^{\ell}\mathbf{s}^{\ell-1}\)), a normalization layer (Zheng et al., 2020), and a LIF layer. In this work, we use a direct encoding method, \(i.e.\), using \(g^{1}(\mathbf{x})\) to encode the input tensor into spike trains, as done in recent SNN works (Fang et al., 2021; Deng et al., 2022; Kim et al., 2022). In SNNs, we repeat the inference process \(T\) times and average the outputs from the classifier to produce the final result.
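For concreteness, here is a minimal NumPy simulation of the LIF dynamics in Eqs. (1)-(2) with the hard reset used in direct training; the input currents are random toy values rather than the outputs of an actual convolutional layer.

```python
import numpy as np

def lif_forward(currents, tau=0.5, v_th=1.0):
    """Simulate one LIF layer over T timesteps following Eqs. (1)-(2).
    currents: array of shape (T, n) holding W^l s^{l-1}[t] for each t."""
    T, n = currents.shape
    u = np.zeros(n)                        # membrane potential
    spikes = np.zeros((T, n))
    for t in range(T):
        u = tau * u + currents[t]          # leaky integration, Eq. (1)
        s = (u >= v_th).astype(float)      # Heaviside firing, Eq. (2)
        u = u * (1.0 - s)                  # hard reset to 0 after a spike
        spikes[t] = s
    return spikes

rng = np.random.default_rng(0)
print(lif_forward(rng.uniform(0.0, 0.8, size=(6, 4))))  # T=6 steps, 4 neurons
```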
### Introducing Early-Exit to SNNs Conventionally, the number of timesteps \(T\) is set as a fixed hyper-parameter, causing a tradeoff between accuracy and efficiency. Here, we also provide several \(accuracy-T\) curves of spiking ResNets in Fig. 1. These models are trained from scratch or converted to 4, 6, or 8 timesteps, and we evaluate the performance with the available numbers of timesteps. For the CIFAR-10 dataset, increasing the number of timesteps from 2 to 6 only brings a 0.34% top-1 accuracy gain, at the cost of 300% more latency. _In other words, the majority of correct predictions can be inferred with much fewer timesteps._ The observation in Fig. 1 motivates us to explore a more fine-grained adjustment of \(T\). We are interested in an SNN that can adjust \(T\) based on the characteristics of different input images. Hypothetically, each image is linked to a _difficulty_ factor, and this SNN can identify this difficulty to decide how many timesteps should be used, thus eliminating unnecessary timesteps for easy images. We refer to such a model as a spiking early-exit neural network (SEENN). To demonstrate the potential of SEENN, we propose a metric that calculates the minimum number of timesteps needed to perform a correct prediction, averaged over the test dataset, \(i.e.\), the lowest number of timesteps we can achieve without compromising the accuracy. It is based on the following assumption: **Assumption 3.1**.: Given a spiking neural network \(f_{T}\), if it can correctly predict \(\mathbf{x}\) with \(t\) timesteps, then it always outputs a correct prediction for any \(t^{\prime}\) such that \(t\leq t^{\prime}\leq T\). This assumption indicates the inclusive property of different timesteps. In Appendix A, we provide empirical evidence to support this assumption. Letting \(\mathbb{C}_{t}\) be the set of correctly predicted input samples for timestep \(t\), we have \(\mathbb{C}_{1}\subseteq\mathbb{C}_{2}\subseteq\cdots\subseteq\mathbb{C}_{T}\) based on Assumption 3.1. Also denoting \(\mathbb{W}=\overline{\mathbb{C}_{T}}\) as the wrong prediction set, we propose the _averaged earliest timestep (AET)_ metric, given by \[\text{AET}=\frac{1}{N}\left(|\mathbb{C}_{1}|+T|\mathbb{W}|+\sum_{t=2}^{T}t(| \mathbb{C}_{t}|-|\mathbb{C}_{t-1}|)\right), \tag{4}\] where \(|\cdot|\) returns the cardinality of the set, and \(|\mathbb{C}_{t}|-|\mathbb{C}_{t-1}|\) returns the number of samples that belong to \(\mathbb{C}_{t}\) yet not to \(\mathbb{C}_{t-1}\). \(N=|\mathbb{C}_{T}|+|\mathbb{W}|\) is the total number of samples in the validation dataset. The AET metric describes an ideal scenario where correct predictions are always inferred using the minimum number of timesteps required, while still preserving the original accuracy. It is worth noting that incorrect samples are inferred using the maximum number of timesteps, as it is usually not possible to determine, before inference, that a sample cannot be correctly classified. In Fig. 1, we report the AET in each case. For models directly trained on CIFAR10 or CIFAR100, the AET remains slightly higher than \(2\) (note that the minimum number of timesteps is 2). With merely 11% more latency added to the 2-timestep SNN on CIFAR10 (\(T=2.228\)), we can achieve an accuracy equal to a 6-timestep SNN. The converted SNNs also only need a few extra timesteps. This suggests the huge potential of using early exit in SNNs.
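Under Assumption 3.1, Eq. (4) reduces to averaging the earliest timestep at which each sample is predicted correctly, charging never-correct samples the full \(T\) timesteps. A small worked computation with invented values:

```python
def average_earliest_timestep(earliest, T):
    """Eq. (4): AET over a dataset. `earliest` maps a sample id to the earliest
    timestep with a correct prediction, or None if the sample is never correct
    (such samples cost the maximum number of timesteps T)."""
    total = sum(t if t is not None else T for t in earliest.values())
    return total / len(earliest)

# 5 toy samples with T = 4; sample 'e' is never predicted correctly
print(average_earliest_timestep({"a": 1, "b": 1, "c": 2, "d": 4, "e": None}, T=4))
# -> 2.4, i.e., (|C_1| + T|W| + sum_t t(|C_t| - |C_{t-1}|)) / N for this toy case
```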
Despite the potential performance boost from the early exit, it is impossible to achieve the AET effect in practice since we cannot access the labels of the test set. Therefore, the question of how to design an efficient and effective predictor that determines the number of timesteps for each input is non-trivial. In the following sections, we propose two methods, SEENN-I and SEENN-II, to address this challenge.
Figure 1: The accuracy of spiking ResNet under different numbers of timesteps on CIFAR10, CIFAR100, and ImageNet datasets, either by direct training (Deng et al., 2022) or by conversion (Bu et al., 2022).
### SEENN-I In SEENN-I, we adopt a proxy signal to determine the difficulty of the input sample--the confidence score. Formally, letting the network prediction probability distribution be \(\mathbf{p}=\mathrm{softmax}(f_{t}(\mathbf{x}))=[p_{1},p_{2},\ldots,p_{M}]\), where \(M\) is the number of object classes, the confidence score (CS) is defined as \[\text{CS}=\mathrm{max}(\mathbf{p}), \tag{5}\] which means the maximum probability in \(\mathbf{p}\). The CS is a signal that measures the level of uncertainty. If the CS is high enough (e.g., CS\(=0.99\)), the prediction distribution will be highly deterministic; otherwise (e.g., CS\(=\frac{1}{M}\)), the prediction distribution is uniform and extremely uncertain. A line of work (Teerapittayanon et al., 2016; Guo et al., 2017) has shown that the level of uncertainty is highly correlated with the accuracy of neural networks. For input samples with deterministic output predictions, the neural network typically achieves high accuracy, which corresponds to relatively _easy_ samples. Hence, we can utilize this property as an indicator of how many timesteps should be assigned to a sample. We adopt a simple thresholding mechanism: given a preset threshold \(\alpha\), we iterate over the timesteps, and once the confidence score exceeds \(\alpha\), the accumulated output is used for prediction. A diagram describing this method can be found in Fig. 2, and a minimal sketch of the inference loop is given below.
Figure 2: The overview of SEENN-I, which computes the confidence score to select the optimal number of timesteps.
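A minimal single-sample PyTorch sketch of this inference loop follows; `snn_step` is a hypothetical interface returning the logits of one timestep, and `t_max` and `alpha` are illustrative settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def seenn1_infer(snn_step, x, t_max=4, alpha=0.9):
    """SEENN-I inference: accumulate logits over timesteps and exit as soon as
    the confidence score max(softmax(mean logits)) of Eq. (5) exceeds alpha."""
    logits_sum = 0.0
    for t in range(1, t_max + 1):
        logits_sum = logits_sum + snn_step(x, t)      # logits of timestep t
        probs = F.softmax(logits_sum / t, dim=-1)     # averaged prediction p
        conf, pred = probs.max(dim=-1)                # CS = max(p)
        if conf.item() >= alpha:                      # confident: early exit
            return pred.item(), t
    return pred.item(), t_max                         # fall back to full T
```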
### SEENN-II Our SEENN-I is a post-training method that can be applied to an off-the-shelf SNN. It is easy to use, but the SNN is not explicitly trained with the early exit, so the full potential of early exit is not exploited through SEENN-I. In this section, we propose an "early-exit-aware training" method for SNNs called SEENN-II. SEENN-II is built on reinforcement learning to directly predict the difficulty of the image. Specifically, we define an action space \(\mathcal{T}=\{t_{1},t_{2},\cdots,t_{n}\}\), which contains the candidates for the number of timesteps that can be applied to an input sample \(\mathbf{x}\). To determine the optimal timestep candidates, we develop a policy network that generates an \(n\)-dimensional _policy vector_ to sample actions. During training, a reward function is calculated based on the policy and the prediction result, which is generated by running the SNN with the number of timesteps suggested by the policy vector. Unlike traditional reinforcement learning (Sutton and Barto, 2018), our problem does not consider state transitions, as the policy can predict all actions at once. We also provide a diagram describing the overall method in Fig. 3. Formally, considering an input sample \(\mathbf{x}\) and a policy network \(f_{p}\) with parameters \(\theta\), we define the policy of selecting the timestep candidates as an \(n\)-dim categorical distribution: \[\mathbf{v}=\mathrm{softmax}(f_{p}(\mathbf{x};\theta)), \tag{6}\] \[\pi_{\theta}(\mathbf{z}|\mathbf{x})=\prod_{k=1}^{n}\mathbf{v}_{k}^{\mathbf{z }_{k}}, \tag{7}\] where \(\mathbf{v}\) is the probability of the categorical distribution, obtained by inferring the policy network with a \(\mathrm{softmax}\) function. Thus \(\mathbf{v}_{k}\) represents the probability of choosing \(t_{k}\) as the number of timesteps in the SNN. An action \(\mathbf{z}\in\{0,1\}^{n}\) is sampled based on the policy \(\mathbf{v}\). Here, \(\mathbf{z}\) is a one-hot vector since only one timestep can be selected. Note that the policy network architecture is made sufficiently small such that the cost of inferring the policy is negligible compared to the SNN (see architecture details in Sec. 4.1). Once we obtain an action vector \(\mathbf{z}\), we can evaluate the prediction using the target number of timesteps, \(i.e.\), \(f_{t}(\mathbf{x})\). Our objective is to minimize the number of timesteps we use while not sacrificing accuracy. Therefore, we associate the actions taken with the following reward function: \[R(\mathbf{z})=\begin{cases}\frac{1}{2^{t_{k}}}\big{|}_{\mathbf{z}_{k}=1}&\text{if correct prediction}\\ -\beta&\text{if incorrect prediction}\end{cases}. \tag{8}\] Here, \(t_{k}|_{\mathbf{z}_{k}=1}\) represents the number of timesteps selected by \(\mathbf{z}\). The reward function is determined by whether the prediction is correct or incorrect. If the prediction is correct, then we incentivize early exit by assigning a larger reward to a policy that uses fewer timesteps. However, if the prediction is wrong, we penalize the reward with \(\beta\), which serves to balance accuracy and efficiency. As an example, a large \(\beta\) leads to more correct predictions but also more timesteps. **Gradient Calculation in the Policy Network** To this end, our objective for training the policy network is to maximize the expected reward function, given by: \[\max_{\theta}\ \mathbb{E}_{\mathbf{z}\sim\pi_{\theta}}[R(\mathbf{z})]. \tag{9}\] In order to calculate the gradient of the above objective, we utilize the policy gradient method (Sutton and Barto, 2018) to compute the derivative of the reward function w.r.t. \(\theta\), given by \[\nabla_{\theta}\mathbb{E}[R(\mathbf{z})] =\mathbb{E}[R(\mathbf{z})\nabla_{\theta}\log\pi_{\theta}(\mathbf{ z}|\mathbf{x})]\] \[=\mathbb{E}[R(\mathbf{z})\nabla_{\theta}\log\prod_{k=1}^{n} \mathbf{v}_{k}^{\mathbf{z}_{k}}]\] \[=\mathbb{E}[R(\mathbf{z})\nabla_{\theta}\sum_{k=1}^{n}\mathbf{z} _{k}\log\mathbf{v}_{k}]. \tag{10}\] Moreover, unlike other reinforcement learning methods, which rely on Monte-Carlo sampling to compute the expectation, our method can compute the exact expectation. During the forward propagation of the SNN, we can store the intermediate accumulated output at each \(t_{k}\), and calculate the reward function using the stored accumulated output \(f_{t_{k}}(\mathbf{x})\). Since \(\pi_{\theta}(\mathbf{z}|\mathbf{x})\) is a categorical distribution, we can rewrite Eq. (10) as \[\nabla_{\theta}\mathbb{E}[R(\mathbf{z})]=\sum_{k=1}^{n}R(\mathbf{z}|_{\mathbf{ z}_{k}=1})\mathbf{v}_{k}\nabla_{\theta}\log\mathbf{v}_{k}, \tag{11}\] where \(R(\mathbf{z}|_{\mathbf{z}_{k}=1})\) is the reward function evaluated with the output prediction using \(t_{k}\) timesteps.
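Because \(\mathbf{v}_{k}\nabla_{\theta}\log\mathbf{v}_{k}=\nabla_{\theta}\mathbf{v}_{k}\), Eq. (11) is exactly the gradient of the closed-form expectation \(\sum_{k}R(\mathbf{z}|_{\mathbf{z}_{k}=1})\mathbf{v}_{k}\), so autograd can compute it directly. A minimal sketch of this exact-expectation objective, where the tensor shapes are assumptions:

```python
import torch

def reward(correct, t_k, beta=1.0):
    """Eq. (8): 1 / 2**t_k for a correct prediction, -beta otherwise."""
    return 1.0 / 2**t_k if correct else -beta

def policy_loss(policy_logits, rewards):
    """Negative exact expected reward of Eqs. (9)-(11).
    policy_logits: (batch, n) outputs of the policy network f_p.
    rewards: (batch, n) with R evaluated for every candidate t_k, which is
    possible because the accumulated outputs f_{t_k}(x) are stored for all k."""
    v = torch.softmax(policy_logits, dim=-1)          # Eq. (6)
    expected_reward = (rewards.detach() * v).sum(dim=-1)
    return -expected_reward.mean()                    # minimize -E[R(z)]
```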
### Training SEENN In this section, we describe the training methods for our SEENN. To train the model for SEENN-I, we explicitly add a cross-entropy function at each timestep, thus making the predictions in early timesteps better, given by \[\min_{\mathbf{W}}\frac{1}{n}\sum_{k=1}^{n}L_{CE}(f_{t_{k}}(\mathbf{x}), \mathbf{W},\mathbf{y}), \tag{12}\] where \(L_{CE}\) denotes the cross-entropy loss function and \(\mathbf{y}\) is the label vector. This training objective is essentially the Temporal Efficient Training (TET) loss proposed in Deng et al. (2022). We find this function can enhance the performance at every \(t_{k}\) compared to \(L_{CE}\) applied only to the maximum number of timesteps. As for conversion-based SEENN-I, we do not modify any training function. Instead, we directly apply the confidence score thresholding to the converted model. To train SEENN-II, we first employ the TET loss to train the model for several epochs without involving the policy network. This avoids the low training accuracy in the early stage of training, which may damage the optimization of the policy network. Then, we jointly optimize the SNN and the policy network by \[\min_{\mathbf{W},\theta}\mathbb{E}_{\mathbf{z}\sim\pi_{\theta}}[-R(\mathbf{z })+L_{CE}(f_{t_{k}|_{\mathbf{z}_{k}=1}}(\mathbf{x}),\mathbf{W},\mathbf{y})]. \tag{13}\] Note that we do not train SEENN-II for the converted model since, in conversion, a pre-trained ANN is used to obtain the converted SNN, which does not warrant any training. ## 4 Experiments To demonstrate the efficacy and the efficiency of our SEENN, we conduct experiments on popular image recognition datasets, including CIFAR10, CIFAR100 (Krizhevsky et al., 2010), ImageNet (Deng et al., 2009), and an event-stream dataset, CIFAR10-DVS (Li et al., 2017). Moreover, to show the compatibility of our method, we compare SEENN with both training-based and conversion-based state-of-the-art methods. Finally, we also provide some hardware and qualitative evaluations of SEENN. ### Implementation Details In order to implement SEENN-I, we adopt the code frameworks of TET (Deng et al., 2022) and QCFS (Bu et al., 2022), both of which provide open-source implementations. All models are trained with a stochastic gradient descent optimizer with a momentum of 0.9 for 300 epochs. The learning rate is 0.1 and decayed following a cosine annealing schedule (Loshchilov and Hutter, 2016). The weight decay is set to \(5e-4\). For ANN pre-training in QCFS, we set the step \(l\) to 4. We use Cutout (DeVries and Taylor, 2017) and AutoAugment (Cubuk et al., 2019) for better accuracy, as adopted in (Li et al., 2021; Duan et al., 2022). To implement SEENN-II, we first take the checkpoint of the pre-trained SNN in SEENN-I and initialize the policy network. Then, we jointly train the policy network as well as the SNN. The policy network we take for the CIFAR dataset is ResNet-8, which requires only 0.547% of the computation of ResNet-19. We also provide the details of the architecture in Appendix B and measure the latency/energy of the policy network in Sec. 4.3.
Figure 3: The overview of SEENN-II, which computes the confidence score to select the optimal number of timesteps.
For the ImageNet dataset, we downsample the image resolution to 112\(\times\)112 for the policy network. Both the policy network and the SNN are finetuned for 75 epochs using a learning rate of 0.01. Other training hyper-parameters are kept the same as SEENN-I. ### Comparison to SOTA work CifarWe first provide the results on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2010). We test the architectures in the ResNet family (He et al., 2016). We summarize both the direct training comparison as well as the ANN-SNN conversion comparison in Table 1 and Table 2. Since the \(T\) in our SEENN can be variable based on the input, we report the average number of timesteps in the test dataset. As we can see from Table 1, existing direct training methods usually use 2, 4, and 6 timesteps in inference. However, increasing the number of timesteps from 2 to 6 brings marginal improvement in accuracy, for example, 95.45% to 95.60% in TEBN (Duan et al., 2022). Our SEENN-I can achieve **96.07**% with only **1.09** average timesteps, which is **5.5\(\times\)** lower than the state of the art. SEENN-II gets a similar performance with SEENN-I in the case of CIFAR-10, but it can achieve 0.7% higher accuracy than SEENN-I on the CIFAR-100 dataset as shown in Table 2. We also include the results of ANN-SNN conversion using SEENN-I. Here, the best existing work is QCFS (Bu et al., 2022b), which can convert the SNN in lower than 8 timesteps. We directly run SEENN-I with different choices of confidence score threshold \(\alpha\). Surprisingly, on the CIFAR-10 dataset, our SEENN-I can convert the model with **1.4** timesteps, and get **93.63**% accuracy. Instead, the QCFS only gets 75.44% accuracy using 2 timesteps for all input images. By selecting a higher threshold, we can obtain **95.08**% accuracy with **2.01** timesteps, which uses **4\(\times\)** lower number of timesteps than QCFS under the same accuracy level. ImageNetWe compare the direct training and conversion methods on the ImageNet dataset. The results are \begin{table} \begin{tabular}{l l c c} \hline **Model** & **Method** & **T** & **Acc.** \\ \hline \multicolumn{4}{c}{_Direct Training of SNNs_} \\ \hline \multirow{3}{*}{ResNet-19} & \multirow{3}{*}{tdBN (Zheng et al., 2020)} & 2 & 92.34 \\ & & 4 & 92.92 \\ & & 6 & 93.16 \\ \hline \multirow{3}{*}{ResNet-20} & \multirow{3}{*}{Dspike (Li et al., 2021b)} & 2 & 93.13 \\ & & 4 & 93.66 \\ & & 6 & 94.25 \\ \hline \multirow{6}{*}{ResNet-19} & \multirow{3}{*}{TET (Deng et al., 2022)} & 2 & 94.16 \\ & & 4 & 94.44 \\ & & 6 & 94.50 \\ \cline{2-4} & & 2 & 95.45 \\ \cline{2-4} & TEBN (Duan et al., 2022) & 4 & 95.58 \\ & & 6 & 95.60 \\ \cline{2-4} & & **1.09** & **96.07** \\ & & **1.20** & **96.38** \\ \cline{2-4} & **SEENN-II (Ours)** & **1.08** & **96.01** \\ \hline \multicolumn{4}{c}{_ANN-SNN Conversion_} \\ \hline \multirow{3}{*}{ResNet-20} & \multirow{3}{*}{Opt (Deng \& Gu, 2021)} & 16 & 92.41 \\ & & 32 & 93.30 \\ & Calibration (Li et al., 2021a) & 32 & 94.78 \\ \hline \multirow{6}{*}{ResNet-18} & \multirow{3}{*}{QCFS (Bu et al., 2022b)} & 8 & 75.44 \\ & & 16 & 90.43 \\ \cline{1-1} & & 32 & 94.82 \\ \cline{1-1} \cline{2-4} & & 2 & 75.44 \\ \cline{1-1} & & 4 & 90.43 \\ \cline{1-1} & & 8 & 94.82 \\ \cline{1-1} \cline{2-4} & & **1.40** & **93.63** \\ \cline{1-1} & & **2.01** & **95.08** \\ \hline \end{tabular} \end{table} Table 1: Accuracy-\(T\) comparison on CIFAR-10 dataset. 
\begin{table} \begin{tabular}{l l c c} \hline **Model** & **Method** & **T** & **Acc.** \\ \hline \multicolumn{4}{c}{_Direct Training of SNNs_} \\ \hline \multirow{3}{*}{ResNet-20} & \multirow{3}{*}{Dspike (Li et al., 2021b)} & 2 & 71.68 \\ & & 4 & 73.35 \\ & & 6 & 74.24 \\ \hline \multirow{6}{*}{ResNet-19} & \multirow{3}{*}{TET (Deng et al., 2022)} & 2 & 72.87 \\ & & 4 & 74.47 \\ & & 6 & 74.72 \\ \cline{2-4} & & 2 & 78.07 \\ \cline{2-4} & TEBN (Duan et al., 2022) & 4 & 78.71 \\ & & 6 & 78.76 \\ \cline{2-4} & & **1.19** & **79.56** \\ & & **1.55** & **81.42** \\ \cline{2-4} & **SEENN-II (Ours)** & **1.21** & **80.23** \\ \cline{2-4} & & _ANN-SNN Conversion_ & \\ \hline \multirow{3}{*}{ResNet-20} & \multirow{3}{*}{Opt (Deng \& Gu, 2021)} & 16 & 63.73 \\ & & 32 & 68.40 \\ \cline{1-1} & & 32 & 75.53 \\ \hline \multirow{6}{*}{ResNet-20} & \multirow{3}{*}{OPI (Bu et al., 2022a)} & 8 & 23.09 \\ & & 16 & 52.34 \\ \cline{1-1} & & 32 & 67.18 \\ \cline{1-1} \cline{2-4} & & 2 & 19.96 \\ \cline{1-1} & QCFS (Bu et al., 2022b) & 4 & 34.14 \\ \cline{1-1} & & 8 & 55.37 \\ \cline{1-1} \cline{2-4} & **SEENN-I (Ours)** & **2.57** & **39.33** \\ \cline{1-1} & & **4.41** & **56.99** \\ \hline \end{tabular} \end{table} Table 2: Accuracy-\(T\) comparison on CIFAR-100 dataset. sorted Table 3. For direct training, we use two baseline networks: vanilla ResNet-34 (He et al., 2016) and SEW-ResNet-34 (Fang et al., 2021). For SEENN-I, we directly use the pre-trained checkpoint from TET (Deng et al., 2022). It can be observed that our SEENN-I can achieve the same accuracy using only 50% of original number of timesteps. For example, SEENN-I ResNet-34 reaches the maximum accuracy in **2.35** timesteps. SEENN-II can obtain a similar performance by using a smaller \(T\), _i.e_., 1.79. For conversion experiments, we again compare against QCFS using ResNet-34. Our SEENN-I obtains **70.2%** accuracy with **23.5** timesteps, higher than a 32-timestep QCFS model. **CIFAR10-DVS** Here, we compare our SEENN-I on an event-stream dataset, CIFAR10-DVS (Li et al., 2017). Following existing baselines, we train the SNN with a fixed number of 10 timesteps. From Table 4 we can find that SEENN-I can surpass all existing work except TET with only **2.53** timesteps, amounting to nearly **4\(\times\)** faster inference. Moreover, SEENN-II can get an accuracy of **82.6** using 4.5 timesteps. ### Hardware Efficiency In this section, we analyze the hardware efficiency of our method, including latency and energy consumption. Due to the sequential processing nature of SNNs, the latency is generally proportional to \(T\) on most hardware devices. Therefore, we directly use a GPU (NVIDIA Tesla V100) to evaluate the latency (or throughput) of SEENN. For energy estimation, we follow a rough measure to count only the energy of operations that is adopted in previous work (Rathi and Roy, 2021; Li et al., 2021; Bu et al., 2022), as SNNs are usually deployed on memory-cheap devices. Fig. 4 plots the comparison of inference throughput and energy. It can be found that our SEENN-I simultaneously improves inference speed while reducing energy costs. Meanwhile, the policy network in SEENN-II only brings marginal effect and does not impact the overall inference speed and energy cost, demonstrating the efficiency of our proposed method. ### Ablation Study In this section, we conduct the ablation study on our SEENN. In particular, we allow SEENN to flexibly change their balance between the accuracy and the number of timesteps. 
For example, SEENN-I can adjust the confidence score threshold \(\alpha\) and SEENN-II can adjust the penalty value \(\beta\) defined in the reward function. To demonstrate the impact of differ \begin{table} \begin{tabular}{l l c c} \hline \hline **Model** & **Method** & **T** & **Acc.** \\ \hline \multicolumn{4}{c}{_Direct Training of SNNs_} \\ \hline ResNet-19 & tdBN (Zheng et al., 2020) & 10 & 67.8 \\ VGGSNN & PLIF (Fang et al., 2021) & 20 & 74.8 \\ ResNet-18 & Dspike (Li et al., 2021) & 10 & 75.4 \\ ResNet-19 & RecDis-SNN (Guo et al., 2022) & 10 & 72.4 \\ VGGSNN & TET (Deng et al., 2022) & 10 & **83.1** \\ \hline \multirow{4}{*}{VGG-SNN} & **SEENN-I (Ours)** & **2.53** & **77.6** \\ & **5.17** & **82.7** \\ \cline{2-4} & **SEENN-II (Ours)** & **4.49** & **82.6** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy-\(T\) comparison on CIFAR10-DVS dataset. Figure 4: Comparison of latency (inference throughput) and energy consumption between SNN and SEENN. \begin{table} \begin{tabular}{l l c c} \hline \hline **Model** & **Method** & **T** & **Acc.** \\ \hline \multicolumn{4}{c}{_Direct Training of SNNs_} \\ \hline ResNet-34 & tdBN (Zheng et al., 2020) & 6 & 63.72 \\ & TET (Deng et al., 2022) & 6 & **64.79** \\ ResNet-34 & TEBN (Duan et al., 2022) & 6 & 64.29 \\ \cline{2-4} & **SEENN-I (Ours)** & **2.28** & **63.65** \\ & **3.38** & **64.66** \\ \cline{2-4} & **SEENN-II (Ours)** & **2.40** & **64.18** \\ \hline \multirow{4}{*}{SEW-ResNet-34} & SEW (Fang et al., 2021) & 4 & 67.04 \\ & TET (Deng et al., 2022) & 4 & 68.00 \\ \cline{1-1} & TEBN (Duan et al., 2022) & 4 & **68.28** \\ \cline{1-1} & **SEENN-I (Ours)** & **1.66** & **66.21** \\ & **2.35** & **67.99** \\ \cline{1-1} \cline{2-4} & **SEENN-II (Ours)** & **1.79** & **67.48** \\ \hline \multicolumn{4}{c}{_ANN-SNN Conversion_} \\ \hline ResNet-34 & Opt (Deng and Gu, 2021) & 32 & 33.01 \\ & 64 & 59.52 \\ \cline{1-1} \cline{2-4} & Calibration (Li et al., 2021) & 32 & 64.54 \\ & 64 & 71.12 \\ \cline{1-1} \cline{2-4} & & 16 & 59.35 \\ QCFS (Bu et al., 2022) & 32 & 69.37 \\ & 64 & 72.35 \\ \cline{1-1} \cline{2-4} & **SEENN-I (Ours)** & **23.47** & **70.18** \\ & **29.53** & **71.84** \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy-\(T\) comparison on ImageNet dataset. ent hyper-parameters selection, we utilize SEENN-I evaluated with different \(\alpha\). Fig. 5 shows the comparison. The yellow line denotes the trained SNN with a fixed number of timesteps while the blue line denotes the corresponding SEENN-I with 6 different thresholds. We test the SEW-ResNet-34 on the ImageNet dataset and the ResNet-19 on the CIFAR10 dataset. For both datasets, our SEENN-I has a higher \(accuracy-T\) curve than the vanilla SNN, which confirms that **our SEENN improves the accuracy-efficiency tradeoff**. Moreover, our SEENN-I largely reduces the distance between the AET coordinate and the \(accuracy-T\) curve, meaning that our method is approaching the upper limit of the early exit. On the right side of the Fig. 5, we additionally draw the composition pie charts of SEENN-I, which shows how many percentages of inputs are using 1, 2, 3, or 4 timesteps, respectively. It can be shown that, as we gradually adjust \(\alpha\) from 0.4 to 0.9 for the SEW-ResNet-34, the percentage of inputs using 1 timestep decreases (73.4% to 30.7%). For the CIFAR10 dataset, we find images have a higher priority in using the first timestep, ranging from 95.4% to 56.2%. 
### Qualitative Assessment In this section, we conduct a qualitative assessment of SEENN by visualizing the input images that are separated by our SEENN-II, i.e., the policy network. Specifically, we take the policy network and let it output the number of timesteps for each image in the ImageNet validation dataset. In principle, the policy network can differentiate whether an image is _easy_ or _hard_, so that easy images can be inferred with fewer timesteps and hard images with more timesteps. Fig. 6 provides some examples of this experiment, where images are chosen from the orange, cucumber, bubble, broccoli, aircraft carrier, and torch classes in the ImageNet validation dataset. We can find that \(T=1\) (easy) images and \(T=4\) (hard) images have huge visual discrepancies. As an example, the orange, cucumber, and broccoli images from the \(T=1\) row are indeed easier to identify, as they contain single objects on a clean background. However, in the case of \(T=4\), there are many irrelevant objects overlapping with the target object, or there could be many small instances of the target object, which makes it harder to identify. For instance, in the cucumber case, there are other vegetables that increase the difficulty of identifying them as cucumbers. These results confirm our hypothesis that visually simpler images are indeed easier and can be correctly predicted using fewer timesteps. ## 5 Conclusion In this paper, we introduce SEENN, a novel attempt to allow a varying number of timesteps on an input-dependent basis. Our SEENN includes both a post-training approach (confidence score thresholding) and an early-exit-aware training approach (reinforcement learning for selecting the appropriate number of timesteps). Our experimental results show that SEENN is able to find a sweet spot that maintains accuracy while improving efficiency. Moreover, we show that the number of timesteps selected by SEENNs is related to the visual difficulty of the image. By taking an input-by-input approach during inference, SEENN is able to achieve state-of-the-art accuracy with fewer computational resources.
Figure 5: Comparison between SEENN-I and SNN. _Left:_ Accuracy vs. the number of timesteps curve. _Right:_ Pie charts indicating the composition of input images using different numbers of timesteps for inference.
Figure 6: Qualitative assessment using input images from the ImageNet dataset. We select images from the _orange, cucumber, bubble, broccoli, aircraft carrier, and torch_ classes and separate them according to SEENN.
2307.16208
Around the GLOBE: Numerical Aggregation Question-Answering on Heterogeneous Genealogical Knowledge Graphs with Deep Neural Networks
One of the key AI tools for textual corpora exploration is natural language question-answering (QA). Unlike keyword-based search engines, QA algorithms receive and process natural language questions and produce precise answers to these questions, rather than long lists of documents that need to be manually scanned by the users. State-of-the-art QA algorithms based on DNNs were successfully employed in various domains. However, QA in the genealogical domain is still underexplored, while researchers in this field (and other fields in humanities and social sciences) can highly benefit from the ability to ask questions in natural language, receive concrete answers and gain insights hidden within large corpora. While some research has been recently conducted for factual QA in the genealogical domain, to the best of our knowledge, there is no previous research on the more challenging task of numerical aggregation QA (i.e., answering questions combining aggregation functions, e.g., count, average, max). Numerical aggregation QA is critical for distant reading and analysis for researchers (and the general public) interested in investigating cultural heritage domains. Therefore, in this study, we present a new end-to-end methodology for numerical aggregation QA for genealogical trees that includes: 1) an automatic method for training dataset generation; 2) a transformer-based table selection method, and 3) an optimized transformer-based numerical aggregation QA model. The findings indicate that the proposed architecture, GLOBE, outperforms the state-of-the-art models and pipelines by achieving 87% accuracy for this task compared to only 21% by current state-of-the-art models. This study may have practical implications for genealogical information centers and museums, making genealogical data research easy and scalable for experts as well as the general public.
Omri Suissa, Maayan Zhitomirsky-Geffet, Avshalom Elmalech
2023-07-30T12:09:00Z
http://arxiv.org/abs/2307.16208v1
# Around the GLOBE: Numerical Aggregation Question-Answering on Heterogeneous Genealogical Knowledge Graphs with Deep Neural Networks ###### Abstract One of the key AI tools for textual corpora exploration is natural language question-answering (QA). Unlike keyword-based search engines, QA algorithms receive and process natural language questions and produce precise answers to these questions, rather than long lists of documents that need to be manually scanned by the users. State-of-the-art QA algorithms based on DNNs were successfully employed in various domains. However, QA in the genealogical domain is still underexplored, while researchers in this field (and other fields in humanities and social sciences) can highly benefit from the ability to ask questions in natural language, receive concrete answers and gain insights hidden within large corpora. While some research has been recently conducted for factual QA in the genealogical domain, to the best of our knowledge, there is no previous research on the more challenging task of numerical aggregation QA (i.e., answering questions combining aggregation functions, e.g., count, average, max). Numerical aggregation QA is critical for distant reading and analysis for researchers (and the general public) interested in investigating cultural heritage domains. Therefore, in this study, we present a new end-to-end methodology for numerical aggregation QA for genealogical trees that includes: 1) an automatic method for training dataset generation; 2) a transformer-based table selection method, and 3) an optimized transformer-based numerical aggregation QA model. The findings indicate that the proposed architecture, GLOBE, outperforms the state-of-the-art models and pipelines by achieving 87% accuracy for this task compared to only 21% by current state-of-the-art models. This study may have practical implications for genealogical information centers and museums, making genealogical data research easy and scalable for experts as well as the general public. numerical aggregation question answering, transformers, deep neural networks, genealogical domain, data modeling, knowledge graph, GEDCOM, cultural heritage research, digital humanities. ## 1 Introduction In the past two decades, there has been an increasing interest in the construction and investigation of genealogical databases. For example, commercial companies like My Heritage and Ancestry collect over 48 million and 100 million genealogical family trees, respectively; FamilySearch hosts over a billion unique individuals in the most significant non-profit collection of family trees. Family trees can be generated using different data sources, such as user-generated content ("personal heritage") [7], biographical registers [46], DNA records and clinical reports [14, 64], and can even be harvested from books [23, 97]. Online search services built upon genealogical data provide users with rich information about individual members and their genealogical relationships. In addition, these databases can be useful for population and migration research [56], historical preservation [34], and even for medical usage [81, 82]. Natural language search is a widespread practice that enables scholars (and the general public) to explore cultural heritage corpora [27].
However, in the genealogy domain (and other similar domains), database investigation based on search has some well-known limitations, as users must decompose their questions into keywords and then manually scan the obtained results to retrieve the specific information of their interest [26]. Moreover, when distant reading is required, a researcher must collect data from a long list of search results and perform the required calculations manually. Therefore, QA algorithms have been developed to enhance search systems. These algorithms receive questions in natural (human) language and return precise answers to these questions. Deep neural networks (DNNs) trained on large datasets are currently the state-of-the-art method for the QA task. These DNN-based QA systems allow humanities and social sciences researchers (and the general public) with no mathematical or programming background to ask research questions on cultural heritage data in natural language and receive precise answers to these questions. There are seven types of questions mentioned in previous research: 1) Factual questions (what, when, which, who, how) that refer to a single answer (e.g., Who was the first person to classify birds?), 2) Numerical reasoning / numerical aggregative / arithmetic questions are factual questions that require a numerical calculation (e.g., What is the average number of birds traveling from Mexico to the United States every summer?), 3) List questions are factual questions that refer to a list of answers (e.g., Which birds types have blue wings?), 4) Definition questions that refer to a summarization of a topic (e.g., What is a bird?), 5) Hypothetical questions that require information associated with any assumed event (e.g., What would happen if birds had feet?), 6) Causal questions (how or why) that seek an explanation, reason, or elaboration for specific events or objects (e.g., Why birds fly?), 7) Confirmation questions that seek a confirmation (yes/no) of a specific fact (e.g., Can birds fly?) [54, 35, 12, 48, 41, 50, 21]. From an automatic QA perspective, factual natural questions constitute the most researched type of questions that have been extensively studied in the literature with high-accuracy results [1, 11, 60, 73, 77, 78, 81]. Factual questions concentrate on a specific item [34]. For instance, given a question (related to a particular person): "Where was John Doe's father born?", the answer is: "Kenya" (a specific attribute of a person). Training DNN models for factual QA is done using a dataset that comprises triples of the form: (question, answer, corresponding text passage), from which the answer can be extracted. Unlike factual natural questions, numerical aggregation questions, the focus of this paper, pertain to a group of items in the dataset and require applying mathematical computation (an aggregation function) to the matching items. For example, for the question "What is the average age of men in John Doe's family that were born from 1790 to 1860?", a numerical aggregation QA system should return the answer "61.5" by calculating the average value of the "age" attribute of all the men born between 1790 and 1860. Answering these types of questions is critical for conducting distant reading analysis of corpora and allows researchers to answer questions that otherwise require too much human effort and time. 
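To make the aggregation explicit, the following sketch performs the computation behind the example question over a toy person table with pandas; all names and values are invented for illustration, so the result differs from the "61.5" in the example above.

```python
import pandas as pd

# a toy person table of the kind aggregation QA operates over (values invented)
people = pd.DataFrame({
    "name":       ["John", "Mary", "Adam", "Eve", "Paul"],
    "gender":     ["M", "F", "M", "F", "M"],
    "birth_year": [1795, 1801, 1820, 1835, 1858],
    "age":        [60, 72, 63, 55, 61],
})

# "What is the average age of men in the family born from 1790 to 1860?"
men = people[(people.gender == "M") & people.birth_year.between(1790, 1860)]
print(men.age.mean())   # 61.33... -- the aggregation the QA model must emulate
```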
Training DNN models for numerical aggregation QA requires a gold-standard dataset that comprises questions, answers, and table/s of data from which these answers can be computed, where the answer is the result of an aggregation function (e.g., average, sum, count, min, max) on some of the table's rows. Such a table-based approach can also be applied to factual QA [27, 31, 32, 82, 38]. However, aggregation natural QA is considered a more challenging task, since DNN models for answering numerical aggregation questions do not only have to select the relevant cells from the table/s (as factual QA models do), but also need to learn and compute the aggregation functions that are implied from the question (i.e., perform numerical reasoning). For example, for the question "How many people in my family lived in England?", the model needs to _count_ the number of rows matching a criterion (e.g., "lived in England"), and for the question "What is the life expectancy of men in my family tree?", the model needs to calculate the _average_ value of a specific cell (i.e., age) in the rows matching a criterion (e.g., "gender is male"). The genealogical domain poses several additional challenges for QA DNNs, as there are no available training datasets, and some adaptations of the existing models are needed since genealogical data is usually stored as a GEDCOM (GEnealogical Data COMmunication) graph, rather than a set of texts or tables as expected by the existing QA models; the data comprises numerous linked entities (data on persons and families and their multi-level inter-relationships) whose overall size may exceed a standard DNN's input limitation; the questions are mostly on the relationships rather than just about entities (i.e., persons) in the graph, and require additional aggregation function types that have not been implemented in the existing models. Hence, the main objective of this paper is to design and evaluate a novel methodology for the DNN-based numerical aggregation QA task in the genealogical domain. Cultural heritage corpora are highly suitable for using different natural language processing (NLP) algorithms to support research. The vast amount of texts, images, and other data types in these corpora can be analyzed and used to extract insights and answer many research questions [68]. For example, [10] researched visual QA on cultural heritage corpora when a user provides an image and asks a question about that image; [65] researched a QA chatbot for answering cultural heritage questions for tourists; and [7] used question generation and answering to create a "self-managed" corpus. While there is research on QA in various cultural heritage domains, including genealogy [69], to the best of our knowledge, this is the first research on numerical aggregation QA (that is critical for quantitative analysis and distant reading of corpora) for cultural heritage and specifically for the genealogical domain. The developed methodology is referred to as GLOBE (Genealogical Legacy Overview with Bert Embeddings) and comprises the following main components: 1) a new automated method for dataset(s) generation for numerical aggregation QA based on the knowledge graph representation of genealogical data, 2) a fine-tuned DNN method for optimal table selection based on SBERT [58]; and 3) a fine-tuned numerical aggregation QA DNN model for the genealogical domain, based on BERT [16]. 
We experimented with six different tabular data models and evaluated the influence of the dataset structure and quality on the DNN's accuracy. The proposed methodology increases the amount and complexity of genealogical data that the QA DNN model can use and outperforms the state-of-the-art aggregation QA model when applied to the genealogical datasets, thus showing the benefit of the domain-specific approach for the task [69]. ## 2 Related Work This section reviews relevant work in the fields of genealogical data representation, DNN architecture, and numerical aggregation QA using DNN. ### Genealogical data representation Developed in 1984 by The Church of Jesus Christ of Latter-day Saints, the GEDCOM format is the de facto standard for data representation in the genealogical field [24, 30, 39]. The GEDCOM format has a simple linkage-based structure where records containing names, events, places, relationships, and dates are arranged hierarchically [24]. Figure 1: Family tree structure [69]. In GEDCOM, every individual (person) in the family tree is represented as a node containing predefined attributes, such as name, birth date and place, death date and place, burial date and place, occupation, and other details. Every individual is a "spouse" (i.e., a parent) or a "child" within a family node. Figure 1 shows a sub-graph corresponding to a Source Person (SP) whose data is displayed in the GEDCOM file in Figure 2. A number bracketed between _@_ symbols and a class name (INDI - individual, FAM - family) is assigned to every person and family node. The source person is denoted as SP (e.g., _@_I138_@_ INDI Mary Lulu in the GEDCOM file), families as F, and other persons as P. As shown in Figure 2, _@_I138_@_ (i.e., Mary Lulu) was a female, born on 27 MAY 1756 in New Jersey, USA, who died on 7 FEB 1815 in Philadelphia, USA, and was buried a day later in the same place. Figure 2: A fragment of the GEDCOM family tree file displayed in Figure 1. ### Numerical Aggregation QA using DNNs DNN models have become a standard method for developing natural language QA systems in recent years [43]. Numerical aggregation QA is a relatively new, challenging, and underexplored task. There are two main approaches for this task: answering the question directly using a DNN model [27, 32, 82, 38, 31] or converting the question to a formal language query (e.g., in SQL or SPARQL) and using a formal language parsing engine to calculate the answer (i.e., executing the query) [2, 17, 28, 87, 76]. The former approach has a considerable advantage: answering the question is an approximation task with a straightforward output (i.e., a number), while the feasibility and applicability of the latter approach are still under discussion [59] as formal language queries are a much more complicated output. For example, for the question "what is the portion of women in my family tree that were single over the age of 30?" the first approach produces the output of: _"6.43%"_, while the second approach returns the SQL query _"SELECT (COUNT(*) / (SELECT COUNT(*) FROM table_person WHERE gender = 'F')) as 'Portion' FROM table_person WHERE gender = 'F' AND (marriage_year - birth_year) > 30"_. Moreover, converting a natural language question to a formal language query requires a massive amount of training data (i.e., pairs of natural language questions and corresponding formal queries). 
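As a concrete illustration of the second (formal-query) approach, a query like the one above can simply be executed by a database engine once it has been produced; the sketch below uses Python's built-in sqlite3 module with a hypothetical table_person schema and toy rows:

```python
# A minimal sketch of the formal-query approach: assuming the natural
# language question has already been translated to SQL, a database
# engine computes the answer. Schema and rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_person "
             "(gender TEXT, birth_year INT, marriage_year INT)")
conn.executemany("INSERT INTO table_person VALUES (?, ?, ?)",
                 [("F", 1800, 1835), ("F", 1810, 1828), ("M", 1805, 1840)])

query = """
SELECT CAST(COUNT(*) AS REAL) /
       (SELECT COUNT(*) FROM table_person WHERE gender = 'F') AS Portion
FROM table_person
WHERE gender = 'F' AND (marriage_year - birth_year) > 30
"""
print(conn.execute(query).fetchone()[0])  # -> 0.5 (1 of 2 women)
```

The hard part of this approach is, of course, producing the query itself, which is exactly the step that requires the massive training data mentioned above.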
Numerical aggregation QA models have achieved varying accuracy in multiple studies based on paragraphs [4; 25; 57] and structured tables represented as text [32; 5]. Several open-domain datasets have been generated to evaluate table-based QA DNN models (for both factual and numerical aggregation questions). For instance, WikiTQ [56] comprises complex questions from Wikipedia tables; SQA [36] is a conversational dataset created by crowdsourcing from the WikiTQ dataset; and WikiSQL [87] is a formal query paraphrasing mapped to a natural text dataset. The models' accuracy depends on the complexity of the dataset and question type (higher accuracy is achieved for factual questions than for numerical aggregation questions), and varies between 33% and 86% (Table 1). \begin{table} \begin{tabular}{l l l l l} \hline Model & WikiSQL & WikiTQ & SQA (Q1) & SQA (AVG) \\ \hline Pasupat and Liang, 2015 & - & 37.1 & 51.4 & 33.2 \\ Iyyer et al., 2017 & - & - & 70.4 & 44.7 \\ Neelakantan et al., 2017 & - & 34.2 & 60.0 & 40.2 \\ Zhang et al., 2017 & - & 43.7 & - & - \\ Liang et al., 2018 & 71.8 & 43.1 & - & - \\ Haug et al., 2018 & - & 34.8 & - & - \\ Liang et al., 2018 & - & 43.1 & - & - \\ Sun et al., 2018 & - & - & 70.3 & 45.6 \\ Agarwal et al., 2019 & 74.9 & 44.1 & - & - \\ Wang et al., 2019 & 79.4 & 44.5 & - & - \\ Min et al., 2019 & 84.4 & - & - & - \\ Dasigi et al., 2019 & - & 43.9 & - & - \\ Muller et al., 2019 & - & - & 67.2 & 55.1 \\ \hline \end{tabular} \end{table} Table 1: Accuracy reported for the state-of-the-art QA models on open-domain datasets. Since DNNs perform slowly in predicting answer spans (in a factual QA task) or selecting cells (in a numerical aggregation QA task) for a given passage of text or table, they are not applied to the entire database, but only to selected pieces of data [1]. Moreover, while some DNN models can accept a large input [8, 40], many state-of-the-art DNN models (such as BERT) tend to accept a limited size input, usually ranging from 128 to 512 tokens (i.e., words) [23] due to computational resource limitations. Therefore, there is a need to develop an optimal data selection method before applying the DNN model. Thus, in the factual QA task, given a user's question, the system retrieves the top K passages in the dataset that are relevant to the question (typically based on a reverse indexing approach [9, 37, 49, 62]). Using the DNN model, the system then predicts the answer spans (start and end positions) for each of the K-selected passages with a certain confidence level. However, when answering a numerical aggregation question, the model must receive all the data to perform the numerical function calculation (e.g., count, average). Hence, earlier works suggested splitting the input into cells, rows or columns to overcome this limitation. [5] proposed classifying rows based on the given question and passing only selected rows to the model. [18] devised a heuristic-based column selection technique. [42] designed a model-based cell selection technique that is differentiable and trained with the main task model. [85] used an attention mask to restrict attention to tokens in the same row and column, and [19] suggested attention heads to reorder tokens based on a row or column. However, these methods limit the aggregation functions that can be computed, as they exclude the cases when the aggregation function is relative to the entire table (e.g., average over the whole table). 
Alternatively, [31] used a DNN model to infer an answer for each table in the dataset (or top K tables), and the answer with the highest confidence score was selected. In the genealogical domain, there is a relatively small number of entities (i.e., node types, such as a person or family) and a large number of relationships, which, when converted into the tabular data format, may yield very large tables. In addition, aggregative QA in the genealogical data requires joining multiple tables (e.g., to answer the question "How many people become parents before the age of 20?" we need both the person's age and his first child's birthdate). Therefore, the methods mentioned above that perform a single table selection are not applicable in the genealogical domain that requires joining several tables to support aggregation QA. As a result, the generic approaches described above cannot be applied as-is in the genealogical domain without reducing accuracy, but require certain modifications and adaptations. In addition, various training methods, hyperparameters, and architectures indicate the complexity of the task and its sensitivity to a specific domain [70]. ## 3 Methodology The proposed end-to-end pipeline of GLOBE consists of two parallel processes: the interactive inference process (Figure 3) and the model training, which runs in the background (Figure 4). As shown in Figure 3, the interactive inference process, using a simple user interface, allows users to select a family tree (1) and ask a question (2). Then, the system selects the table/s from the dataset that include/s the answer to the posed question (3). At this stage, in order to suit the amount of input data for the DNN's size limit, the user may be required to further reduce the scope of the explored family tree (i.e., a user interface that allows the user to select a relation degree is presented and the user selects a smaller degree) (4). Finally, if the input size is suitable, the system calculates the answer and displays it to the user (5). As shown in Figure 4, the proposed methodology for the model training comprises three main stages: 1) The generation of a table-based training dataset comprised of numerical aggregation questions from genealogical graphs, tables that contain data for answers to these questions and the corresponding answers; 2) Building a DNN table ranker for selecting the suitable table/s that contain/s the data for answering a given question; and 3) Building an optimal numerical aggregation QA DNN model using the best generated dataset and the table selection from the previous stages. Next, we describe each stage of the pipelines and the experimental setup of the study in more detail. Figure 3: The GLOBE inference pipeline. Figure 4: The GLOBE training pipeline. ### GenAgg Dataset Generation Generating a training dataset from genealogical data is a two-step process. As shown in Figure 5, the method includes the following steps: (1) generating relational tables from the family tree knowledge graphs, and (2) generating questions and answers from the tables. The result of the process is a GenAgg dataset, a relational database composed of questions, tables with answers to these questions and answers tailored to the genealogical domain. #### 3.1.1 GEDCOM to Tables GEDCOM graphs are first converted into CIDOC-CRM-based formal knowledge graphs, as shown in [69]. These knowledge graphs are then represented as a relational database (GenAgg). 
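For illustration only, a heavily simplified sketch of reading GEDCOM individual records (such as the fragment shown in Figure 2) into rows of a persons table could look as follows; the actual GLOBE pipeline goes through a CIDOC-CRM knowledge graph and handles many more tags, family records, and cross-links:

```python
# A minimal, simplified GEDCOM-to-row sketch: it extracts only a few
# tags (NAME, SEX, and level-2 details under BIRT/DEAT) and ignores
# family records and cross-references entirely.
GEDCOM = """\
0 @I138@ INDI
1 NAME Mary /Lulu/
1 SEX F
1 BIRT
2 DATE 27 MAY 1756
2 PLAC New Jersey, USA
"""

def gedcom_to_rows(text):
    rows, person, context = [], None, None
    for line in text.splitlines():
        level, rest = line.split(" ", 1)
        if level == "0" and rest.endswith("INDI"):
            person = {"id": rest.split()[0]}
            rows.append(person)
        elif level == "1" and person is not None:
            tag, _, value = rest.partition(" ")
            context = tag  # remember BIRT/DEAT for the level-2 lines
            if tag == "NAME":
                person["name"] = value.replace("/", "").strip()
            elif tag == "SEX":
                person["gender"] = value
        elif level == "2" and person is not None:
            tag, _, value = rest.partition(" ")
            person[f"{context.lower()}_{tag.lower()}"] = value
    return rows

print(gedcom_to_rows(GEDCOM))
# [{'id': '@I138@', 'name': 'Mary Lulu', 'gender': 'F',
#   'birt_date': '27 MAY 1756', 'birt_plac': 'New Jersey, USA'}]
```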
While there are several alternatives for modeling a dataset (e.g., graph database [93, 94, 95], RDF triples, RDBMS), this paper uses a table-based approach (RDBMS) to be compatible with the state-of-the-art DNN model for aggregative question-answering (TaPas [32]). To determine the optimal database design (i.e., structure) for the task of numerical aggregation QA, we experimented with six alternative types of table structure (see Figures 6-11). Figure 5: The GenAgg dataset generation process. A single-table structure (GenAgg\({}_{\text{1t}}\)) is easy to implement and use, as many DNN models have been built to deal with this structure type [32; 38]. However, this design supports only persons with a single spouse, and requires the DNN model to infer some relationships (e.g., siblings, children) while answering questions. Figure 6: A single-table design of the database. Figure 7 presents the "raw-data" design (GenAgg\({}_{\text{raw}}\)). It contains two tables: the table of the persons (i.e., nodes in the graph and their attributes) and the table of kinship (i.e., pairs of persons and their relationships, edges in the graph). This design overcomes the limitation of spouses present in the single-table design; however, it lacks marriage attribute data and allows only Parent, Spouse, or Sibling relationships. Figure 7: Raw-data design. The third design, "relationship-driven" (GenAgg\({}_{\text{rel}}\)), splits each relationship type (i.e., edge type) into a dedicated table with relevant attributes, thus reducing the model complexity when inferring relationships (Figure 8). Figure 8: Relationship-driven design. An unnormalized, pre-joined dataset can be created to enable more straightforward relationship inference. As shown in Figure 9, such an "aggregative" design (GenAgg\({}_{\text{agg}}\)) duplicates each attribute and stores it in both persons' and relationships' tables. In addition, relationship counts (e.g., number of children) and the age of persons and relationships (e.g., when the first child was born) are pre-calculated and added to the tables to reduce the model's need to count each person's relationships. Figure 9: The "aggregative" design. As shown in Figure 10, the "aggregative" design can be extended with an additional aggregation table for events (birth, death, and marriages) per year and place (GenAgg\({}_{\text{event}}\)). These aggregations may simplify the inference for time and place-related questions. Figure 10: The aggregation table for events. Finally, to further reduce the data size of the DNN input, a 6NF-based [15] (sixth degree of normalization - GenAgg\({}_{\text{6NF}}\)) design was proposed where each non-primary-key column in the "aggregative with events" design (Figures 9 and 10) is moved into a separate table (Figure 11), thus creating a large number of tables with a small number (2-3) of columns. #### 3.1.2 Tables to Questions and Answers The next step in generating the training dataset is to compose questions and answers corresponding to the tables constructed above. This is done for each dataset design in two steps: first, the initial set of questions is compiled using a pattern-based approach, and then a DNN model is applied for rephrasing the questions to expand the initial question set. The patterns are manually created based on common research questions in the genealogical domain. To this end, each attribute has to be classified as a textual or numeric column that fits specific patterns. 
Each attribute can be used with certain predefined types of aggregation functions and conditions, and some attributes can serve as population descriptors (e.g., gender, occupation). For example, the column _age_ is a numeric attribute; thus, it can be used with _min_, _max_, and _average_ numerical aggregation functions, but it does not make sense with _sum_, and does not describe a type of population. Table 2 presents several examples of such patterns and the resulting questions. The initial set of questions can be systematically created with all the possible combinations of attributes and conditions using Algorithm 1. The algorithm iterates over the tables of the dataset (line 1), over the columns of each table (line 2), over pattern-formatting rules (as shown in Table 2) relevant to each column (line 3), and over the genders (i.e., male, female, both) (line 4). With the iteration's data, Algorithm 1 generates N conditions from other columns in the same table (lines 5-9); and sends the iteration's data and the conditions to the pattern-formatting rule (i.e., a function) that returns the question (line 10). Figure 11: A segment of the "aggregative 6NF" design. ``` Input: Dataset (D), Table (D_T), Column (D_T_c), Max number of conditions (N) Output: Question (Q) Initialization: Pattern-formatting rules (Ps), Genders (Gs) 1. foreach D_T in D 2. foreach D_T_c in D_T 3. foreach P in Ps 4. foreach G in Gs 5. conds = [] 6. foreach 1..N 7. foreach D_T_c2 in D_T 8. if D_T_c != D_T_c2 9. conds.append(D_T_c2) 10. yield Q = P(D_T_c, G, conds) ``` \begin{table} \begin{tabular}{l l l l} \hline Aggregation Function & Pattern-based rule example & Pattern input & Result \\ \hline COUNT & How many [POPULATION DESCRIPTOR] were [CONDITION 1]... [and/or/between] [CONDITION N] & population descriptor: women; condition 1: born in England; condition 2: worked as a dressmaker between 1972 to 1982 & How many women were born in England and worked as a dressmaker between 1972 to 1982? \\ MIN & What is the minimum [COLUMN] for [POPULATION DESCRIPTOR] that were [CONDITION 1]... [CONDITION N] & column: age of marriage; population descriptor: women; condition 1: born in Germany between 1850 to 1900 & What is the minimum age of marriage for women that were born in Germany between 1850 to 1900? \\ MAX & What is the [COLUMN 1] with the maximum number of [COLUMN 2]s [were/in] [CONDITION 1]... [and/or/between] [CONDITION N] & column 1: year; column 2: death; condition 1: in Germany & What is the year with the maximum number of deaths in Germany? \\ AVERAGE & What is the average [COLUMN] of [POPULATION DESCRIPTOR] [CONDITION 1]... [CONDITION N] & column: age; population descriptor: men; condition 1: born in Spain; condition 2: worked as a pharmacist between 1850 to 1950 & What is the average age of men that were born in Spain and worked as a pharmacist between 1850 to 1950? \\ \hline \end{tabular} \end{table} Table 2: Pattern examples for question generation in the genealogical domain. To avoid model overfitting and increase the language variability of the questions to better suit real-world applications, a question augmentation method based on a DNN model has to be employed [20]. To this end, a DNN model based on PEGASUS [86] and trained on the Quora dataset [3] is utilized to paraphrase questions (see some examples in Table 3). 
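A minimal sketch of such PEGASUS-based paraphrasing with the HuggingFace transformers library is shown below; the checkpoint name is a publicly available paraphrasing model and an assumption on our part, since the paper does not name the exact weights used:

```python
# A minimal sketch of DNN-based question augmentation with PEGASUS.
# "tuner007/pegasus_paraphrase" is an assumed, publicly available
# paraphrasing checkpoint, not necessarily the one used by GLOBE.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "tuner007/pegasus_paraphrase"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

question = ("How many women were born in England and worked as a "
            "dressmaker between 1972 to 1982?")
batch = tokenizer([question], truncation=True, padding="longest",
                  return_tensors="pt")
outputs = model.generate(**batch, max_length=60, num_beams=10,
                         num_return_sequences=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```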
\begin{table} \begin{tabular}{l l} \hline Pattern-based rule input & PEGASUS paraphrasing output \\ \hline How many women were born in England and worked as a dressmaker between 1972 to 1982? & Between the year 1972 and the year 1982, what was the number of women that held a dressmaker job and lived in England? \\ How many marriages were in Paris in 1950? & How many people got married in 1950 in Paris? \\ What is the minimum age of marriage for women that were born in Germany between 1850 to 1900? & For women from Germany, what is the min marriage age between the year 1850 and the year 1900? \\ How many men were born in Firenze, Italy? & How many men were born in Italy? \\ What is the year with the maximum number of deaths in Germany? & Which is the deadliest year in Germany? \\ What is the average age of men that were born in Spain and worked as a pharmacist between 1850 to 1950? & What is the life expectancy for pharmacist men from 1850 to 1950 in Spain? \\ What is the average number of births when the birth year was greater than 1985? & What is the average number of births when the birth year was less than 1985? \\ What is the portion of men that had three children? & What is the percentage of men with three children? \\ \hline \end{tabular} \end{table} Table 3: Output examples of the question paraphrasing based on PEGASUS. Finally, the corresponding answer is extracted from the appropriate table using the question conditions and aggregation function. The resulting dataset is stored as a set of tuples of the form: (function, question, answer, table/s) as JSON files. ### DNN-based Table Selection In order to train the QA DNN model on the constructed datasets, table selection has to be performed, since the model cannot get all dataset tables as input due to the BERT model input size limit (512 tokens). To this end, the SBERT model was fine-tuned on a subset of the above-created datasets of questions and their source tables [58] to calculate the textual similarity between the given question and each of the tables' content in the dataset. SBERT seems suitable for this task since it was originally designed to calculate the similarity between a search query and its corresponding clicked document based on Siamese neural networks. The SBERT model was initially trained on the MS MARCO dataset that comprises 1,010,916 search queries and the corresponding clicked documents/passages [89]. To fine-tune the SBERT model for the genealogical domain, a dataset of 1,383,586 questions created as part of the GenAgg datasets was used; for each question, the source table(s) was saved as a positive (i.e., similar) example, and the other table(s) was saved as a negative (i.e., dissimilar) example. Then, the top K (or fewer) similar tables are selected and joined using a set of join rules per dataset structure, and a dynamic, unified table is generated. 
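A minimal sketch of this similarity-based table ranking with the sentence-transformers library follows; the base checkpoint and the table serializations are illustrative assumptions (GLOBE further fine-tunes the model on question/source-table pairs from the GenAgg datasets):

```python
# A minimal sketch of SBERT-based table ranking: each table is
# serialized to text, embedded, and scored against the question by
# cosine similarity. The checkpoint (an MS MARCO model) is assumed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("msmarco-distilbert-base-v4")

tables = {  # hypothetical serializations of the dataset's tables
    "persons":   "person name gender birth_year birth_place age",
    "marriages": "marriage husband wife marriage_year marriage_place",
}
question = "What is the minimum age of marriage for women born in Germany?"

q_emb = model.encode(question, convert_to_tensor=True)
t_embs = model.encode(list(tables.values()), convert_to_tensor=True)
scores = util.cos_sim(q_emb, t_embs)[0]

ranked = sorted(zip(tables, scores.tolist()), key=lambda kv: -kv[1])
print(ranked)  # the top-K tables are then joined into a dynamic table
```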
If the new dynamic table size contains more than 512 tokens, the user can be asked to reduce the family tree by selecting the relational degrees in the scope using the Gen-BFS algorithm [69], or otherwise, the question can be returned as unanswerable. If the size of the resulting table is suitable for the model, it is set as an input for the QA DNN model. ### DNN-based QA model for the genealogical domain The proposed QA model is based on the weak supervision implementation of the state-of-the-art TaPas model [32], a BERT [16] encoder model adapted to numerical aggregative QA over tables. TaPas has been designed to answer open-domain questions with three types of aggregation functions: count, sum, and average. The model uses a question and a flattened table as input; a table is flattened into a sequence of word pieces (tokens) and concatenated with the question tokens. The encoder model is extended with two classification layers for selecting table cells and aggregation operators that operate on the cells. The cell selection classification layer determines whether (i.e., the probability of) a given cell should be used for the aggregative operation or not. The aggregation operator layer selects the numerical operation that is needed to answer the question. The input embeddings matrix comprises: (1) position id (like the BERT's index of the token in the flattened sequence), (2) segment id (0 for the question tokens and 1 for the table tokens), (3) column id (0 for the question tokens, column index for the table tokens), (4) row id (0 for the question tokens, row index for the table tokens), and (5) rank id (0 for non-numeric/date cells, order index for numeric/date cells). The embeddings improve the model's ability to understand the token's representation with respect to the question. \begin{table} \begin{tabular}{l l l l} \hline \hline GLOBE & TaPas & f & compute(f, p\({}_{s}\), cp\({}_{s}\), T) \\ \hline \(\checkmark\) & \(\checkmark\) & COUNT & \(\sum_{c\in T}p_{s}(c)\) \\ \(\checkmark\) & \(\checkmark\) & SUM & \(\sum_{c\in T}p_{s}(c)\cdot T[c]\) \\ \hline \hline \end{tabular} \end{table} Table 4: The weak supervision implementation of GLOBE aggregation functions (f), where c is the cell scalar value in table T with selection probabilities (p\({}_{s}\)) for the question and selection probabilities (cp\({}_{s}\)) for the question population. Missing values are set to 0. As can be noticed from Table 4, GLOBE calculates all the functions implemented in TaPas. Note that min and max operations are considered cell selection operations and do not require any computation. In addition, GLOBE defines and implements a new function, _portion_, that was suggested by two interviewed genealogists as necessary for the genealogical domain. This function allows for computing the portion of a specified group compared to the entire population (e.g., "What is the portion of men married under 25 in the UK Queen's dynasty?"). Adding the portion function to the model's loss introduces a new challenge: determining the entire population relevant to the question. The portion function is calculated as the count of rows matching all the criteria in the question divided by the number of rows matching the entire relevant population. For example, the question above needs to be split into two sub-questions: 1) "What is the total number of men in the Queen's dynasty?" and 2) "What is the total number of men married under 25 in the Queen's dynasty?". A minimal sketch of this two-pass computation is shown below. 
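The sketch uses the publicly available HuggingFace TaPas implementation as a stand-in for the GLOBE QA model; the checkpoint, the toy table, and the cell-count heuristic are illustrative assumptions:

```python
# A minimal sketch of the two-pass portion computation with TaPas as a
# stand-in for GLOBE. Checkpoint and table contents are assumptions.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"  # weakly supervised checkpoint
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

table = pd.DataFrame({          # TaPas expects all cells as strings
    "name":         ["Adam", "John", "Mary"],
    "gender":       ["male", "male", "female"],
    "marriage_age": ["23",   "31",   "22"],
})

def count_answer(question):
    inputs = tokenizer(table=table, queries=[question], return_tensors="pt")
    outputs = model(**inputs)
    coords, _ = tokenizer.convert_logits_to_predictions(
        inputs, outputs.logits.detach(), outputs.logits_aggregation.detach())
    return len(coords[0])  # number of selected cells, used here as COUNT

sub_count = count_answer("How many men were married under the age of 25?")
population = count_answer("How many men are there?")
print(sub_count / population if population else 0.0)  # the portion
```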
The portion is the answer to the second question divided by the answer to the first question (i.e., \(\frac{\mathit{men}\,\mathit{married}\,\mathit{under}\,25}{\mathit{men}}\)). Therefore, when calculating the loss, the first word in the question that matches the population type using a predefined list of population types is extracted, and the model is rerun in a non-trainable mode with a fixed pattern ("how many [POPULATION DESCRIPTOR]?"), forcing only the count function calculation. Then, the population cell probabilities are transferred back to the loss to calculate the portion. When no population values are found in the question or no rows are found in the second forward pass of the model, the portion is considered 0. ## 4 Experimental Design To implement and evaluate the effectiveness of the proposed methodology (i.e., finding the best-performing dataset design and model for answering genealogical numerical questions), a series of experiments were carried out as follows. ### Datasets The datasets in this research contained 1,847,200 different individuals from 3,139 genealogical trees (i.e., GEDCOM files) from the corpus of the Douglas E. Goldman Jewish Genealogy Center in the Anu Museum. The Anu Museum collection holds over 5 million different individuals (i.e., nodes) with over 30 million connections (i.e., edges) between persons, families, places, and various multimedia items. The datasets in this research contained genealogical trees that the Anu Museum holds consent and rights to publish online, complying with the European General Data Protection Regulation (GDPR) and the Israeli privacy regulation. Furthermore, the records of living individuals were removed from the datasets as much as possible. All personal data and any data that can be used to identify a person in this paper, including in the tables and figures, have been modified to protect the privacy of these individuals. Based on the filtered GEDCOM files from the above corpus, and after removing some files with parsing or encoding errors, six datasets were generated, one for each dataset design described above. All datasets were split into training (60% - 941,809 questions), test (20% - 220,889 questions), and evaluation (20% - 220,888 questions) sets. It is worth mentioning that there was a difference in the efficiency of the dataset generation (i.e., the time that it takes to generate the dataset) between different dataset designs (e.g., GenAgg\({}_{\text{6NF}}\) requires more JOIN operations compared to GenAgg\({}_{\text{agg}}\)); however, since the time of the dataset generation process is a small portion (less than 5%) of the time of the training process, it does not have a significant influence on the overall training efficiency. ### Validation dataset creation Due to the fact that the paraphrased questions in the dataset were generated automatically, they may contain errors. Therefore, a manually crafted dataset was created for the models' validation. To this end, a crowdsourcing campaign using Amazon Mechanical Turk was launched. A dedicated website was built that displayed randomly selected questions from the dataset along with the paraphrased versions of these questions. Each crowd worker evaluated up to ten items (i.e., pairs of original question vs. paraphrased question), and, for each item, answered the following questions: 1. Is the paraphrased question grammatically correct? 2. Does the paraphrased question preserve the meaning of the original question? 
Before performing the task, each crowd worker got a "test task", as shown in Figure 12, to ensure they understood what needed to be done. If the crowd workers failed the test, they could not continue to perform the actual task. Each item was evaluated by three independent crowd workers, and a paraphrased question was considered correct / meaning-preserving only if at least two of the three crowd workers marked it as such. A total number of 667 crowd workers, who passed the test, were recruited and judged a random sample of 896 pairs of questions (1,792 questions in total) from the evaluation dataset. All participants signed a letter of consent before participating in the experiment. The IRB of the Faculty of Humanities at Bar-Ilan University has approved this study. Figure 12 presents the website's user interface that was built for this study. Figure 12: Crowdsourcing website for building a validation dataset. ### Fine-tuning the GLOBE QA model The GLOBE numerical aggregative QA DNN model was first trained on the SQA [36] and WikiSQL datasets [87], replicating the TaPas training process. Then, the GLOBE QA model was fine-tuned using the generated training datasets, referred to as GenAgg. Each table in the datasets was lowercased and tokenized using WordPiece [83]. To evaluate the effect of the dataset design on the model's accuracy, the table selection DNN models and the numerical aggregative QA DNN models were trained on 1,383,586 questions for each of the six datasets. All the models were trained with the hyperparameters (similar to the TaPas hyperparameters) shown in Tables 5 and 6. ### Accuracy metrics To assess the models' accuracy, the following standard measures were employed in this study: 1. \(Answerable_{acc}\) is the average accuracy of the model on questions that the model is able to answer (i.e., the tables required to answer these questions fit the model's input size). Accuracy is a common metric for QA systems [64]. 2. \(Total_{acc}\) is the total accuracy (for both answerable and unanswerable questions) calculated as follows: \(Total_{acc}=Answerable_{acc}\times(1-\frac{Unanswerable_{questions}}{Total_{questions}})\) where \(Total_{questions}\) is the overall number of questions in the dataset and \(Unanswerable_{questions}\) is the number of questions whose required tables exceed the model's input size. \begin{table} \begin{tabular}{l l} \hline Hyperparameters & Value \\ \hline Max sequence tokens & 512 \\ Batch size & 18 \\ Training steps & 25,000 \\ Loss & cosine similarity \\ Epochs & 10 \\ \hline \end{tabular} \end{table} Table 5: Training hyperparameters for the table selection DNN based on SBERT. \begin{table} \begin{tabular}{l l} \hline Hyperparameters & Value \\ \hline Max sequence tokens & 512 \\ Batch size & 32 \\ Training steps & 25,000 \\ Learning rate & 5e-5 \\ Epochs & 8 \\ \hline \end{tabular} \end{table} Table 6: Training hyperparameters of the GLOBE QA DNN. In addition, for each of the above measures, we use both exact and soft evaluation approaches from previous research [32]. 
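A minimal sketch of the total-accuracy computation defined above, which can be checked against Table 7 (e.g., for GLOBE\({}_{\text{1t}}\): \(24.95\%\times(1-0.20)=19.96\%\)):

```python
# A minimal sketch of the total-accuracy metric: accuracy on the
# answerable questions, scaled by the share of answerable questions.
def total_accuracy(answerable_acc, unanswerable_questions, total_questions):
    return answerable_acc * (1 - unanswerable_questions / total_questions)

# GLOBE_1t in Table 7: 24.95% exact accuracy on answerable questions
# and 20% unanswerable questions -> 19.96% total exact accuracy.
print(total_accuracy(0.2495, 20, 100))  # -> 0.1996
```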
In particular, for a given question \(q\), the answer \(\hat{y}\) returned by the model, and the corresponding correct answer \(y\), the exact and soft accuracy scores are computed as follows: \[Exact_{acc}(\hat{y},y)=\begin{cases}1,&\text{if }\hat{y}=y\\ 0,&\text{if }\hat{y}\neq y\end{cases}\] \[Soft_{acc}(\hat{y},y)=\begin{cases}1,&\text{if }\hat{y}=y\\ 0,&\text{if }\hat{y}\text{ is not a number}\\ 1-\frac{|\hat{y}-y|}{\max(\hat{y},y)},&\text{otherwise}\end{cases}\] ## 5 Experimental Results To evaluate the proposed methodology, we computed the various models' accuracy on the evaluation dataset (i.e., 20% of the dataset that was not part of the training dataset). As a baseline for the comparative model evaluation, TaPas was trained on the GenAgg\({}_{\text{1t}}\) dataset (since it can only handle a single table per question-answer). As expected, it yielded a very low total accuracy (16-21%). As can be seen in Table 7, rich tables with predefined aggregations, as in GenAgg\({}_{\text{event}}\), improve the model's accuracy since they include the aggregations that otherwise need to be inferred by the model. However, joining data also leads to a high percentage of unanswerable questions (as for GenAgg\({}_{\text{agg}}\) and GenAgg\({}_{\text{event}}\)), due to the fact that these large tables are often larger than the model's input size limit. When answering multi-table questions, the 6NF design creates simple and focused tables based on the given question, which dramatically increases the model's accuracy. Moreover, this design also reduces the table size and allows the model to answer almost any question. Overall, the 6NF design achieved the highest total accuracy (58-87%), which is 3-4 times higher than the baseline and other models. Furthermore, the dramatic increase in accuracy for all the models when using soft evaluation shows that when the GLOBE QA models are wrong, their predictions are very close to the correct answer. \begin{table} \begin{tabular}{l l l l l l l l} \hline QA model & Table selection model & Dataset & Exact accuracy of answerable questions & Soft accuracy of answerable questions & Unanswerable questions & Total exact accuracy & Total soft accuracy \\ \hline TaPas\({}_{\text{1t}}\) & NA & GenAgg\({}_{\text{1t}}\) & 23.49\% & 30.57\% & 31\% & 16.21\% & 21.09\% \\ GLOBE\({}_{\text{1t}}\) & NA & GenAgg\({}_{\text{1t}}\) & 24.95\% & 32.47\% & 20\% & 19.96\% & 25.97\% \\ GLOBE\({}_{\text{raw}}\) & SBERT\({}_{\text{raw}}\) & GenAgg\({}_{\text{raw}}\) & 24.48\% & 30.76\% & 51.6\% & 11.85\% & 14.89\% \\ GLOBE\({}_{\text{rel}}\) & SBERT\({}_{\text{rel}}\) & GenAgg\({}_{\text{rel}}\) & 25.67\% & 32.67\% & 48\% & 13.35\% & 16.98\% \\ GLOBE\({}_{\text{agg}}\) & SBERT\({}_{\text{agg}}\) & GenAgg\({}_{\text{agg}}\) & 47.52\% & 77.72\% & 78.7\% & 10.12\% & 16.55\% \\ GLOBE\({}_{\text{event}}\) & SBERT\({}_{\text{event}}\) & GenAgg\({}_{\text{event}}\) & 44.47\% & 64.41\% & 63\% & 16.45\% & 23.83\% \\ \hline \end{tabular} \end{table} Table 7: GLOBE models' weak supervision accuracy. It is worth mentioning that the accuracy of the models depends on both the answerable questions' rate (i.e., questions that the model can answer with respect to its input size limit) and the accuracy of the answers. To further improve the model's performance, we investigated the possible factors that could affect prediction accuracy. Thus, the quality of the question dataset may have a crucial effect on the model's results. The question dataset was automatically generated from the data tables based on predefined patterns and a paraphrasing DNN model. 
As shown in Table 8, the crowd workers' evaluation of the paraphrased questions in the dataset indicated that only 46.3% of them were grammatically correct and preserved the meaning of the original question. While the model should overcome some grammatical errors, it is harder to handle a change in the meaning of the question (e.g., when a condition is removed or modified). To eliminate the impact of incorrect questions, the best GLOBE QA model (i.e., GLOBE\({}_{\text{6NF}}\)) was validated only on questions marked as correct by the crowd workers. As shown in Table 9, there is a substantial increase (of over 20%) in the total exact accuracy when using high-quality questions. However, the small effect on the soft accuracy suggests that low-quality data divert the model prediction "just a bit" compared to the prediction on high-quality data. Moreover, the fact that there is only a small difference between the questions with correct grammar and questions with grammatical errors demonstrates the model's ability to overcome most grammatical errors. The validation accuracy displayed in Table 9 is the expected accuracy of the model in real-world applications with questions written by humans. \begin{table} \begin{tabular}{l l l l} \hline Evaluation value & Paraphrased question is grammatically correct & Paraphrased question preserves the meaning of the original question & Paraphrased question is both grammatically correct and meaning preserving \\ \hline Yes & 635 & 604 & 415 \\ No (otherwise) & 261 & 292 & 481 \\ \% correct & **70.8\%** & **67.4\%** & **46.3\%** \\ \hline \end{tabular} \end{table} Table 8: Crowd workers' evaluation results on the paraphrased question generation. \begin{table} \begin{tabular}{l l l l} \hline Dataset & Questions & Total exact accuracy & Total soft accuracy \\ \hline GenAgg\({}_{\text{6NF}}\) evaluation & 220,889 & 58.57\% & 87.04\% \\ Validation dataset - preserved meaning & 1,208 & 78.84\% & 87.52\% \\ Validation dataset - preserved meaning and correct grammar & **830** & **80.15\%** & **87.52\%** \\ \hline \end{tabular} \end{table} Table 9: GLOBE\({}_{\text{6NF}}\) validation accuracy. Another possible influence factor on the model's accuracy could be the selected table quality. We assessed the accuracy of the table selection model using the evaluation dataset. As shown in Table 10, the simpler the dataset (i.e., fewer tables), the better the table selection model accuracy. Interestingly, one exception is the 6NF database, which contains a large number of tables. It is worth mentioning that in many cases, even if the table selection model was wrong, the QA model was able to predict the exact correct answer, since for many questions (in some datasets), there is more than one table that contains the answer. For instance, for the question "How many children do London residents have on average?", the answer can be calculated by counting the children of the fathers/mothers or by counting the fathers/mothers of the children. Moreover, there was no difference in accuracy when eliminating the DNN-based table selection errors by inputting only the correct tables into the model. The aggregative operation may also be a possible influence factor on the model's accuracy. As can be seen in Table 11, there was a difference between different mathematical operations. While the exact accuracy is similar, there is a considerable difference in the soft accuracy. 
Operations with high soft accuracy (such as Count) show that the predictions were close to the correct answers even when the model was wrong. However, no single operation shows a significantly better (or worse) accuracy than the other operations. \begin{table} \begin{tabular}{l l l l} \hline Model & Dataset & Total number of tables & Accuracy \\ \hline SBERT\({}_{\text{raw}}\) & GenAgg\({}_{\text{raw}}\) & 2 & 63.93\% \\ SBERT\({}_{\text{rel}}\) & GenAgg\({}_{\text{rel}}\) & 4 & 63.47\% \\ SBERT\({}_{\text{agg}}\) & GenAgg\({}_{\text{agg}}\) & 4 & 61.26\% \\ SBERT\({}_{\text{event}}\) & GenAgg\({}_{\text{event}}\) & 5 & 60.26\% \\ SBERT\({}_{\text{6NF}}\) & GenAgg\({}_{\text{6NF}}\) & 77 & **87.5\%** \\ \hline \end{tabular} \end{table} Table 10: Table selection model accuracy. Table 12 illustrates the accuracy of the top GLOBE models (i.e., GLOBE\({}_{\text{6NF}}\), GLOBE\({}_{\text{event}}\), GLOBE\({}_{\text{agg}}\)). The table presents anecdotal examples of questions answered by the models. While the best model (i.e., GLOBE\({}_{\text{6NF}}\)) has the highest overall accuracy, in some cases, other models were able to provide correct answers to the questions for which GLOBE\({}_{\text{6NF}}\) failed. However, no clear pattern was found for the questions that GLOBE\({}_{\text{6NF}}\) failed to answer correctly (while other models succeeded). A deeper error analysis shows that, in some cases, when the question contains one number, and there is a cell in the table with that number, the models predict this cell as the only relevant one; this may suggest that the models need further training. When the size (i.e., the sequence length) of the tables required to answer the question is larger than the model's input size, the question is marked as unanswerable. Another type of error occurs when the models fail to predict the aggregation function, thus yielding results that are not in the expected range (e.g., numbers larger than one for portion questions or lower than one for count questions). Finally, to evaluate the added complexity of the GLOBE QA model (i.e., the effect of adding the portion aggregation function), both the TaPas and GLOBE QA models were trained and evaluated on SQA (one of the open-domain datasets tested in [32]) that does not contain portion questions. When comparing the TaPas and GLOBE\({}_{\text{1t}}\) models' exact accuracy on the SQA dataset (with weak supervision), only a slight, insignificant difference was observed (78.1% vs. 77.8%). \begin{table} \begin{tabular}{l l l l l} \hline Question & GLOBE\({}_{\text{6NF}}\) & GLOBE\({}_{\text{event}}\) & GLOBE\({}_{\text{agg}}\) & Correct answer \\ \hline What is the average age of people born in Argentina? & 65.0 & unanswerable & unanswerable & 65.0 \\ What is the average age of people with the first name Sara and age of 45? & 45 & 45 & 45 & 60.4 \\ How many females' first name is Shira? & 1 & 0.96 & 1 & 1 \\ What is the portion of females born in POLAND? & 0.1428 & 0.13 & 0 & 0.1410 \\ The maximum age of a person in Germany? & 105 & unanswerable & unanswerable & 105 \\ What is the portion of women with the last name Gershon and the age on the first child of 24? & 0.02 & 24 & 24 & 0.006 \\ What is the portion of people with the last name Gershon? & 0.15 & 0.142 & 0.142 & 0.1493 \\ How many people were born on average every year between 1850 to 1880? & 11.2 & 14.1 & unanswerable & 14.1 \\ How many people's birthplace is Kurdistan? & 3 & 3 & unanswerable & 3 \\ \hline \end{tabular} \end{table} Table 12: Answer prediction by the GLOBE models. ## 6 Discussion and Conclusions This paper outlines and implements an end-to-end multi-phase methodology for a novel challenging task of numerical aggregation QA in the genealogical domain. The presented methodology was evaluated using a large corpus of 1,847,200 different persons. The obtained results show that while on generic data (e.g., SQA), the GLOBE QA model seems to have no benefit, on genealogical data it outperformed the state-of-the-art model and achieved 80-87% accuracy. It also effectively implemented a new aggregation function highly popular for genealogical research, portion, that was not supported in the previous research. This finding shows that the genealogy domain is distinctive in complexity, characteristics and requirements, and thus needs dedicated training methods, data modeling, and fine-tuned DNN models. The study also examined the dataset design's effect on the QA model's accuracy. The results show that the complexity of the genealogical domain requires a more complex pipeline that can split and reconstruct the data tables based on a question, where the most effective design is based on a 6NF approach. As expected, the results also indicate the importance of high-quality data and the negative effect of errors of automatic data augmentation (question paraphrasing) on the model's accuracy. In summary, this study's contributions are: (1) the optimal table-based dataset design for the numerical aggregation QA for the genealogical domain (GenAgg\({}_{\text{6NF}}\)); (2) an automated method for the training dataset generation for the genealogical domain; and (3) optimal fine-tuned table selection and numerical QA models for the genealogical domain (GLOBE QA). The study may also have a substantial societal impact as genealogical centers, museums and various commercial organizations aim to allow users and experts without mathematical or programming training to investigate their large family tree databases. For example, imagine walking into a genealogical center and researching your own dynasty migration paths by asking questions like "How many people from my family were born in England but died in another country?", or a Holocaust researcher asking "What is the average marriage age of women in Germany between 1919-1939?" and then comparing it to the answer to the question "What is the average marriage age of women in Germany between 1945-1965?". To answer a variety of questions, such systems should incorporate factual and numerical aggregation QA based on the data stored within the GEDCOM files. Furthermore, practical and scientific implications of this study for the genealogical domain include researching communities, migration, plagues, and marriage cultures all over the globe. 
Future research may focus on (1) combining different methods for overcoming the sequence length limitation of the DNN models (e.g., [18, 19, 42, 85]), (2) improving the system's accuracy by developing a model selection or a multi-model method for various question types, (3) training other models (and performing hyperparameter optimization) on genealogical data to further optimize numerical QA models for the genealogical domain, (4) developing and comparing the proposed method to a GNN model [90] by embedding nodes [91, 92] per relation degree [69], (5) adapting text-to-text models to present an explanation of the predicted answers [61, 52], (6) a deep error analysis of the models to identify possible improvements or to use the models as a mixture of experts (i.e., using different models for different questions or graphs), and (7) building the optimal user interface for the task of genealogical question-answering. Finally, in addition to the numerical aggregation QA task, the developed end-to-end methodology can also be applied to other downstream genealogical NLP (Natural Language Processing) tasks, including entity extraction, summarization, and classification. ## 7 Acknowledgments This work was partially supported by a grant from the Israel data science initiative (IDSI). The data of this work was granted for use by the Douglas E. Goldman Jewish Genealogy Center of Anu Museum with permission to publish statistical results.
2304.08566
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
Graph neural networks (GNNs) have emerged as a state-of-the-art approach to model and draw inferences from large scale graph-structured data in various application settings such as social networking. The primary goal of a GNN is to learn an embedding for each graph node in a dataset that encodes both the node features and the local graph structure around the node. Embeddings generated by a GNN for a graph node are unique to that GNN. Prior work has shown that GNNs are prone to model extraction attacks. Model extraction attacks and defenses have been explored extensively in other non-graph settings. While detecting or preventing model extraction appears to be difficult, deterring them via effective ownership verification techniques offers a potential defense. In non-graph settings, fingerprinting models, or the data used to build them, has been shown to be a promising approach toward ownership verification. We present GrOVe, a state-of-the-art GNN model fingerprinting scheme that, given a target model and a suspect model, can reliably determine if the suspect model was trained independently of the target model or if it is a surrogate of the target model obtained via model extraction. We show that GrOVe can distinguish between surrogate and independent models even when the independent model uses the same training dataset and architecture as the original target model. Using six benchmark datasets and three model architectures, we show that GrOVe consistently achieves low false-positive and false-negative rates. We demonstrate that GrOVe is robust against known fingerprint evasion techniques while remaining computationally efficient.
Asim Waheed, Vasisht Duddu, N. Asokan
2023-04-17T19:06:56Z
http://arxiv.org/abs/2304.08566v2
# GrOVe: Ownership Verification of Graph Neural Networks using Embeddings ###### Abstract. Graph neural networks (GNNs) have emerged as a state-of-the-art approach to model and draw inferences from large scale graph-structured data in various application settings such as social networking. The primary goal of a GNN is to learn an _embedding_ for each graph node in a dataset that encodes both the node features and the local graph structure around the node. Embeddings generated by a GNN for a graph node are unique to that GNN. Prior work has shown that GNNs are prone to model extraction attacks. Model extraction attacks and defenses have been explored extensively in other non-graph settings. While detecting or preventing model extraction appears to be difficult, deterring them via effective _ownership verification techniques_ offers a potential defense. In non-graph settings, _fingerprinting_ models, or the data used to build them, has been shown to be a promising approach toward ownership verification. We present GrOVe, a state-of-the-art GNN model fingerprinting scheme that, given a _target_ model and a _suspect_ model, can reliably determine if the suspect model was trained independently of the target model or if it is a _surrogate_ of the target model obtained via model extraction. We show that GrOVe can distinguish between surrogate and independent models even when the independent model uses the same training dataset and architecture as the original target model. Using six benchmark datasets and three model architectures, we show that GrOVe consistently achieves low false-positive and false-negative rates. We demonstrate that GrOVe is _robust_ against known fingerprint evasion techniques while remaining computationally _efficient_. Graph Neural Networks, Model Extraction, Ownership Verification. 
2303.09634
Causal Temporal Graph Convolutional Neural Networks (CTGCN)
Many large-scale applications can be elegantly represented using graph structures. Their scalability, however, is often limited by the domain knowledge required to apply them. To address this problem, we propose a novel Causal Temporal Graph Convolutional Neural Network (CTGCN). Our CTGCN architecture is based on a causal discovery mechanism and is capable of discovering the underlying causal processes. The major advantages of our approach stem from its ability to overcome computational scalability problems with a divide-and-conquer technique, and from the greater explainability of predictions made using a causal model. We evaluate the scalability of our CTGCN on two datasets to demonstrate that our method is applicable to large-scale problems, and show that the integration of causality into the TGCN architecture improves prediction performance by up to 40% over a typical TGCN approach. Our results are obtained without requiring additional domain knowledge, making our approach adaptable to various domains, specifically when little contextual knowledge is available.
Abigail Langbridge, Fearghal O'Donncha, Amadou Ba, Fabio Lorenzi, Christopher Lohse, Joern Ploennigs
2023-03-16T20:28:36Z
http://arxiv.org/abs/2303.09634v1
# Causal Temporal Graph Convolutional Neural Networks (CTGCN) ###### Abstract. Many large-scale applications can be elegantly represented using graph structures. Their scalability, however, is often limited by the domain knowledge required to apply them. To address this problem, we propose a novel Causal Temporal Graph Convolutional Neural Network (CTGCN). Our CTGCN architecture is based on a causal discovery mechanism, and is capable of discovering the underlying causal processes. The major advantages of our approach stem from its ability to overcome computational scalability problems with a divide and conquer technique, and from the greater explainability of predictions made using a causal model. We evaluate the scalability of our CTGCN on two datasets to demonstrate that our method is applicable to large scale problems, and show that the integration of causality into the TGCN architecture improves prediction performance up to 40 % over typical TGCN approach. Our results are obtained without requiring additional domain knowledge, making our approach adaptable to various domains, specifically when little contextual knowledge is available. Graph Neural Networks, Causal Inference, Time Series, Scaling AI, Spatiotemporal ## 1. Introduction Numerous real-world processes contain interconnected dynamics characterised by a complex organizational structure. Examples include environmental systems, energy grids, and epidemic outbreaks. Many classical mathematical frameworks exist to model these dynamics such as Navier-Stokes or convective-diffusion equations. While these explicitly encode known relationships, machine learning provides opportunity to resolve established and latent dynamics. Graphs have been historically used to model many of the underlying structures and physical behaviours of spatial systems such as road networks (Srivastava et al., 2017), building thermodynamics (Srivastava et al., 2017), and water and energy grids (Koshelev et al., 2017). This allows practitioners to explicitly encode their domain knowledge. A fundamental assumption in these models is that the underlying structure of these dynamics are well known to be modelled. But, most real-world applications do not admit a-priori knowledge of these structures. Therefore, we aim to create a graph model that learns this structure using causal discovery. Recent Temporal Graph Convolutional Neural Networks (TGCN) (Koshelev et al., 2017; Goyal et al., 2017) utilise this available domain knowledge in graphs by combining learning over temporal and graphical features. These TGCN models also assume that spatial information or similar prior knowledge regarding connections is available (Goyal et al., 2017; Goyal et al., 2017). However, in practise, many use cases for spatiotemporal modelling do not have well-defined graph structures. In contrast, in most practical applications we see clients collect new timeseries data faster than they are able to specify contextual information which grows the problem. This lack or uncertainty of a-priori spatial knowledge is intensified for downstream tasks of the TGCN. First, incorrect graph models will strongly influence the prediction performance of the model limiting optimization and diagnostic tasks. 
Second, recent works have investigated the efficacy of post-hoc explanations for graph convolutional neural network (GCNN) predictions (Sriramulu et al., 2019), with some approaches using causal inference methods (Han et al., 2019), but these fundamentally rely on the correctness and completeness of the graph over which they are explaining. Some work has investigated treating the graph as a learnable parameter to optimise for a given downstream task (Sriramulu et al., 2019; Chen et al., 2020). Sriramulu et al. (Sriramulu et al., 2019) proposed an efficient method to construct a dynamic dependency graph based on statistical structure learning models. Prominent limitations of these methods is that they are prone to learning spurious correlations and not the underlying causes. Other work looks into identifying the graph from physics equations from scientific papers utilizing NLP approaches (Chen et al., 2020), but it assumes that systems exhibit static physical behaviour. Also traditional Bayesian structure learning approaches are often not applicable to high-dimensional, real-world data due to their computational cost and unrealistic assumptions which include stationarity, the absence of latent confounders in data, and that none of the underlying relationships are contemporaneous (Kang et al., 2019). In this work, we present a novel, scalable method for deducing causal relationships in large observational data, which we integrate into a TGCN architecture to overcome the limitation of requiring a-priori domain knowledge. This gives our causal-informed TGCN (CTGCN) visibility of the underlying structural causal model in an automated way and facilitates its self-adaptation and scalability in diverse large-scale applications. Furthermore, we investigate the computational complexity and predictive skill of the causal discovery process in order to improve the performance for large scale, real-world applications and non-IID datasets. We further study the effects of decomposing the causal discovery problem on different downstream predictive task, as we posit that integrating structural causal models into the graph convolution layer of a TGCN model improves model performance, robustness and explainability. The main contributions of this work are: * We develop a novel causal-informed TGCN discovery method in order to facilitate adoption for large-scale, interconnected, and complex applications. * We extend existing causal discovery methods with spatial and temporal decomposition to improve their scalability for large-scale applications. * We demonstrate that the integration of causal structure information into predictive models through a graph convolution layer improves forecasting performance. ## 2. Related Work First, we provide a background on spatiotemporal graph convolutional neural networks (GCNN). We then discuss the utility of existing causal inference methods for large-scale data. ### Spatiotemporal GCNN Our approach relies on GCNN (Han et al., 2019), initially introduced by Bruna et al. (Bruna et al., 2020), and extended by Duvenaud et al. (Duvenaud et al., 2020) with fast localised convolutions. Approaches such as TGCN (Han et al., 2019; Sriramulu et al., 2019; Sriramulu et al., 2019) augment this method with sequence-to-sequence learning to encode temporal dynamics. 
The penetration of GCNN into sensor-driven applications using time series data led to the extension of GCNN with sequence-to-sequence learning methods such as Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM) for forecasting applications (Kirshman et al., 2019). The combination of GCNN with sequence-to-sequence learning methods is generally motivated by the need to simultaneously capture spatial and temporal dependencies. In this case, the GCNN is used to capture spatial dependencies by building a filter that acts on the nodes and their \(n\)-th order neighbourhood (usually first order). This enables the filter to capture spatial features between the nodes, and further the GCNN can be developed by stacking multiple convolutional layers. The sequence-to-sequence learning method is employed to capture dynamic changes of the systems. However, modelling complex topological structures with sequence-to-sequence learning usually fails in capturing a system's underlying causal processes and thus restricts the input-output relationships to correlations. Scholkopf highlights the robustness of similar approaches as a key problem, suggesting the integration of causal modelling in ML could overcome this (Scholkopf, 2017). Many real-world problems that we are interested in studying violate the IID assumption, which underpins many correlational approaches. We propose an extension of the TGCN architecture that introduces a scalable causal convolution to capture the characteristics of the underlying system. ### Causal Discovery Time-series data has long been a challenging problem in causal discovery: while the canonical order of data facilitates the directing of causal links (thus overcoming Markov equivalence), strong autocorrelation and the presence of unobserved confounders reduces detection power (Kal To evaluate the performance of our approach, we define the spatiotemporal forecasting problem as learning the mapping function f using the structural information provided by the adjacency matrix \(A\) and the features \(\mathcal{X}\). A GCNN model constructs the mapping function f as filter in the Fourier domain. The filter then acts on the nodes of the graph and its first order neighbourhood. This allows the topological structure and the spatial features between the nodes to be captured. Subsequently, the GCNN model can be established by stacking multiple convolutional layers. The GCNN is given by \[\text{f}\left(\mathcal{X},A\right)=\sigma\left(\hat{A}\ \operatorname{ReLU} \left(\hat{A}\mathcal{X}W^{(0)}\right)\ W^{(1)}\right), \tag{1}\] where \(\mathcal{X}\) is the feature matrix, \(A\) represents the adjacency matrix, \(\hat{A}=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\) is a preprocessing step, \(\tilde{A}=A+I_{N}\) is a matrix that considers the features of the nodes for which the learning is conducted, \(\tilde{D}\) is a degree matrix, where \(\tilde{D}=\sum_{j}\tilde{A}_{ij}\), \(W^{(0)}\) and \(W^{(1)}\) represent the weight matrix in the first and second neighborhood, and \(\sigma\), and \(\operatorname{ReLU}\) represent the activation functions. ### Causal Inference of the Adjacency Matrix To solve the above GCNN problem, we need to identify the adjacency matrix \(A\). 
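Before turning to how \(A\) is identified, the propagation rule in Eq. (1) can be made concrete with a short numpy sketch (a minimal dense illustration with made-up shapes; the outer activation \(\sigma\) is left as the identity):

```python
import numpy as np

def gcn_forward(X, A, W0, W1):
    """Minimal dense sketch of the two-layer GCNN in Eq. (1).
    X: (N, F) feature matrix, A: (N, N) adjacency matrix,
    W0, W1: weight matrices with illustrative shapes."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # self-loops: A~ = A + I_N
    d = A_tilde.sum(axis=1)                    # diagonal of the degree matrix D~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # A^ = D~^(-1/2) A~ D~^(-1/2)
    H = np.maximum(A_hat @ X @ W0, 0.0)        # ReLU(A^ X W0)
    return A_hat @ H @ W1                      # sigma taken as the identity here

# toy usage with random shapes
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.7).astype(float)
A = np.maximum(A, A.T)                         # make undirected
out = gcn_forward(rng.normal(size=(5, 4)), A,
                  rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
print(out.shape)                               # (5, 2)
```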
We frame this as a causal learning problem on the structure of the underlying causal processes: \[\mathcal{X}_{t}^{j}=\mathrm{g}_{j}\left(\mathcal{P}(\mathcal{X}_{t}^{j}),\eta_{t}^{j}\right), \tag{2}\] where \(\mathrm{g}_{j}\) are arbitrary measurable functions that depend non-trivially on the causal parents \(\mathcal{P}\) of the given node \(j\), and \(\eta\) is independent noise obscuring the causal processes. As described in Section 1.3, there are various methods for discovering these causal relationships. In this work, we adapt the constraint-based method PCMCI\({}^{\star}\) presented by Runge (2007). This method increases the detection power over highly autocorrelated data compared to seminal methods (Kang et al., 2017), and its ability to detect contemporaneous links makes it particularly suitable for dealing with noisy real-world spatiotemporal data. Causal discovery methods such as PCMCI\({}^{\star}\) are bound by assumptions that the observations are faithful to and fully representative of the underlying processes, and that these processes are stationary and acyclic. PCMCI\({}^{\star}\) is based on Runge's definition of a momentary conditional independence (MCI) test (Kang et al., 2017) that, for each lagged observation \(\mathcal{X}_{t-\tau}^{j}\), \(\mathcal{X}_{t}^{k}\) of a feature pair \(j,k\), tests for the existence of a causal relationship given a lag \(\tau\in(1,\ldots,\tau_{max})\). If the \(p\)-value of the test is above the significance threshold \(\alpha\), we consider a binary causal relationship between \(\mathcal{X}_{t-\tau}^{j}\), \(\mathcal{X}_{t}^{k}\), where \(c_{j,k,t,\tau}=\mathrm{L}(p_{j,k,t,\tau}>\alpha)\) with the logical function \(\mathrm{L}(b)\), which is \(1\) if condition \(b\) evaluates true and \(0\) otherwise. Figure 1. Block diagram of the proposed causal TGCN architecture. The biggest limitation of the approach is its scalability. As an MCI test is computed for each lagged combination of features, we observe a worst-case time complexity of \(\mathcal{O}(P\cdot((N\cdot\tau_{max})^{2}+e^{N}))\) for PCMCI\({}^{\star}\). While this is a significant improvement on the original PC algorithm's worst case \(\mathcal{O}(P\cdot e^{N\cdot\tau_{max}})\), it is still intractable for large \(N\) and \(P\). The choice of CI tests also affects the runtime, with methods more robust to non-linear relationships, and therefore more generalisable, increasing computational cost. ### Temporal Decomposition A key limitation of existing causal discovery methods is the assumption of stationarity, which is unrealistic for data spanning months or years and encompassing varying temporal dynamics. To overcome this and improve the scalability of our method, we propose splitting the data along the time axis into approximately stationary periods with \(P_{T}\) observations. These periods are typically derived based on generalised domain expertise or statistical analysis of exogenous features. Human systems such as buildings, energy, transport, or finance often exhibit daily, weekly, or monthly patterns, while natural systems such as oceans, agriculture, and weather often exhibit daily, seasonal, or annual dynamics. ### Spatial Decomposition Modern systems in manufacturing, smart cities, and the automotive industry are highly monitored with thousands of IoT devices. The large spatial dimensionality is computationally challenging for existing CI algorithms. Intelligent spatial decomposition can dramatically reduce computational expense.
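To make the temporal decomposition concrete before detailing the spatial step, the per-period discovery can be sketched with the tigramite package, Runge's reference implementation of the PCMCI family (a hedged illustration: import paths and option names vary across tigramite versions, and a link is kept here when its MCI \(p\)-value passes the significance test):

```python
import numpy as np
import tigramite.data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr  # path differs in newer versions

def discover_per_period(data, period_len, tau_max=6, alpha=0.05):
    """Run PCMCI+ on approximately stationary periods of `period_len`
    samples from a (T, N) array, returning one binary link array of
    shape (N, N, tau_max + 1) per period."""
    link_arrays = []
    for start in range(0, data.shape[0] - period_len + 1, period_len):
        chunk = pp.DataFrame(data[start:start + period_len])
        pcmci = PCMCI(dataframe=chunk, cond_ind_test=ParCorr())
        res = pcmci.run_pcmciplus(tau_max=tau_max, pc_alpha=alpha)
        # binary link indicator per (j, k, tau) from the returned p-values
        link_arrays.append((res['p_matrix'] <= alpha).astype(int))
    return link_arrays
```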
Many decomposition approaches exist such as statistical, domain-inspired, or rule-based methods. Dynamic time warping (DTW) is a pattern-matching approach to the alignment of time-series data first proposed for speech recognition by Sakoe & Chiba (2018). It has more recently been popularised as a method for the unsupervised classification of large temporal data in applications from finance to astronomy (Bahdan et al., 2017). The principal advantage of the DTW approach over Euclidean distances lies in the fact that one-to-many relationships can be mapped between candidate timeseries, which facilitates the identification of shared trends even if they occur on different timescales. By utilising unsupervised DTW clustering on the feature axis, we can decompose the causal discovery problem into several sub-problems which we solve independently. We evaluate PCMCI\({}^{\star}\) within each cluster and for each temporal period. This divide-and-conquer approach reduces the worst-case complexity of PCMCI\({}^{\star}\) to \(\mathcal{O}(D\cdot P_{T}\cdot((N_{C}\cdot\tau_{max})^{2}+e^{N_{C}}))\) with \(P_{T}\) being the temporal decomposition period, \(D\) being the cluster number and \(N_{C}\) being the maximum cluster size. We posit that by clustering in this way, we minimise the number of cross-cluster relationships in the underlying causal model, and therefore maximise the recall of the decomposed problem. Further, as DTW clustering is an unsupervised method, this decomposition requires few additional parameters to be run automatically as shown in Table A3. ### Adjacency Matrix Construction The causal discovery steps outlined above produce results \(c_{j,k,t,\tau}=\mathrm{L}(p_{j,k,t,\tau}>\alpha)\) for each \(\tau\)-lagged timestep \(t\) and detect causal relationships which span from contemporaneous (\(\tau=0\)) up to a maximum lag (\(\tau_{max}\)) which exceed some significance threshold \(\alpha\). These parameters and their selection are summarised in Table A1. After the temporal and spatial decomposition we retrieve \(c_{j,k,t,\tau}\) for all temporal and spatial clusters and aggregate them in form of our causal adjacency matrix \(A\). We first aggregate all test results within each temporal \(\mathcal{T}\) and spatial \(\mathcal{S}\) sample set and perform a majority vote \[\hat{c}_{j,k,\mathcal{T},\mathcal{S}}=\operatorname{M}\!\left(\sum_{t=0}^{P_{T}} \sum_{\tau=0}^{\min(t,\tau_{max})}\frac{1}{P_{T}(\tau_{max}-1)}c_{j,k,t,\tau}\right) \tag{3}\] with the majority voting function \(\operatorname{M}\!\operatorname{V}(v)\) that is \(1\) if \(v>0.5\) and \(0\) otherwise. This filters out causal relationships that were only discovered in specific time steps, but, are not common within a temporal and spatial sample set. We then aggregate the votes from all temporal and spatial sample sets in our causal adjacency matrix \(A\). We use the following aggregation strategies: \[a_{j,k}^{\operatorname{ANY;W}}=\sum_{\mathcal{S}}\sum_{\mathcal{T}}\hat{c}_{ j,k,\mathcal{T},\mathcal{S}}, \tag{4}\] The weighted aggregation \(a_{j,k}^{\operatorname{ANY;W}}\) computes the total sum of causality test results. The weighted majority vote \(a_{j,k}^{\operatorname{MT;W}}\) considers the weight of relationships occurring in a majority of sample sets in the number \(T\) of temporal sets. 
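In code, the aggregation reduces to a few array reductions. The sketch below assumes one particular array layout and reads the majority threshold as "present in more than half of all sample sets"; since Eq. (4) is only stated for the weighted ANY case, the MT variant here is an interpretation of the text, and the binarisation \(\operatorname{B}(v)\) described next is included for completeness:

```python
import numpy as np

def aggregate_adjacency(c):
    """Sketch of Eqs. (3)-(4). `c` holds binary MCI results with shape
    (S, T, P_T, tau_max + 1, N, N): spatial sets x temporal sets x
    time steps x lags x feature pair. Layout and the exact majority
    thresholds are illustrative assumptions."""
    S, T = c.shape[:2]
    # Eq. (3): normalised sum over time steps and lags per sample set
    # (approximated by a plain mean), then a majority vote M(v > 0.5)
    c_hat = (c.mean(axis=(2, 3)) > 0.5).astype(int)   # (S, T, N, N)
    # Eq. (4), weighted ANY: total votes across all sample sets
    a_any_w = c_hat.sum(axis=(0, 1))                  # (N, N)
    # weighted majority threshold (MT): keep weights only for links
    # found in a majority of the S*T sample sets
    a_mt_w = np.where(a_any_w > (S * T) / 2, a_any_w, 0)
    # B(v): the unweighted variants simply binarise the weighted ones
    a_any_uw = (a_any_w > 0).astype(int)
    a_mt_uw = (a_mt_w > 0).astype(int)
    return a_any_w, a_mt_w, a_any_uw, a_mt_uw
```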
The unweighted aggregations \(a_{j,k}^{\operatorname{ANY;UW}}\) and \(a_{j,k}^{\operatorname{MT;UW}}\) reduce the weighted adjacency matrix to a binary one using the binary function \(\operatorname{B}(v)\) that is \(1\) if \(v>0\) and \(0\) otherwise. If we do not model the directed behaviour of the system, we can simplify this to an undirected matrix via \(a_{j,k}=\operatorname{B}(a_{j,k}+a_{k,j})\). ### Evaluating Performance Forecasting is a common task in spatiotemporal applications for monitoring, anomaly detection, optimisation, etc. We consider the task of windowed forecasting, where a model is trained to infer the subsequent \(q\) measurements given a finite history of length \(\lambda\). Despite the growing popularity of sequence-to-sequence learners for forecasting applications, we consider the foundational case of a simple one-dimensional convolution along the time axis to capture the temporal behaviour of the system. This has the dual benefit of reducing training overhead for improved scalability and clearly demonstrating the effectiveness of our method even with a simplistic architecture. The simpler network structure also enables explainability (Srivastava et al., 2017), which is important for practical applications. We design a forecasting architecture which begins with a temporal convolution, followed by the causal graph convolution layer. We posit that this ordering allows the temporal dynamics of the system to be captured at each node, and then these distilled features can be used to inform forecasting at causally-related nodes. The TGCN we employ builds on Kipf & Welling (2014) and is implemented in PyTorch Geometric (Kipf and Welling, 2021). A final linear layer produces a forecast of length \(q\). The model is trained in batches using RMSE loss. Hyperparameters are tuned for each dataset using the adaptive grid search method provided by Optuna (2021). ## 4. Experiments We demonstrate the performance of our method in two contexts, selected to show the generalisability of the approach. Experiments were conducted on an Apple M1 Max 32GB MacBook Pro (2021) running MacOS Monterey 12.6 and Python 3.9.12. Code to replicate our experiments is provided at [https://tinyurl.com/ctgcn-code](https://tinyurl.com/ctgcn-code) and will be open-sourced. ### Building Heating System The first dataset is heterogeneous and multivariate, and is generated from a simulated heating, ventilation, and air conditioning (HVAC) system which controls the temperature of two rooms and an interconnecting corridor (Zhou et al., 2017). It consists of data from thirty sensors measuring the system variables, including the internal status of the system's boiler, chiller, and ventilation units, the temperature within each room and externally, and room occupancy. The data is sampled every 10 minutes over a total period of one year. We included this dataset in our experiments as, due to its simulated nature, we have a ground truth about the causal relationships which we can use to investigate the quality of the causal discovery, as well as to measure the performance of the CTGCN on the downstream forecasting task. The dataset is further strongly heterogeneous, with datapoints representing sensors with different characteristics and value ranges. It is known that this is a challenging task for GCNNs (Chen et al., 2016; Chen et al., 2016), and it may also adversely affect the causal discovery, rendering this a particularly interesting study.
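As a concrete reference point for these experiments, the forecasting architecture of Section 3.6 (a 1-D temporal convolution, the causal graph convolution layer, and a linear head producing \(q\) outputs) might be sketched in PyTorch Geometric as follows; channel sizes are illustrative assumptions, since the actual hyperparameters were tuned per dataset with Optuna:

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv

class CTGCNForecaster(nn.Module):
    """Sketch of the Section 3.6 architecture: temporal 1-D convolution,
    causal graph convolution, then a linear forecasting head.
    `history_len` is the history lambda, `horizon` the forecast length q."""
    def __init__(self, history_len, horizon, hidden=16):
        super().__init__()
        self.temporal = nn.Conv1d(1, hidden, kernel_size=3, padding=1)
        self.graph = GCNConv(hidden * history_len, hidden)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x, edge_index, edge_weight=None):
        # x: (num_nodes, history_len) window of past measurements
        h = torch.relu(self.temporal(x.unsqueeze(1)))   # (N, hidden, lambda)
        h = h.flatten(start_dim=1)                      # (N, hidden * lambda)
        h = torch.relu(self.graph(h, edge_index, edge_weight))
        return self.head(h)                             # (N, q)
```

The model is trained in batches against an RMSE loss, with `edge_index`/`edge_weight` built from the (weighted) causal adjacency matrix.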
The ground truth adjacency matrix is defined based on the underlying physics of the simulation (Zhou et al., 2017), and contains 50 relationships between the 30 sensors. For this dataset we evaluate performance for both temporal and spatiotemporal decomposition in terms of the accuracy of causal discovery, the performance of forecasting the temperature of one of the rooms, and the runtime. Our temporal decomposition consists of a daily temporal split with \(P_{T}=144\) based on auto-correlation analysis, yielding 365 causal sub-problems. We select one hour (\(\tau_{max}=6\)) as the maximal time lag based on system dynamics. For the spatial decomposition we use 10 clusters after running an elbow test. ### Highway Traffic Flow We further verify our method on homogeneous real-world traffic flow data collected by the California Department of Transport. The data consists of 30-second traffic flow speed measurements collected from 228 locations across the California state highway system over weekdays in May and June 2012. We downsample the data to five-minute intervals to align with work by Yu et al. (Yu et al., 2017) and facilitate comparison. In that work, Yu et al. also propose a distance-thresholding mechanism to identify the adjacency matrix \[w_{ij}=\begin{cases}\exp\left(-\frac{d_{i,j}^{2}}{\sigma^{2}}\right),&\text{if }i\neq j\text{ and }\exp\left(-\frac{d_{i,j}^{2}}{\sigma^{2}}\right)\geq\epsilon;\\ 0,&\text{otherwise}.\end{cases} \tag{5}\] We evaluate the effectiveness of our method against their results using their parameters \(\sigma^{2}=10\) and \(\epsilon=0.5\). To overcome the non-stationarity of the data, we split the data by date, conducting causal discovery for each of the 44 days of data. We also conduct spatial decomposition with 25 clusters estimated using an elbow test. The maximal lag we consider is \(\tau_{max}=9\) to align with Yu et al. (Yu et al., 2017). For this dataset, we demonstrate the runtime improvement of spatiotemporal decomposition over temporal decomposition, and evaluate the compromise between runtime and prediction accuracy. ## 5. Results ### Building Heating System We first compare the ground truth with the temporal TGCN and the spatiotemporal (DTW clustered) TGCN approach with different aggregation strategies. Table 1 shows the precision, accuracy, and F1 score for the different approaches outlined in Section 3.5. The accuracy of all approaches is higher than 83 % due to a high true negative rate, as the association matrix is sparse; hence precision is a more relevant metric. Precision is only 14.2 % to 16.6 % for the temporal approach, which may be surprising as this approach should be able to discover all potential causal relationships. However, this results in a high false positive rate, with about 150 causal relationships discovered. The spatiotemporal approach has a significantly higher precision of 32.8 % to 39.2 %, discovering an average of 25 relationships. Despite this, the method is not able to discover all relationships due to the heterogeneous nature of the dataset and our spatial decomposition. We see that data are grouped semantically by the DTW clustering step, e.g., separate clusters are created for temperature, CO2, occupancy, and humidity. As our algorithm does not consider causal relationships to exist between clusters, we miss these relationships (see Appendix C for details). Nonetheless, the spatiotemporal approach has higher accuracy and precision and a significantly lower compute time.
The temporal approach took 288 hours (12 days) to compute, while the spatiotemporal approach computed in 49 hours (2 days, see Table 3). The matrix aggregation step also improves the result. One performance improvement approach could be to sample individual days from the dataset to estimate the adjacency matrix. This approach (AVG) has a mean precision of 32.8 %. But, analysing the full dataset and applying the Majority Threshold (MT) or the ANY aggregation we improve precision to 38.4 % and 39.2 %, respectively. We further compare the prediction performance of the discovered adjacency matrix to the ground truth when forecasting the room temperature. This has multiple causal relationships and relevant practical applications in energy management. As detailed in Section 3.6, we utilise a CTGCN that is configured with the different association matrices resulting from the temporal and spatiotemporal clustering and the different aggregation methods for prediction. Table 2 compares the results. We provide also the RMSE for an unconnected and fully connected TGCN for context. The RMSE for the ground truth prediction is significantly better than the unconnected and fully connected TGCN demonstrating the well-known benefits of using domain knowledge in TGCN configuration. The unweighted temporal and spatiotemporal CTGCN outperform the unconnected TGCN but not the ground truth. This is expected given that not all causal relationships were discovered. However, the temporal and spatiotemporal CTGCN is able to outperform the ground truth TGCN when weighted by frequency. The frequency clearly encodes important information about the relevance of a causal relationship that is not contained in the binary ground truth, and these weights allow the TGCN to compensate for the missing edges. In result, the spatiotemporal weighted majority threshold had the lowest RMSE, showing that our CTGCN can outperform the ground truth in a prediction problem. \begin{table} \begin{tabular}{l l l l} \hline \hline Approach & Precision & Accuracy & F1 \\ \hline Temporal AVG & 14.2 \% & 83.1 \% & 14.1 \% \\ Temporal MT & 16.6 \% & 84.7 \% & 15.2 \% \\ Temporal ANY & 15.2 \% & 83.9 \% & 14.6 \% \\ \hline Spatiotemporal AVG & 32.8 \% & 88.2 \% & 21.9 \% \\ Spatiotemporal MT & 38.4 \% & 88.8 \% & 26.3 \% \\ Spatiotemporal ANY & 39.2 \% & 88.8 \% & 28.2 \% \\ \hline \hline \end{tabular} \end{table} Table 1. Results for the building dataset. ### Highway Traffic Flow The mean runtime of each day of the temporally-decomposed traffic data was 35 hours, corresponding to an estimated total runtime of more than 64 days (Table 3). As such, causal discovery was not conducted for every temporal split with this method. Instead, we selected the first, last, and central five days of data to get an overview of the entire period and include variations in causal relationships over time. Forecasting performance was measured across the entire dataset to test the representativity of the causal discovery method when forecasting on unseen data. As evidenced in Fig. 2 and Appendix Table B4, downstream performance is significantly improved by introducing the temporally decomposed causal adjacency matrix into the TGCN architecture. Forecasting RMSE was on average 30.4% improved over the benchmark, with the best-performing and worst-performing days yielding 33.5% and 23.6% improvements respectively. To combine the causal graphs calculated from different days of data, the majority threshold aggregation method was used. 
This aggregation improves the performance of the temporal decomposition, resulting in an RMSE 36.0 % lower than the benchmark, and lower than any of the daily graphs. This result is notable as it demonstrates that by combining causal graphs from different sections of the data we can better capture the overall causal relationships. To compare our results to a state-of-the art approach, we added the STGCN results from Yu et al. (Yu et al., 2019). Note that the RMSE of the same distance based benchmark is higher than the STGCN due to our simpler sequence-to-sequence learner. But, the improved adjacency matrix of the Temporal MT graph outperforms even the more advanced STGCN. As shown in Table 3, introducing spatial decomposition accelerates causal discovery on this data by more than 14 times. Despite this, Fig. 2 demonstrates that predictive performance was not negatively affected, with performance improving in some cases. The minimum, mean and maximum improvement over the benchmark are 26.1%, 33.5% and 37.4% respectively. Spatiotemporal performance is further improved through MT aggregation, with RMSE 39.1% lower than the benchmark and 4.9% lower than Temporal MT. \begin{table} \begin{tabular}{l c} \hline \hline Approach & RMSE \\ \hline Ground Truth TGCN & **0.8297** \\ \hline Unconnected TGCN & 0.8981 \\ Fully connected TGCN & 0.8548 \\ \hline Temporal MT Unweighted CTGCN & 0.8668 \\ Temporal MT Weighted CTGCN & 0.8469 \\ Temporal ANY Unweighted CTGCN & 0.8735 \\ Temporal ANY Weighted CTGCN & **0.8250** \\ \hline Spatiotemporal MT Unweighted CTGCN & 0.8668 \\ Spatiotemporal MT Weighted CTGCN & **0.8209** \\ Spatiotemporal ANY Unweighted CTGCN & 0.8306 \\ Spatiotemporal ANY Weighted CTGCN & 0.8735 \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of the prediction accuracy of different aggregation methods for the building dataset. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Temporal & Spatiotemporal & Factor \\ \hline Building & 287.7 h & 49.8 h & 5.8 x \\ Traffic & 1540.0 h & 106.2 h & 14.5 x \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison of causal discovery runtimes in hours for the datasets. ## 6. Discussion Our experiments have shown that using an automatically discovered causal adjacency matrix with a TGCN architecture can improve the forecasting performance over typical spatial approaches. Further, we demonstrate that through context-aware decomposition of the causal discovery problem we accelerate the discovery process such that previously intractable problems can be computed on a single machine. This is particularly notable for the traffic dataset, which is accelerated 14 times when the problem is both spatially and temporally decomposed over a temporal decomposition. Importantly, our approach reports high predictive skill with a simplistic sequence-to-sequence learner. Due to the lightweight architecture--constructed of a simple 1-D convolution and a graph convolution layer--we posit that the predictions from this model are more interpretable than more complex SOTA architectures (Srivastava et al., 2017). Simple weight visualisation or gradient based approaches can provide efficient insight into model performance for such 1-layer systems (Kang et al., 2018). A notable advantage of the proposed framework is that a causal model accurately representing underlying system dynamics allows us to retain performance with the lightweight architecture. 
Causal discovery can generate a more practical representation of latent and contemporaneous relationships. The power of GNNs has been demonstrated across many applications in recent years, such as estimating travel times in Google Maps, powering content recommendations in Pinterest, and providing product recommendations in Amazon (Srivastava et al., 2017). We propose an agnostic and scalable framework for effective graph generation for such applications. We also illustrate that the method used to aggregate causal results from sub-problems is important, with the weighted majority threshold method yielding performance improvements over any of its constituent graphs. We posit that this is due to its ability to capture information about the relevance of the most common causal relationships in the data and to filter out non-stationary relationships. However, our results are not exhaustive: we suggest that by implementing a more balanced clustering method, we might see improved precision. Also, scalability remains a problem for very large datasets (\(N\gg 1000\)) with dense causal models, where the spatial decomposition capability is limited. Figure 2. Forecasting RMSE on the traffic dataset compared to STGCN (Srivastava et al., 2017). There also remains opportunity to expand this work toward the explainability of the CTGCN model. ## 7. Conclusion While TGCN demonstrates excellent forecasting skill across a wide variety of applications, the need for manually configured adjacency matrices hinders large-scale deployments. We demonstrate that causal discovery methods generate a more robust graph structure that better captures system dynamics, and we demonstrate improved predictive skill on two real-world datasets in comparison to the state of the art. We address computational scalability and non-stationarity by implementing efficient spatial and temporal decomposition. We show that CTGCN demonstrates particularly impressive performance when data is decomposed based on stationarity, causal relationships are calculated for each sub-problem, and the results are aggregated to incorporate these temporal dynamics. Future work in this space is in the direction of improving the clustering method, automatic parameter identification, and an evaluation of explainability.
2305.12824
FieldHAR: A Fully Integrated End-to-end RTL Framework for Human Activity Recognition with Neural Networks from Heterogeneous Sensors
In this work, we propose an open-source, scalable, end-to-end RTL framework, FieldHAR, for complex human activity recognition (HAR) from heterogeneous sensors using artificial neural networks (ANN) optimized for FPGA or ASIC integration. FieldHAR aims to address the lack of apparatus for transforming complex HAR methodologies, often limited to offline evaluation, into efficient run-time edge applications. The framework uses parallel sensor interfaces and integer-based multi-branch convolutional neural networks (CNNs) to support flexible modality extensions with synchronous sampling at the maximum rate of each sensor. To validate the framework, we used a sensor-rich kitchen scenario HAR application which was demonstrated in a previous offline study. Through resource-aware optimizations, with FieldHAR the entire RTL solution was created from data acquisition to ANN inference, taking as little as 25% of the logic elements and 2% of the memory bits of a low-end Cyclone IV FPGA, with less than 1% accuracy loss relative to the original FP32-precision offline study. The RTL implementation also shows advantages over MCU-based solutions, including superior data acquisition performance and virtually eliminating the ANN inference bottleneck.
Mengxi Liu, Bo Zhou, Zimin Zhao, Hyeonseok Hong, Hyun Kim, Sungho Suh, Vitor Fortes Rey, Paul Lukowicz
2023-05-22T08:34:41Z
http://arxiv.org/abs/2305.12824v1
FieldHAR: A Fully Integrated End-to-end RTL Framework for Human Activity Recognition with Neural Networks from Heterogeneous Sensors ###### Abstract In this work, we propose an open-source scalable end-to-end RTL framework FieldHAR, for complex human activity recognition (HAR) from heterogeneous sensors using artificial neural networks (ANN) optimized for FPGA or ASIC integration. FieldHAR aims to address the lack of apparatus to transform complex HAR methodologies often limited to offline evaluation to efficient run-time edge applications. The framework uses parallel sensor interfaces and integer-based multi-branch convolutional neural networks (CNNs) to support flexible modality extensions with synchronous sampling at the maximum rate of each sensor. To validate the framework, we used a sensor-rich kitchen scenario HAR application which was demonstrated in a previous offline study. Through resource-aware optimizations, with FieldHAR the entire RTL solution was created from data acquisition to ANN inference taking as low as 25% logic elements and 2% memory bits of a low-end Cyclone IV FPGA and less than 1% accuracy loss from the original FP32 precision offline study. The RTL implementation also shows advantages over MCU-based solutions, including superior data acquisition performance and virtually eliminating ANN inference bottleneck. FPGA, Sensor Fusion, Human Activity Recognition, Neural Networks ## I Introduction Human activity recognition (HAR) is an application-oriented discipline that focuses on developing systems capable of inferring the semantic context of human activities from information sources such as sensors using machine learning (ML) algorithms [1, 2]. HAR has become increasingly relevant with the rise of smart devices, services, and systems, as it enables tailored and context-aware services. Sensor-based HAR often utilizes ML algorithms, such as pattern recognition and artificial neural networks (ANN), to associate sensor signals with physical activities. As elaborated in Section II-A, the complexity of the real world has resulted in the multi-modal, multifaceted, and temporal-sensitive nature of HAR applications. Complementary sensing modalities and sensor fusion are commonly used in HAR to account for the unique sensor outputs associated with different physical activities [2, 3]. As human activities are composed of complex sequences of motor movements, capturing these temporal dynamics with a stable high sampling rate is fundamental in HAR [4]. With the growth of smart wearable and home devices, HAR has gained interest in edge computing systems, where sensor data acquisition (DAQ), processing, and ML prediction are performed on embedded processors. However, while many HAR methodologies have shown promise in offline studies involving heterogeneous sensing of high data quality, few have been transitioned to edge devices for run-time inference in the field, and most are restricted to limited sensors, such as inertial measurement units (IMUs). Current microprocessor (MCU) architectures struggle with maintaining a high sampling rate or data throughput when more sensor instances, modalities, or larger ML models are deployed to the workload of the same processor. As most ML algorithms in HAR are temporal sensitive, maintaining stable sampling rates independent of these system expansions is a basic requirement for run-time HAR systems. 
As MCUs execute sequential instructions, increasing sensors may also introduce lag between modalities and simultaneous data collection cannot be guaranteed, which may further negatively impact the recognition result and even lead to catastrophic failure. Compared to MCUs, field programmable gate arrays (FPGAs) with many advantages including reconfigurability and parallelism, which support hardware-algorithm co-optimization, have become an interesting embedded platform candidate for complex run-time HAR systems [5]. For relatively small systems, FPGAs can also contain all data on-chip, eliminating the bottlenecks of moving data between the off-chip memory [6]. However, the knowledge barriers between HAR data science and hardware-specific FPGA application development have so far hindered more edge implementations of complex HAR methodologies, even with available high-level synthesis (HLS) tools [7]. To overcome these limitations, we propose a fully integrated Register Transfer Level (RTL) end-to-end framework, named FieldHAR, that covers the entire HAR pipeline from DAQ by heterogeneous sensors to activity prediction by ANNs. With FieldHAR, the embedded system can reach high performance in both DAQ and ANN inference throughput independent from any system extensions. In summary, we developed FieldHAR with the following contributions: 1. An end-to-end framework from sensor inputs to ANN model activation fully integrated into FPGAs. The framework includes a scalable heterogeneous parallel sensor interface that guarantees the sampling rate and RTL implementation of the ANN model. 2. An ANN model designed for scalable heterogeneous temporal data based on branched convolutional neural networks (CNNs) for sensor fusion and RTL microarchitectures optimized for its inference. 3. Validation with a kitchen scenario HAR application which was demonstrated in an offline study [8]. Through resource-aware optimizations and performance evaluations, we demonstrate the effectiveness of FieldHAR in transforming complex offline HAR methodologies to run-time edge systems. ## II Related Work ### _Sensor-based HAR Methodologies_ In recent years, there has been a considerable amount of work on sensor-based HAR. The IMU is one of the most commonly used sensors in HAR applications [9, 10, 11]. Ronao and Cho [12] proposed using CNNs to leverage the intrinsic properties of human activities and time-series signals from the accelerometer and gyroscope on a smartphone. Their approach enables efficient, effective, and data-adaptive recognition of human activities. Apart from the IMU sensors, the electric field-based sensor is also explored in the HAR task. Bian et al. [13] developed a human body capacitive-based sensor with microwatt-level power consumption to recognize and count gym workouts, which achieved an average counting accuracy of 91%. Cheng et al. [14] used conductive textile-based electrodes to measure changes in capacitance inside the human body, by which the human activities, such as chewing, swallowing, speaking, signing (taking a deep breath), as well as different head motions and positions, can be recognized. The concurrent use of multiple sensing modalities enjoys many advantages over a single modality [2], like better robustness and more complex information extraction. For example, motion-related activity is usually recognized by analyzing IMU time series; the human physiological information like heart rate, respiratory, and emotion can be extracted by bio-signals like ECG and EEG within a time window. 
Thus, multi-modalities sensing and sensor fusion in HAR have become a popular research direction. Zhang et al. [15] designed a necklace using multiple sensor data from a proximity sensor, an ambient light sensor, and an IMU sensor to detect chewing activity and eating episodes. Bharti et al. [3] proposed a multi-modal and multi-positional system called "HuMan" to recognize and classify the 21 complex at-home activities of humans with results up to 95%. The system consists of practical feature set extraction from specifically selected multi-modal sensor suites, a novel two-level structured classification algorithm that improves accuracy by leveraging sensors in multiple body positions, and improved refinement in the classification of complex activities with minimal external infrastructure support. Although many proposed HAR methodologies have demonstrated remarkable performance based on the multiple sensing modalities and efficient neural networks, most of them still stay in offline evaluation on general-purpose computing hardware and lack evaluation of real-world real-time inference on edge devices. ### _Field Implementations of HAR Applications_ Field implementations of HAR Applications are crucial for a truly pervasive solution bridging the gap between HAR research and real-world adaptation. Although supporting such AI applications on mobile and embedded hardware that is ubiquitous across consumer devices poses important challenges [5], with the help of the growing ANN frameworks for MCU-based hardware platforms like TensorFlow Lite Micro [16], MicroTVM [17], CMix-NN [18], CMSIS-NN [19], and STM X-Cube-AI [20], more and more works for real-time HAR on MCU-based edge devices have been presented [21, 22, 23]. For example, the work [23] developed a capacitive-sensing wristband that utilizes four single-end electrodes for onboard hand gesture recognition. By deploying a single convolutional hidden layer as the classifier on the Arduino nano sense platform with a 64 MHz CortexM4 MCU integrated with an FPU, 1 MB flash, 256 KB RAM, this wristband can accurately identify seven hand gestures from a single user with 96.4% accuracy in real-time. However, the MCU hardware resource constraints often limit more sophisticated implementations from many aspects including data throughput, selection of sensor modalities, and ANN complexity, which are all proven important in offline HAR studies as mentioned in Section II-A. Compare to MCUs, the parallel data processing capability, flexible data representation, and reconfigurability of FPGAs have attracted the attention of many researchers as an alternative hardware platform for field implementations of HAR applications. Generally, FPGAs provide higher energy efficiency than GPUs and higher performance than CPUs [24]. Existing studies mainly focused on deploying the neural networks on FPGA efficiently [25, 26] or designing a hardware architecture with uniform modality sensing input [27], the former usually requires additional data reading devices, the latter lacks flexibility. The work SensorNet [28] also proposed a scalable and low-power embedded CNN for multi-channel time series signal classification, time series from multiple channels were converted to a 2D array, and then the 2D deep CNN was applied to extract features and classify the activities, this architecture can only support sensor fusion from data input level which is not optimal for heterogeneous sensors. 
On the other hand, data acquisition from heterogeneous sensors is a complex task crucial for providing high-quality data input for the ANNs, and thus shall not be overlooked. Yet most field implementation studies focus on efficient ANN execution with hardware accelerators [29]. To the best of our knowledge, our FieldHAR framework is the first complete end-to-end architecture that spans from heterogeneous sensor data acquisition to data processing with ANNs designed for heterogeneous sensor fusion on FPGAs for HAR applications. ## III Framework Structure This framework includes not only a sensor driver hardware library to support flexible extension, rapid implementation, and synchronous sampling at the maximum rate of each sensor, but also an adaptive integer-based multi-channel branched CNN that supports both data fusion and feature fusion architectures. The open-sourced framework is described in SystemVerilog without using any proprietary IP cores. Therefore, it supports flexible migration between different FPGAs or ASICs. Fig. 1 illustrates the high-level block diagram of our proposed end-to-end RTL framework, which mainly comprises three primary modules: the scalable sensor interface, the top controller module, and the ANN inference module. Our RTL framework's design is guided by the following objectives: * Fully integrated end-to-end RTL framework: It includes both data acquisition and ML, including automatic feature extraction from heterogeneous sensor data and human activity classification. * Flexibility and scalability: It allows further heterogeneous sensors to be integrated into the framework easily. * Resource-efficiency: It supports hardware-algorithm co-optimization to achieve high resource efficiency. ### _Scalable Sensor Interface_ Fig. 2 shows the architecture of the scalable sensor interface, which consists of three levels: the peripheral driver level, the sensor driver level, and the data level. At the peripheral driver level, the peripheral driver module directly connects to the sensor and implements the peripheral interface protocol. To this end, FieldHAR supports the Inter-Integrated Circuit (I2C) and Serial Peripheral Interface (SPI) buses, which are the two primary peripheral protocols used in commercial sensors for HAR. The sensor driver level performs a function similar to that of a sensor software driver library and involves two state machines. The first state machine controls data transactions between the I2C/SPI master control modules and the sensors, including single-byte read/write and multiple-byte read/write operations. The second state machine completes register operations of the sensors, such as control register configuration and sensor status/data register reads. As different sensors have distinct register address maps and operation flows, users need to reorder the state transitions and redefine the register addresses in the package file when integrating a new sensor into the framework. The retrieved sensor data is pushed into the data-level FIFO, and a start signal from the top controller module synchronizes data reads across multiple sensors. The FIFO depth corresponds to the number of time steps. ### _Top Controller Module_ FieldHAR's workflow is managed by the top controller module with three components: * The sensor controller ensures simultaneous operations among different sensor interfaces. * The data stream controller combines the heterogeneous sensor FIFOs with different sampling rates into a single sensor data RAM.
* The interface controller handles ANN activation upon the sensor data RAM ready signal from the data stream controller, and interfaces with external devices via a UART interface, including receiving start/stop commands and sending out inference results or sensor data. In HAR, the sliding window is the common approach, as there are typically no clear signs of the start and stop of activity instances. This is implemented with the sensor data RAM so that the window size and step are independent of the individual sensor FIFOs. ### _Neural Networks Inference Module_ The ANN inference module is specially designed for a quantized branched CNN feature fusion model natively supporting heterogeneous sensors, as later discussed in Section IV-B. As shown in Fig. 3, it comprises a convolution layer module for feature extraction, a dense layer module for classification, and an ANN architecture controller. Fig. 1: The overall structure of FieldHAR. Fig. 2: Architecture of the parallel sensor interface. The ANN model is effectively stored in the weight ROM, and the feature RAM facilitates run-time calculation. Both the convolution and dense layers consist of a _Weight Read State Machine_ and a _Feature Read State Machine_ to prime the multiply-accumulate unit (_MAC_) for matrix multiplication. A quantization (Q) module handles the output requantization required in quantized on-device inference [30]. The non-linear activation (ReLU in this case) is folded inside the Q module. The convolution layer has an additional shift \(S\) operation and counter \(C\) module to facilitate the stepped operation of kernel convolution. Max pooling (M) of the same kernel size, or global max pooling, is also folded inside the convolution layer by comparators to better utilize the stepped operation, selectable by a multiplexer. The convolution kernel size and output channels are implemented in parallel, so the convolution operation scales linearly with input channels. While the input channels can also be parallelized at the cost of channel-times the resources, our evaluation results in Section IV-E show that the current convolution layer implementation already provides negligible inference time in HAR applications. Thus, we decided to trade input channel parallelism for more ANN model complexity within the available hardware resources. To achieve efficient on-chip memory utilization, resource-aware ANN optimization is applied to reduce the required memory size of the neural networks. Firstly, as dense layers take the majority of trainable parameters, using only two dense layers with a small input size after several pooling operations reduces the model size. Secondly, quantization [30] is applied to reduce the parameter precision, and thus the data buffer size, with negligible performance loss. Thirdly, inspired by the work in [31], the bias in the ANN is removed through tensor normalization, which further reduces the trainable parameters. These techniques collectively reduce the memory requirements, leading to lower energy consumption and latency by avoiding off-chip memory access during model inference [6]. ## IV HAR Application-Specific Evaluation ### _Kitchen Activity Recognition Example_ Monitoring human activity in the kitchen can provide valuable information for improving people's health and well-being. By tracking activities such as meal preparation, cooking, and eating, a system can provide personalized advice and guidance to promote healthy eating habits.
Additionally, monitoring activity in the kitchen can also provide useful information for elderly care, as it allows caregivers to monitor eating patterns and ensure that individuals are receiving adequate nutrition. Overall, the kitchen is a critical research area for human activity monitoring and has the potential to improve health outcomes and quality of life. Thus, a kitchen HAR dataset with multiple sensors acquired from [8] was selected as the ANN training dataset to evaluate the proposed framework. The kitchen HAR dataset is recorded by a DAQ module with six sensors (listed in Table I) driven by 2 MCUs. It contains ten types of kitchen-related activities, shown in Table II, performed by ten subjects wearing the DAQ on the chest. In total, there are 791 channels of sensor data with different sampling rates. After synchronization and interpolation, the equivalent sampling rate is 6 Hz (downsampled from 12 Hz). ### _ANN and Sensor Fusion for HAR Task_ To classify kitchen activities using data from multiple sensors, two sensor fusion methods were employed in the design of neural networks: data fusion and feature fusion architectures, as depicted in Fig. 4. Fig. 3: Block Diagram of the ANN Inference Module (**MAC**: multiply-accumulate unit; **C**: Counter; **Q**: Quantization; **S**: Data Shift; **M**: Global or Kernel Max-pooling). The data fusion architecture is similar to that of SensorNet [28], where time series data from various sensors are concatenated into a two-dimensional matrix (_i.e._, \((W\times C)\), where \(W\) and \(C\) denote the size of the sliding window and the number of sensor channels, respectively) that is input directly to a single neural network branch, allowing for simultaneous capture of correlations between various modalities. When the connected sensors are heterogeneous, for example if one sensor has only one channel while another has several hundred channels, the resulting neural network model may be dominated by the sensor with more channels, leading it to ignore the impact of the sensors with fewer channels or not learn from them at all. The feature fusion method, on the other hand, uses separate branches of convolution layers extracting features from each sensor. Thus the imbalanced influence of sensors can be mitigated by ensuring a similar number of output features per modality. The extracted features from each sensor are then concatenated before being fed into the dense layers for classification. Feature fusion has shown better accuracy in the literature [32], which is also reflected in our evaluation. An offline experiment with the training data was conducted where two models based on the data fusion and feature fusion methods were built, the results of which are presented in Table III. Both models used the same number of features, kernel sizes, and dense layers. Thus, the feature fusion architecture was selected for this kitchen activity recognition task, as it offers higher recognition accuracy with far fewer trainable parameters. In the feature fusion model, data from different sensors were handled by independent feature extraction layers, as shown in the bottom half of Fig. 4. Each feature branch has three convolution layers with the same filter channels and kernel size, followed by a global max-pooling layer to reduce the temporal dimension to 1. Then the features are concatenated and fed to two dense layers for classification.
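A minimal Keras sketch of this branched feature fusion model follows (the paper trains under TensorFlow 2.10; window length, sensor shapes, and the dense width are hypothetical placeholders, each branch emits 8 features to match the \(1\times 8\) branch outputs described in the next subsection, and all branches are shown as 1D convolutions even though the thermal IR branch in the paper uses 2D convolutions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_feature_fusion_model(sensor_shapes, n_classes=10,
                               n_filters=8, kernel=3):
    """Sketch of the branched feature-fusion CNN: three conv layers per
    sensor branch, global max pooling, concatenation, two dense layers."""
    inputs, branches = [], []
    for name, shape in sensor_shapes.items():
        inp = layers.Input(shape=shape, name=name)       # (window, channels)
        x = inp
        for _ in range(3):                               # three conv layers
            x = layers.Conv1D(n_filters, kernel, padding='same',
                              activation='relu')(x)
        x = layers.GlobalMaxPooling1D()(x)               # temporal dim -> 1
        inputs.append(inp)
        branches.append(x)
    fused = layers.Concatenate()(branches)
    fused = layers.Dense(32, activation='relu')(fused)   # width is illustrative
    out = layers.Dense(n_classes)(fused)                 # argmax replaces softmax on FPGA
    return tf.keras.Model(inputs, out)

# hypothetical window length and channel counts
model = build_feature_fusion_model({'imu_motion': (72, 6), 'thermal_ir': (72, 64)})
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Note that the final layer emits logits, consistent with replacing the softmax by an argmax at deployment.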
The softmax activation function of the last dense layer was replaced by a function that outputs the index of the largest output value when deploying this model on the FPGA, which avoids implementing a division operation in hardware. The ReLU activation function is used for the rest of the layers. The filter channels and kernel sizes are hyperparameters that can be adjusted to balance between recognition accuracy and model size. For each sensor, independent normalization was applied to rescale the data input range between -1 and 1, which also prepares the data for the later quantization step. The ANN model was built under the TensorFlow 2.10.0 framework, and the model training process was performed on a laptop with a GeForce RTX 3080 Ti GPU. The sparse categorical cross entropy was used as the loss function. ### _Resource-aware Optimizations_ To facilitate efficient ANN deployment onto the FPGA, optimization techniques are employed to reduce the memory and operation footprint of the ANN inference module, including removing less relevant modalities and ANN quantization. #### IV-C1 Modality Selection In HAR applications with heterogeneous sensors, it is important to select the modalities that contribute most to the task, as redundant or irrelevant sensors result in unnecessary computational overhead and a larger model size. To accomplish this, we propose a method to search for important sensors. In the feature fusion model, there are \(n\) parallel feature branches for \(n\) sensors, as illustrated in Fig. 4. Each feature branch outputs a \(1\times 8\) feature tensor, which we denote as \(F_{i}\). We then assign each sensor a trainable weight \(\alpha_{i}\) that reflects its importance to the classification task. These weights are multiplied with the corresponding features from each sensor's feature branch and accumulated into a single tensor, \(F_{mix}\), for the final classification: \[F_{mix}=\sum_{i=1}^{n}\frac{\exp(\alpha_{i})}{\sum_{j=1}^{n}\exp(\alpha_{j})}F_{i} \tag{1}\] After the training, we can remove the less useful sensors according to \(\alpha_{i}\) and retrain the model with only the useful sensors, without \(\alpha_{i}\). For the specific heterogeneous dataset, the IMU data were divided into two categories: motion-related data (accelerometer and gyroscope) and magnetic data. Besides, 2D convolutions were used to extract the features from the thermal IR array, as it is analogous to a thermal camera. 1D convolutions were used for the data from the remaining sensors. Table IV shows the sensor importance factors on the training dataset. To validate the modality selection method, five sensor modality sets were created, where we removed one additional sensor per iteration starting from the bottom of the \(\alpha_{i}\) ranking. Fig. 5 presents the influence of different sensor modalities on recognition results and model size. Despite having less input information, removing the two most insignificant sensors with Sets B and C even slightly improved accuracy compared with the full-modality Set A.

Fig. 4: The neural architecture of the HAR task with multiple sensor inputs (Global Maxpooling was used).

We find the most cost-effective combination to be Set D with the four most significant sensors: the ANN recognition accuracy has a slight decline of around 1%, while the number of trainable parameters was reduced to almost 1/3 of the full set.
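A minimal sketch of the importance weighting in Eq. (1), written as a custom Keras layer; the class name `ModalityGate` and the zero initialization of \(\alpha\) are illustrative assumptions, not details given in the paper:

```python
import tensorflow as tf

class ModalityGate(tf.keras.layers.Layer):
    """Softmax-weighted sum of per-sensor features, as in Eq. (1) (a sketch)."""
    def build(self, input_shape):
        n = len(input_shape)                       # number of feature branches
        self.alpha = self.add_weight(name="alpha", shape=(n,),
                                     initializer="zeros", trainable=True)

    def call(self, feats):                         # feats: list of (batch, 8) tensors
        w = tf.nn.softmax(self.alpha)              # normalized importance per sensor
        return tf.add_n([w[i] * f for i, f in enumerate(feats)])

# Usage: mixed = ModalityGate()([f1, f2, ..., fn]); after training,
# tf.nn.softmax(gate.alpha) ranks the sensors, the weakest are dropped,
# and the model is retrained without the gate.
```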
#### IV-C2 Post-Training Quantization (PTQ) ANN quantization is an effective method for reducing both the model size and the computation cost, by which the memory requirement and power consumption of the model during inference can be decreased. PTQ specifically does not require retraining the model and thus can be easily adopted. Reducing the precision from 32-bit to 8-bit can decrease memory resources by a factor of 4 and the matrix multiplication cost by a factor of 16 [30]. Given the large number of multiplications and values that need to be stored, such resource savings are crucial when operating CNNs on small or battery-powered edge devices. RTL implementations on FPGAs provide even more flexible bit precision options, while MCU-based architectures are usually limited to predefined precisions like INT8 or INT16. PTQ was performed after modality selection, which resulted in a CNN model with four modalities and four feature branches. To find the optimal bit precision, the CNN model was quantized post-training from an FP32 model to an n-bit fixed-point integer format following the methods in [33], with adjustments on tensor normalization to facilitate the branch concatenation of our CNN model. The normalization coefficient in the convolution layers was calculated by Eq. (2) in our work: \[R_{l}=\max(|W_{l,0}|,|O_{l,0}|,|W_{l,1}|,|O_{l,1}|,\ldots,|W_{l,i}|,|O_{l,i}|) \tag{2}\] where \(l\) indicates the CNN layer, \(i\) indicates the feature extraction branch, \(W\) denotes the weights, and \(O\) denotes the outputs of the corresponding CNN layer. As there were three CNN layers in each feature extraction branch, three rescale coefficients \(R_{l},l=(1,2,3)\) were calculated iteratively. This arrangement ensures that the layer-wise scaling does not change the weight distribution before the concatenation layer. For the dense layers after concatenating the branched features, normal quantization scaling was performed according to related works [30, 33]. Then, the updated weights in fixed-point integer format were calculated according to the symmetric quantization method explained in [30] by Eq. (3): \[W_{int}=\lfloor\frac{W_{l}}{R_{l}}\times 2^{n}\rceil \tag{3}\] where \(\lfloor\cdot\rceil\) is the operator for rounding to the nearest integer and \(n\) is the quantized bit precision. To evaluate the performance of the model with different quantization bit precisions, we use the ratio of quantized accuracy to FP32 accuracy as a metric, shown in Fig. 6. The result indicates that weights with 10-bit precision can achieve the same accuracy as FP32, and further reducing the bit precision causes accuracy degradation. Although the accuracy loss of 3% at as low as 7 bits may still be acceptable, the model with 10-bit precision can already fit comfortably within our selected FPGA hardware resources, as discussed in Section IV-E. Thus, the feature fusion neural network for the kitchen activity recognition task was converted to a 10-bit fixed-point format plus one sign bit (signed 11-bit integers). ### _Parallelism in Model Inference_ The branched feature fusion CNN architecture provides further parallelism potential. Since each branch is bound to one sensor data source and is independent of the others until the concatenation layer, concurrent computation among these branches can further reduce inference latency.
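Before moving on, a brief NumPy sketch of the PTQ rescaling above (Eqs. (2) and (3)); the calibration data and tensor shapes are hypothetical, and `rescale_coeff`/`quantize` are illustrative names rather than the paper's implementation:

```python
import numpy as np

def rescale_coeff(weights, outputs):
    """R_l of Eq. (2): max absolute value over the weights and observed
    outputs of layer l across all feature branches (a sketch)."""
    return max(np.abs(t).max() for t in weights + outputs)

def quantize(w, R, n=10):
    """W_int of Eq. (3): symmetric round-to-nearest n-bit quantization."""
    return np.rint(w / R * 2 ** n).astype(np.int32)

# e.g. 10-bit magnitude + 1 sign bit, as selected for the kitchen model
w = np.random.randn(8, 3, 6).astype(np.float32)   # hypothetical conv kernel
o = [np.random.randn(20, 8).astype(np.float32)]   # hypothetical calibration outputs
w_int = quantize(w, rescale_coeff([w], o), n=10)
```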
In addition, as mentioned in Section III-C, the convolution layers in this work are designed to leverage the output channel tiling technique, as it was identified as the optimal form of parallelism taking into account both I/O memory bandwidth and computational load, based on the computation-to-communication (CTC) ratio [34]. ### _Hardware Implementation Results and Discussion_ The FieldHAR framework with the kitchen scenario application was implemented on an Intel FPGA Cyclone IV EP4CE22F17C8. After optimization, the system has four sensor modalities and the PTQ is set to signed 11-bit integers. Two types of inference hardware architecture were implemented based on different task schedules: serial and parallel. In the serial implementation, the feature branches were executed sequentially, while in the parallel implementation, all feature branches were performed in parallel. The hardware architecture was described using System Verilog HDL, and the clock frequency was chosen as 100 MHz. Table V shows the implementation results for different bit precisions and architectures, indicating a significant impact of the number of precision bits on hardware performance. The hardware implementation must have at least 11-bit precision (including 1 sign bit) to match the FP32 model accuracy, as shown in Fig. 6. The required logic elements and total memory bits scale almost linearly with the bit precision, showing the flexibility of FPGAs in quantization bit precision as discussed in Section IV-C2.

Fig. 5: Influence of sensor modalities on recognition results and model size. (**Set A:** includes all seven sensors; **Set B:** removed Barometric sensor; **Set C:** removed Barometric and Gas sensors; **Set D:** removed Barometric, Gas, and Thermal IR sensors; **Set E:** removed Barometric, Gas, Thermal IR array, and ToF Range sensors)

However, the multipliers doubled from 9-bit to 11-bit, because the input data width of the hardware-embedded multiplier is 9 bits on the selected FPGA; thus an 11-bit operation requires two concatenated multipliers. As shown in Fig. 7, the inference speed has a close relationship with the task schedule strategy: in the serial ANN implementation, the latency of inference was 0.54 ms, while it was reduced to 0.25 ms by the parallel implementation. The fastest throughput of the inference can be up to 4000 labels per second. However, the maximum sample rate of most sensors used in HAR is under 1000 Hz, and from the use-case perspective, recognizing human activities at time-window intervals of seconds is already considered fine granularity. Thus, with FieldHAR, ANN inference is no longer a bottleneck in most HAR applications. For this specific kitchen scenario application, the 11-bit serial ANN implementation is already sufficient in terms of latency, while leaving more room for adding more modalities or more complex ANN models in the future. The serial ANN implementation has lower power consumption than the parallel implementation because the former design occupies fewer hardware resources such as logic elements and hardware multipliers. In general, the power consumption of the hardware implementation with different configurations (bit precision and parallelism) is under 140 mW, which is slightly more than an ARM Cortex M4 MCU but suitable for battery-powered edge devices. ### _Further discussion and limitations_
From Fig. 7 we can see that the FPGA implementations of FieldHAR can guarantee existing DAQ operations if new tasks, either more sensors or ANN operations, are added, while MCU-based solutions struggle in this respect, as different tasks need to be scheduled with limited cores. Even if the tasks can be pipelined with more cores, the FPGA implementation also provides synchrony across modalities.

Fig. 6: The relationship between n-bit quantized accuracy and the FP32 accuracy (bit count excludes the sign bit)

Fig. 7: Comparison of the HAR task schedule between FPGA and MCU; all implementations correspond to 20 samples for the fastest sensor (119 Hz possible on the FPGA in this work, and 12 Hz possible on the MCU [8])

The training dataset from [8] was limited by the MCU during data collection and is thus restricted to 12 Hz, taking 3.3 s for a complete ANN input frame, while the FPGA implementation takes significantly less time (168 ms) to collect the input frame. Even the slower serial ANN implementation is no longer the bottleneck, with 0.54 ms latency, 20% of logic elements, and 2% of memory bits. Thus there is sufficient room for evaluating more complex ANN models with larger input frames and finer time granularity. Compared with related works with FPGA implementations like [28, 33, 35], FieldHAR is designed for heterogeneous sensor modalities with different sampling rates, from the adaptable sensor interface and the branched CNN model with feature fusion to the optimization step of modality selection, whereas existing works are limited to uniform modalities and thus not applicable to the growing family of sensor-fusion-based HAR methodologies [2]. However, the proposed version of FieldHAR has several limitations. The ANN inference module is limited to convolution, max pooling, concatenation, and dense layers. Although multi-channel temporal convolution has proven effective in many HAR applications [2], there are also other ANN architectures, such as recurrent networks. The MCU-based platforms mentioned in Section II-B typically support broader selections of layers; however, they usually require specific MCU types, while FPGAs in this regard are more generic. While PTQ has already significantly reduced the hardware resource footprint of the ANN model, there are other methods, such as quantization-aware training (QAT), that can improve prediction accuracy at lower bit widths at the cost of additional training for every bit precision. ## V Conclusion In conclusion, FieldHAR presents an end-to-end RTL framework for multi-modal HAR applications, integrating sensor DAQ and ANN model prediction into FPGAs. Both the DAQ and ANN modules are designed with modality-wise parallelism through concurrent sensor interfaces and branched CNN models. The proposed framework is evaluated with a sensor-rich kitchen HAR application scenario from a published offline HAR study. Through the optimization steps of modality selection and PTQ, we derived a system with four sensors and signed 11-bit integer quantization precision with less than 1% accuracy loss from the full seven-modality FP32 model. FieldHAR accommodates the transition of HAR methodologies, which are usually limited to offline evaluations on general-purpose computers, to online runtime applications on edge devices. The parallelism of FPGAs is especially beneficial for multi-modal applications in terms of throughput capability and system robustness against increasing modalities.
2304.05917
A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription
Note-level automatic music transcription is one of the most representative music information retrieval (MIR) tasks and has been studied for various instruments to understand music. However, due to the lack of high-quality labeled data, transcription of many instruments is still a challenging task. In particular, in the case of singing, it is difficult to find accurate notes due to its expressiveness in pitch, timbre, and dynamics. In this paper, we propose a method of finding note onsets of singing voice more accurately by leveraging the linguistic characteristics of singing, which are not seen in other instruments. The proposed model uses a mel-scaled spectrogram and a phonetic posteriorgram (PPG), a frame-wise likelihood of phonemes, as inputs of the onset detection network, while the PPG is generated by a network pre-trained with singing and speech data. To verify how linguistic features affect onset detection, we compare evaluation results across datasets with different languages and divide onset types for detailed analysis. Our approach substantially improves the performance of singing transcription and therefore emphasizes the importance of linguistic features in singing analysis.
Sangeon Yong, Li Su, Juhan Nam
2023-04-12T15:36:01Z
http://arxiv.org/abs/2304.05917v1
# A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription ###### Abstract Note-level automatic music transcription is one of the most representative music information retrieval (MIR) tasks and has been studied for various instruments to understand music. However, due to the lack of high-quality labeled data, transcription of many instruments is still a challenging task. In particular, in the case of singing, it is difficult to find accurate notes due to its expressiveness in pitch, timbre, and dynamics. In this paper, we propose a method of finding note onsets of singing voice more accurately by leveraging the linguistic characteristics of singing, which are not seen in other instruments. The proposed model uses a mel-scaled spectrogram and a phonetic posteriorgram (PPG), a frame-wise likelihood of phonemes, as inputs of the onset detection network, while the PPG is generated by a network pre-trained with singing and speech data. To verify how linguistic features affect onset detection, we compare evaluation results across datasets with different languages and divide onset types for detailed analysis. Our approach substantially improves the performance of singing transcription and therefore emphasizes the importance of linguistic features in singing analysis.

\({}^{1}\)Graduate School of Culture Technology, KAIST, Daejeon, Republic of Korea \({}^{2}\)Institute of Information Science, Academia Sinica, Taipei, Taiwan

singing transcription, onset detection, phoneme classification, music information retrieval

## 1 Introduction Note-level singing transcription is a music information retrieval (MIR) task that predicts the attributes of note events (i.e., onset time, offset time, and pitch value) from audio recordings of singing voice. Although this task has been studied for a long time, the performance of singing transcription is generally inferior to that of other musical instruments such as polyphonic piano music [1, 2]. The lack of large-scale labeled datasets is one of the major technical barriers. In addition, singing voice has highly diverse expressiveness in terms of pitch, timbre, and dynamics, as well as the phonation of lyrics. For example, singing techniques such as vibrato, bending, and portamento make it difficult to find note boundaries and note-level pitches. This variability makes even manual note transcription by human experts difficult [3], which in turn has resulted in the lack of high-quality labeled datasets. Another important characteristic of singing voice, well distinguished from other instruments, is that it conveys linguistic information through lyrics, and this influences note segmentation. Given that most singing notes are syllabic (i.e., one syllable of text is set to one note of music) or melismatic (i.e., one syllable is sung with multiple notes), the relationship between the change of syllables and the change of notes is intricate. This yields certain kinds of note patterns in singing voice that are not seen in any other instruments. Therefore, we need to consider such linguistic characteristics in automatic singing transcription models. In this paper, we propose a neural network model that incorporates linguistic information into the input to improve note-level transcription of singing voice. Similar to earlier research, we use a log-scaled mel-spectrogram as the primary input. In addition to that, we take a phonetic posteriorgram (PPG) from a pre-trained phoneme classifier as the second input.
As shown in Figure 1, the PPG shows a pattern distinct from that of the mel-spectrogram, and it can be noted that the transition pattern of the PPG better describes the onset events at 1.2 and 2.8 seconds. We propose a two-branch neural network model based on a convolutional recurrent neural network (CRNN) backbone to represent both of the input features effectively. In the experiment, we conduct an ablation study to examine the effectiveness of the model design, the mel-spectrogram, and the PPG. Also, we compare the effects of the mel-spectrogram and the PPG on transition and re-onset, the two types of challenging onset events in singing transcription. Finally, we demonstrate that our proposed model outperforms a few state-of-the-art note-level singing transcription models, especially in terms of onset detection. ## 2 Related Works Traditional studies mainly used various types of spectral difference for onset detection of audio signals [4]. The spectral difference is particularly successful at finding percussive onsets, but it performs poorly on expressive instruments that have soft onsets. Deep neural networks have been actively applied to singing voice as well. Nishikimi _et al_. [5] suggested an attention-based encoder-decoder network with long short-term memory (LSTM) modules.

Figure 1: An example of singing voice: mel-spectrogram (top), piano roll with onsets and pitches of notes (middle), and phonetic posteriorgram (PPG) (bottom) from singing (phonemes with probability under 0.5 in this example were omitted).

Fu _et al_. [6] proposed a hierarchical structure of note change states to segment singing notes and used multi-channel features to increase the performance. Hsu _et al_. [7] suggested a semi-supervised automatic singing transcription (AST) framework. More recently, [8] proposed an object detection-based approach that significantly improves the performance of singing voice onset/offset detection. While the majority of them relied on note onset and offset information from melody labels, one recent work attempted to use phoneme information as part of the input features for note segmentation [9]. However, the performance was not convincing. In this work, we present a neural network architecture that makes effective use of the phoneme information. ## 3 Proposed Method ### Model Architecture Our proposed model architecture consists of two branch networks and a single RNN with a dense layer, as illustrated in Figure 2. One branch network takes the log-scaled mel-spectrogram \(X\) and the other branch network takes the phonetic posteriorgram (PPG) \(\hat{P}\) from a pre-trained phoneme classifier. Both branches are CRNNs whose CNN part is a modified version of _ConvNet_ proposed in [10], which is commonly used in the piano transcription task [1, 11]. To obtain a wider receptive field on the time axis, we replaced the first convolution layer with a dilated convolution with a dilation factor of 2 along the time-frame axis. To predict the note events, we combined the two branch networks by concatenating their outputs and connecting them to an additional RNN layer and a dense layer. The output layer is represented by a 3-dimensional sigmoid vector where each element detects onset, offset, and activation as binary states. The activation indicates whether the note is on or off at each frame. ### Framewise Phoneme Classifier We extracted the phonetic information using a phoneme classifier that returns its output as a PPG. We implemented it using a single CRNN network with a dense layer. We used the original _ConvNet_ architecture for the CNN part.
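Stepping back to the overall model, here is a minimal PyTorch sketch of the two-branch design of Figure 2. The channel counts, hidden sizes, and padding scheme are placeholders and do not reproduce the exact modified _ConvNet_ of [10]:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """CNN (dilated first conv on the time axis) + BiLSTM over one input feature."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=(2, 1), dilation=(2, 1)), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(16 * in_dim, hid, batch_first=True, bidirectional=True)

    def forward(self, x):                 # x: (batch, frames, in_dim)
        z = self.conv(x.unsqueeze(1))     # (batch, 16, frames, in_dim)
        z = z.permute(0, 2, 1, 3).flatten(2)
        return self.rnn(z)[0]             # (batch, frames, 2*hid)

class OnsetNet(nn.Module):
    """Mel branch + PPG branch -> concat -> RNN -> 3 sigmoid outputs."""
    def __init__(self, n_mels=80, n_phones=40, hid=64):
        super().__init__()
        self.mel, self.ppg = Branch(n_mels, hid), Branch(n_phones, hid)
        self.rnn = nn.LSTM(4 * hid, hid, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hid, 3)  # onset / offset / activation

    def forward(self, mel, ppg):
        z = torch.cat([self.mel(mel), self.ppg(ppg)], dim=-1)
        return torch.sigmoid(self.head(self.rnn(z)[0]))
```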
We tried two loss functions to train the phoneme classifier network. One is the framewise cross entropy loss, which is possible when we have time-aligned phoneme labels. Since it is difficult to obtain time-aligned phoneme labels at the frame level, especially for singing voice, we also used the connectionist temporal classification (CTC) loss function [12], which can handle the alignment between the predicted phoneme sequence (\(\hat{p}\)) and the ground truth phoneme sequence (\(p\)) of unequal lengths. The CTC algorithm predicts phoneme sequences with inserted blank labels along the possible prediction paths, where \(\mathcal{B}\) denotes the mapping that collapses a prediction path into a phoneme sequence. Since the CTC loss function is optimized for predicting the entire sequence, the prediction pattern tends to be spiky and sparse, and thus it does not find the boundaries of phonemes well [12, 13]. To solve this problem, we used two bidirectional LSTM layers and a single dense layer that reconstruct the input log-scaled mel-spectrogram (\(\hat{X}\)). This was proposed to enhance the time alignment when the CTC loss is used [14]. For the reconstruction loss (\(\mathcal{L}_{\text{recon}}\)), we normalized the log-scaled mel-spectrogram to the range \(-1\) to \(1\) (denoted \(\tilde{X}\)), applied the \(\tanh\) function for the activation, and used the \(L_{2}\) loss function. These loss functions are defined as: \[\mathcal{L}_{\text{CTC}} =-\log\sum_{\hat{p}:\,\mathcal{B}(\hat{p})=p}\prod_{t=0}^{T-1}\mathbb{P}(\hat{p}_{t}|X)\,, \tag{1}\] \[\mathcal{L}_{\text{recon}} =\left\|\tilde{X}-\hat{X}\right\|^{2},\] \[\mathcal{L}_{\text{PPG}} =\mathcal{L}_{\text{CTC}}+\mathcal{L}_{\text{recon}}\,,\] where \(T\) is the total number of time steps, \(p\) is the ground truth phoneme sequence, and \(\mathbb{P}(\hat{p}_{t}|X)\) is the PPG at time \(t\). ### Label Smoothing Unlike for other instruments, synthesized or auto-aligned onset/offset labels are hardly available in the case of singing datasets [15]. In addition, since singing has temporally soft onsets, locating the exact onset positions from a waveform or mel-spectrogram is by no means straightforward. Such softness of the onset is one of the factors that make singing onsets more challenging to train on. Previous frame-wise onset detection studies [6, 7] extended the duration of the onset label to solve this problem. Following these previous studies, we also used a smoothing method to increase the length of the onset and offset labels. Specifically, we smoothed the 1-D one-hot onset label sequence \(y_{\text{on}}:=y_{\text{on}}[n]\) (\(n\) denotes the time index) and the offset label sequence \(y_{\text{off}}:=y_{\text{off}}[n]\) through linear convolution with a scaled triangular window function \(w[n]\), which improves the precision at the same time. The scale factor of the triangular function, \(N\), stands for the number of frames with nonzero values. To keep the center of the label at \(1\) after the smoothing, we only used odd numbers for the scale factor \(N\). The convolution process is represented as \[w[n] =\begin{cases}1-\left|\frac{n}{(N+1)/2}\right|&\text{if }|n|\leq\frac{(N+1)}{2}\\ 0&\text{otherwise,}\end{cases} \tag{2}\] \[y_{\text{on},s}[n] =y_{\text{on}}[n]*w[n]\] \[y_{\text{off},s}[n] =y_{\text{off}}[n]*w[n]\] where the operation \(*\) represents the linear convolution and \(n\) is the frame index.
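A minimal NumPy sketch of this label smoothing, assuming a 20 ms hop so that \(N=5\) spans roughly 100 ms (function names are illustrative):

```python
import numpy as np

def triangular_window(N: int) -> np.ndarray:
    # Scaled triangular window of Eq. (2); N (odd) is the number of nonzero frames.
    n = np.arange(-(N // 2), N // 2 + 1)
    return 1.0 - np.abs(n / ((N + 1) / 2))

def smooth_labels(y: np.ndarray, N: int = 5) -> np.ndarray:
    # Linear convolution with the window; mode="same" keeps the sequence length,
    # and odd N keeps the original label positions at exactly 1.
    return np.convolve(y, triangular_window(N), mode="same")

y_on = np.zeros(20)
y_on[[4, 12]] = 1.0                      # two one-hot onset labels
print(smooth_labels(y_on, N=5).round(2))
```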
### Note Decoding To find the positions of onsets from the prediction output, we set a constant threshold and take each frame whose value is a local maximum above the threshold as an onset position. When finding the offset of a note, we first find the offset candidates between the current onset time and the next onset time. An offset candidate is either the highest peak of the offset prediction or the time frame at which the activation prediction goes lower than 0.5. If multiple offset candidates exist, we set the offset to the latest offset candidate. If no offset candidate is found, the offset of the note is set to the time frame of the next onset. The threshold for onset and offset is set to 0.2; to determine it, we evaluated the validation set with thresholds ranging from 0.1 to 0.9 in increments of 0.1 and identified the optimal value.

Figure 2: The proposed model architecture

For note-level singing transcription, we estimated the note-level pitch from the frame-wise F0s of the note segment, following [6]. We extracted F0s with the PYIN algorithm [16], which is one of the most accurate pitch trackers. To compress the F0 contour to a note-level pitch, we used the weighted median algorithm, which finds the 50th percentile of the ordered elements with given weights. In this experiment, we use a normalized Hann window function with the same length as the note segment as the weights of the weighted median, to reduce the influence of the F0s near the boundaries, which are the most expressive parts. Since the sum of all weight values should be one, the Hann window function is normalized by dividing by the sum of its elements. ## 4 Experiments ### Datasets We used SSVD v2.0 as the primary dataset [8]. It contains multiple sight-singing recordings, consisting of 67 singing audio files for the train and validation sets, and 127 audio files for the test set. The human-labeled annotations include onset, offset, and averaged note pitch. To use both phoneme and note labels given the audio, we also used the 50 songs in Korean from the CSD dataset [17], which have both note and phoneme labels of a female professional singer. Since the original note annotations of CSD were targeted at singing voice synthesis, we found they need some refinement for the note transcription task. Thus, we re-annotated the 50 songs of CSD for our experiment, following the rule suggested by [3]. The re-annotated labels of CSD can be found on our GitHub page 1. The refined CSD is split into 35, 5, and 10 songs for the train, validation, and test sets, respectively. Footnote 1: [https://github.com/seyong92/CSD_reannotation](https://github.com/seyong92/CSD_reannotation) To train the phoneme classifier for the model with SSVD v2.0, we used TIMIT [18], which contains 5.4 hours of English speech audio with time-aligned phoneme labels. While training the phoneme classifier network, we reduced the phoneme types to 39 following the CMU pronouncing dictionary [19]. For the model with CSD, we used the unaligned phoneme labels in CSD for training. To compare the transcription performance of the proposed model with previous work, we also used the ISMIR2014 [3] dataset, which contains 38 songs sung by both adults and children, as a test set. ### Evaluation Metrics We evaluated the models with the mir_eval library [20] for onset/offset detection and note-level transcription.
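For instance, onset-only (COn) and note-level (COnP/COnPOff) scores can be obtained roughly as follows; this is a hedged sketch with made-up note values, and the exact argument names should be checked against the mir_eval documentation:

```python
import numpy as np
import mir_eval

# Reference and estimated notes: (n, 2) onset/offset intervals in seconds
# and (n,) pitches in Hz (all values below are hypothetical).
ref_i = np.array([[0.10, 0.50], [0.60, 1.20]]); ref_p = np.array([220.0, 246.9])
est_i = np.array([[0.12, 0.48], [0.61, 1.10]]); est_p = np.array([221.0, 245.0])

# COn: onset-only matching (50 ms tolerance by default).
on_p, on_r, on_f = mir_eval.transcription.onset_precision_recall_f1(ref_i, est_i)

# COnPOff: onset + pitch + offset; passing offset_ratio=None ignores offsets (COnP).
p, r, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_i, ref_p, est_i, est_p, onset_tolerance=0.05, pitch_tolerance=50.0)
```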
We used the metrics proposed in [3]: the F1-measures of COn (correct onset), COff (correct offset), COnOff (correct onset and offset), COnP (correct onset and pitch), and COnPOff (correct onset, offset, and pitch). We used the default parameters of mir_eval, which set the onset tolerance to 50 ms, the offset tolerance to the larger value between 50 ms and 20% of the note duration, and the pitch tolerance to 50 cents. We also report the results when the onset/offset tolerances are 100 ms, considering the softness of singing onsets. ### Training Details We computed the 80-bin mel-spectrogram \(X\) with a hop size of 320 samples (20 ms) and an FFT size of 1024 samples after resampling the audio files to 16 kHz. For the modified _ConvNet_ module, we assigned 48/48/96 nodes to the convolutional layers and 768 nodes to the dense layer. We used 768 nodes in all bidirectional LSTM layers and set the last FC layer in the note onset/activation detector to have two separate nodes for onset and activation detection, respectively. For the label smoothing, we used a scale factor of 5 to extend the label length to 100 ms, which showed the best results in our experiment. To train the note onset/offset detection network, we used the AdamW optimizer [21] with a batch size of 8 and a learning rate of 1e-6. We decayed the learning rate by a factor of 0.98 every 1000 steps. While training, we used random audio segments of 5 seconds. The validation set was evaluated every 500 steps, and we stopped training when there was no improvement in the model for 10 validation steps. To train the phoneme classifier, we used the Adam optimizer with a batch size of 16 and a learning rate of 2e-4. We decayed the learning rate by a factor of 0.98 every 900 steps. We validated the phoneme classifier every 500 steps and stopped training when there was no improvement in the model for 5 validation steps. ## 5 Results and Discussions ### Ablation Study We conducted an ablation study to see the effect of the input features and model architectures.
The proposed model shown in Figure 2 corresponds to "Dual CRNNs + one RNN" in (g).

\begin{table} \begin{tabular}{l|c|cccc|cccccc} \hline \hline & & \multicolumn{4}{c|}{Trained on SSVD v2.0} & \multicolumn{6}{c}{Trained on CSD-refined} \\ & & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c|}{SSVD v2.0} & \multicolumn{2}{c}{CSD-refined} & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c}{SSVD v2.0} \\ \hline Model & Feature & COn & COff & COn & COff & COn & COff & COn & COff & COn & COff \\ \hline (a) Single CRNN & \(X\) & 0.8244 & 0.7751 & 0.8956 & 0.8983 & 0.9797 & 0.9719 & 0.8812 & 0.7524 & 0.8866 & 0.8007 \\ (b) Dual CRNNs + one RNN & \(X,X\) & 0.9133 & 0.8513 & 0.9486 & 0.9566 & 0.9888 & 0.9838 & 0.9904 & 0.7636 & 0.8988 & 0.8089 \\ \hline (c) Single CRNN & \(\hat{P}\) & 0.8655 & 0.7776 & 0.9223 & 0.9105 & 0.9890 & 0.9660 & 0.9048 & 0.7685 & 0.9063 & 0.8296 \\ (d) Dual CRNNs + one RNN & \(\hat{P},\hat{P}\) & 0.9094 & 0.8310 & 0.9342 & 0.9470 & 0.9907 & 0.9638 & 0.9090 & 0.7733 & 0.9142 & 0.8336 \\ \hline (e) Dual CNNs + one RNN & \(X,\hat{P}\) & 0.9024 & 0.8349 & 0.9439 & 0.9420 & 0.9877 & 0.9791 & 0.9016 & 0.7852 & 0.9098 & **0.8340** \\ (f) Dual CNNs + two RNNs & \(X,\hat{P}\) & 0.9230 & 0.8538 & 0.9496 & 0.9531 & 0.9914 & 0.9839 & **0.9150** & **0.7804** & **0.9199** & 0.8328 \\ (g) Dual CRNNs + one RNN & \(X,\hat{P}\) & **0.9305** & **0.8576** & **0.9569** & **0.9692** & **0.9923** & **0.9864** & 0.9145 & 0.7723 & 0.9166 & 0.8257 \\ \hline \hline \end{tabular} \end{table} Table 1: Onset/Offset detection results from various neural network architectures with two input features. \(X\) and \(\hat{P}\) denote mel-spectrogram and PPG, respectively. (g) corresponds to the neural network architecture in Figure 2.

We first compare it to a single CRNN model with only one type of input feature (either the mel-spectrogram in (a) or the PPG in (c)). Considering that the model architecture can affect the performance, we also compared the proposed model to the same "Dual CRNNs + one RNN" but with one type of input feature for both inputs (either the mel-spectrogram in (b) or the PPG in (d)). Given the proposed model, we also removed the RNN module in each CRNN branch in (e), and then stacked another RNN module on top of (e) in (f). Table 1 shows the onset/offset detection results of all compared models. Single CRNNs with only one input feature in (a) and (c) have significantly lower accuracy than the proposed model in (g). The gap is relatively smaller when the model was trained with CSD. Interestingly, the single CRNN model with the PPG consistently outperformed the one with the mel-spectrogram. The results from the same model architecture with different input features in (b), (d), and (g) show that using both the mel-spectrogram and the PPG is more effective than using either one of them. However, the gaps are less significant than those in the comparison with the single CRNNs in (a) and (c). This indicates that the model architecture is also important to improve the performance. Likewise, the results in (e), (f), and (g) show that the design choice of the neural network affects the performance. Since CSD is a small dataset, the proposed model has a tendency to overfit it. Overall, the proposed model in (g) shows the best performance. We further investigated the effect of the input features by looking into the recall accuracy for two special types of onsets: re-onset and transition. They are note onsets that are 20 ms or less apart from the offset of the previous note.
The difference between the two types is whether the pitch changes (transition) or not (re-onset). A re-onset usually occurs when the syllable in the lyrics or the energy changes while the same pitch continues. Note that, since our model does not predict the onset types, only the recall accuracy can be computed. As shown in Figure 3, the models with the mel-spectrogram (in red) tend to detect more transitions, indicating that the mel-spectrogram is more sensitive to pitch change. On the other hand, the models with the PPG (in blue) tend to detect more re-onsets, showing that the PPG captures phonetic changes well. Lastly, the models with both features have more balanced accuracy on both transition and re-onset. The demo examples, more analysis, and pre-trained models are available on the companion website. 2 Footnote 2: [https://seyong92.github.io/phoneme-informed-transcription-blog/](https://seyong92.github.io/phoneme-informed-transcription-blog/) ### Comparison with Prior Work Table 2 shows the comparison with prior work on the ISMIR2014 dataset, which has been widely used for singing voice onset/offset detection (or note segmentation). For a fair comparison, we retrained a recent state-of-the-art model [8] with the same dataset we used for the proposed model. Our proposed model outperforms the state-of-the-art model in onset F-score at both tolerances, while it is slightly worse in offset F-score at the 50 ms tolerance. The publicly available note transcription software (Tony) and model package (Omnizart) have significantly lower accuracy than the two models. Finally, to see the performance of singing note transcription including pitch information, we measured COnP and COnPOff on ISMIR2014 and SSVD v2.0 in Table 3. The results show that the proposed model achieves consistently better performance than Tony and Omnizart. ## 6 Conclusion We presented a neural network architecture for note-level singing transcription that takes advantage of the PPG on top of the mel-spectrogram. Through the ablation study, we examined various architectures along with the two input features, showing that the additional phonetic information is effective in singing onset/offset detection. Also, we showed that the proposed model outperforms the compared models on ISMIR2014 and SSVD v2.0. For future work, we plan to explore models that effectively handle weak supervision from noisy melody and lyrics labels on a large-scale dataset [24]. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline Model & \multicolumn{3}{c}{COn (50 ms)} & \multicolumn{3}{c}{COn (100 ms)} & \multicolumn{3}{c}{COff (50 ms)} & \multicolumn{3}{c}{COff (100 ms)} \\ & P & R & F & P & R & F & P & R & F & P & R & F \\ \hline Tony [22] & 0.7068 & 0.6326 & 0.6645 & 0.8402 & 0.7486 & 0.7877 & 0.7862 & 0.6981 & 0.7358 & 0.8405 & 0.7471 & 0.7870 \\ Omnizart [7, 23] & 0.7797 & 0.8229 & 0.7951 & 0.8667 & 0.9153 & 0.8843 & 0.7698 & 0.8132 & 0.7852 & 0.8394 & 0.8842 & 0.8554 \\ MusicYOLO (retrained) [8] & 0.9427 & 0.8970 & 0.9176 & **0.9711** & 0.9247 & 0.9456 & **0.8924** & **0.8504** & **0.8693** & **0.9476** & 0.9024 & 0.9227 \\ **Proposed** & **0.9448** & **0.9188** & **0.9305** & 0.9652 & **0.9387** & **0.9506** & 0.8701 & 0.8473 & 0.8576 & 0.9429 & **0.9176** & **0.9290** \\ \hline \hline \end{tabular} \end{table} Table 2: Onset/Offset detection results on ISMIR2014. Both MusicYOLO and the proposed model were trained with SSVD v2.0. Omnizart is a pretrained note transcription model package (not trained with SSVD v2.0). Tony is a free, open-source application for pitch and note transcription.
\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c}{SSVD v2.0} \\ \hline Model & COnP & COnPOff & COnP & COnPOff \\ \hline Tony [22] & 0.6009 & 0.4621 & 0.7311 & 0.6794 \\ Omnizart [7, 23] & 0.6174 & 0.4992 & 0.6047 & 0.5151 \\ **Proposed** & **0.8975** & **0.7728** & **0.8558** & **0.8303** \\ \hline \hline \end{tabular} \end{table} Table 3: Note transcription results on ISMIR2014 and SSVD v2.0. The proposed model was trained with SSVD v2.0.

Figure 3: Transition and re-onset recall of the models in the ablation study on ISMIR2014. The red triangle is the model with mel-spectrogram, the blue square is the model with PPG, and the green circle is the model with both features.
2301.11164
A Graph Neural Network with Negative Message Passing for Graph Coloring
Graph neural networks have received increased attention over the past years due to their promising ability to handle graph-structured data, which can be found in many real-world problems such as recommender systems and drug synthesis. Most existing research focuses on using graph neural networks to solve homophilous problems, but little attention has been paid to heterophily-type problems. In this paper, we propose a graph network model for graph coloring, which is a class of representative heterophilous problems. Different from conventional graph networks, we introduce negative message passing into the proposed graph neural network for more effective information exchange in handling graph coloring problems. Moreover, a new loss function taking into account the self-information of the nodes is suggested to accelerate the learning process. Experimental studies are carried out to compare the proposed graph model with five state-of-the-art algorithms on ten publicly available graph coloring problems and one real-world application. Numerical results demonstrate the effectiveness of the proposed graph neural network.
Xiangyu Wang, Xueming Yan, Yaochu Jin
2023-01-26T15:08:42Z
http://arxiv.org/abs/2301.11164v1
# A Graph Neural Network with Negative Message Passing for Graph Coloring ###### Abstract Graph neural networks have received increased attention over the past years due to their promising ability to handle graph-structured data, which can be found in many real-world problems such as recommender systems and drug synthesis. Most existing research focuses on using graph neural networks to solve homophilous problems, but little attention has been paid to heterophily-type problems. In this paper, we propose a graph network model for graph coloring, which is a class of representative heterophilous problems. Different from conventional graph networks, we introduce _negative message passing_ into the proposed graph neural network for more effective information exchange in handling graph coloring problems. Moreover, a new loss function taking into account the self-information of the nodes is suggested to accelerate the learning process. Experimental studies are carried out to compare the proposed graph model with five state-of-the-art algorithms on ten publicly available graph coloring problems and one real-world application. Numerical results demonstrate the effectiveness of the proposed graph neural network.

graph neural networks, graph coloring, negative message passing, self-information, aggregator

## I Introduction Different types of data, such as traditional unordered data, time-series data, and graph-structured data, may be encountered in solving real-world optimization problems. Graph-structured data contains rich relationship information between the attributes, making it more challenging to effectively learn the knowledge in the data using traditional machine learning models. Recently, graph neural networks (GNNs) have become extremely popular in the machine learning community, thanks to their strong ability to capture relational information in the data. Many different types of GNNs have been proposed, and they can usually be categorized according to their aggregation methods, such as the graph convolutional network (GCN) [1], GraphSAGE [2], and the graph attention network (GAT) [3], among others. Since GNNs enable nodes to aggregate their neighbors' information, they have become a powerful tool for solving problems in social networking [4], bioinformatics [5], and community detection [6], to name a few. Node classification is often cast as a bi- or multi-class classification problem that takes the features of nodes and edges as input and outputs each node's probability of belonging to different classes. During the optimization process, each node combines information of its own and its neighbors to generate embedding vectors. By representing node and edge features in a higher-dimensional space, embedding vectors are used to solve problems by downstream machine learning tasks in a non-autoregressive or autoregressive way [7]. However, most of the optimization problems mentioned above are continuous and homophilous. That is, two nodes sharing an edge tend to be very similar, i.e., have similar embedding vectors. These homophilous problems described as graphs have many real-world applications and have been studied extensively [8, 9, 10]. On the contrary, some problems exhibit the heterophily property [11], where the connected nodes should have embeddings as different as possible. For example, graph coloring problems (GCPs) are a type of classical heterophilous problem, because connected nodes are defined to have different colors.
GCPs have attracted increased research interest since many real-world applications can be formulated as graph coloring problems, such as arranging timetables and schedules, managing air traffic flow, and many others [12]. In fact, GCPs, like most combinatorial optimization problems (COPs), are NP-hard problems, which require highly intensive computation to obtain exact optimal solutions. Luckily, some approaches have been proposed to find approximate optimal solutions of COPs with the help of GNNs within an acceptable period of computation time [13, 14, 15]. For example, Liu _et al._[16] proposed two supervised residual gated GCNs to directly predict the entire Pareto set of multi-objective facility location problems. GNNs have inherent advantages in representing GCPs and solving such graph-structured problems, as samples (nodes) can aggregate information from their neighbors. Recently, some efforts have been made to apply GNNs to solve GCPs, and differently structured graph neural networks have been proposed. GNN-GCP [17] uses a network containing a GNN to predict the chromatic number of a given graph and uses the generated embedding vectors for clustering to obtain a color assignment. Li _et al._[18] explored several rules for using a GNN to solve GCPs, and proposed a graph neural network called GDN. PI-GNN [19] is inspired by the Potts model, and it is shown to perform well when combined with a novel loss function. The above algorithms basically follow the classical network structures, which were proposed to solve homophilous problems, such as node classification and link prediction [20]. However, these structures may not necessarily be suited for solving GCPs, and improvements should be made in the structural design to better adapt to the heterophilous characteristics of GCPs. For this reason, a negative message passing strategy is proposed considering the requirements of GCPs, i.e., two connected nodes should have embedding vectors as different as possible. Meanwhile, a color assignment with no conflict and stable convergence during the training process are both desired when solving GCPs. Therefore, a loss function consisting of a utility objective and a convergence objective is proposed to guarantee the good performance of the proposed neural network and a stable training process. Section II presents the preliminaries of this work, including the definition of graph coloring problems, the aggregation methods of classical GNNs, and the related work that uses GNNs to solve graph coloring problems. In Section III, the proposed graph network model with negative message passing and a new loss function is detailed. Section IV describes the experimental results on the benchmark graph coloring problems and a real-world problem. Ablation studies and an analysis of the computational complexity are also presented. Finally, conclusions and future work are given in Section V. ## II Preliminaries ### _Definition of Graph Coloring Problems_ We consider an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,2,\ldots,n\}\) is the node set and \(\mathcal{E}\) is the edge set, with an edge connecting two nodes represented by \((u,v)\in\mathcal{E}\). The adjacency matrix \(A\) of the graph \(\mathcal{G}\) also represents the connections between the nodes. When nodes \(i\) and \(j\) are connected, \(A(i,j)\) and \(A(j,i)\) are equal to 1; otherwise, the elements in these two positions of \(A\) are equal to 0.
Graph coloring problems [21, 22, 23] aim to minimize the number of conflicts while using the minimum number of colors. Mathematically, GCPs can be formulated in two different forms. In the first formulation, GCPs are defined as a constrained minimization problem as follows: \[\min\ k,\quad\mathrm{s.t.}\ \sum_{(u,v)\in\mathcal{E}}f(u,v)=0, \tag{1}\] where \(f(u,v)\) represents the clash between neighboring nodes \(u\) and \(v\), with \((u,v)\in\mathcal{E}\). If \(u\) and \(v\) share the same color, \(f(u,v)=1\); otherwise, \(f(u,v)=0\). \(k\) is the number of colors used in the graph. If \(k\) colors can fill a graph with no clash, the graph is called \(k\)-colorable. The chromatic number \(\mathcal{X}(\mathcal{G})\) is the minimum such \(k\), i.e., the optimal number of colors that does not result in any conflicts in \(\mathcal{G}\). Alternatively, GCPs can also be expressed as a constraint satisfaction problem with a given number of colors \(k\): \[\min\ \sum_{(u,v)\in\mathcal{E}}f(u,v),\quad\mathrm{s.t.}\ \text{the number of colors is fixed to}\ k. \tag{2}\] This formulation aims to minimize the clashes in a graph, given that the number of colors that can be used is \(k\). The optimization problems defined in Eq. (1) and Eq. (2) are slightly different and may be suited for different requirements or priorities in solving the same problem. Take the assignment of taxis to customer requests as an example. In Eq. (1), the customer requests are the top priority, and a minimal number of taxis should be found. In this case, the assumption is that there is a sufficient number of taxis. However, if the number of taxis is limited during the peak period, the assignment that can satisfy the majority of customer requests would be preferred. This work will focus on minimizing the number of total conflicts under a given number of colors, i.e., the GCP will be solved according to the formulation in Eq. (2). An illustrative example consisting of six nodes and seven edges is given in Fig. 1. In Fig. 1 (a), three colors are used, and no connected neighboring nodes have the same color, meaning that this is an optimal solution to the given example.

Fig. 1: Example solutions to graph coloring with six nodes and seven edges. (a) An optimal solution for the given problem, where \(\mathcal{X}(\mathcal{G})=3\). (b) The number of total conflicts is zero, but the number of used colors can be reduced. (c) A solution using only two colors while \(\mathcal{X}(\mathcal{G})\) is \(3\), in which case the number of total conflicts should be minimized.
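To make the objective of Eq. (2) concrete, a minimal Python sketch that counts the conflicts \(\sum f(u,v)\) of a candidate color assignment; the six-node edge list below is a hypothetical instance, since the paper does not enumerate the edges of Fig. 1:

```python
def num_conflicts(edges, colors):
    # Objective of Eq. (2): f(u, v) = 1 iff an edge's endpoints share a color.
    return sum(1 for u, v in edges if colors[u] == colors[v])

# A hypothetical 6-node, 7-edge instance.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 5), (4, 5)]
print(num_conflicts(edges, [0, 1, 2, 0, 0, 1]))  # -> 0: a conflict-free 3-coloring
print(num_conflicts(edges, [0, 1, 0, 0, 0, 1]))  # -> 2: only two colors used
```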
The input to GNNs is usually the feature vector or randomly-generated vector of each node. With an adjacency matrix, one node generates a new embedding in the next hidden layer by combining its own and neighbors' aggregated embedding. Therefore, embeddings in the first layer contain first-order neighborhood information and the \(k\)-th hidden layer captures the \(k\)-th neighborhood information. The aggregation and combination operators may differ in various GNNs based on different purposes and optimization tasks. In the following, we briefly review some classical and popular aggregation and combination methods. GCN [1] considers the equal importance of a node and its neighbor information, which integrates the aggregation and combination methods. The embedding of the \(k\)-th layer is calculated by \(H^{k}=\sigma(\hat{A}H^{k-1}W^{k-1})\), where \(\hat{A}=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\) is the normalized adjacency matrix with self connections, \(D\) is the degree matrix, and \(W^{k-1}\) is a trainable weight matrix of the \(k-1\)-th layers. On the other hand, some GNNs separate the aggregation and combination methods, assigning different importance to the node embedding and the aggregated neighborhood embedding. GraphSAGE [2] is one example of this type; the embedding of node \(v\) in the \(k\)-th hidden layer is obtained by \(h_{v}^{k}\leftarrow\sigma(W^{k-1}\cdot CONCAT(h_{v}^{k-1},h_{\mathcal{N}(v)} ^{k}))\), where \(h_{\mathcal{N}(v)}^{k}\gets AGGREGATE(h_{u\in\mathcal{N}(v)}^{k-1})\). There are also many other types of aggregation and combination methods proposed very recently, such as HetGNN [26], DNA [27], non-local graph neural networks [28], and SAR [29]. ### _Related Work_ Only sporadic work on solving graph coloring problems with the help of graph neural networks have been reported, which are based on supervised or unsupervised learning. GNN-GCP [17] generates color embeddings and node embeddings with a process of learning whether the graph is \(k\)-colorable or not. At first, \(2^{15}\) positive and \(2^{15}\) negative GCP instances are generated with ground truth given by a GCP solver. The proposed neural network learns the difference between the ground truth and the prediction by using the binary cross entropy as a loss function. The prediction is gotten through a GNN to aggregate neighbor information, an RNN to update the embeddings, and an MLP to get the final logit probability. If the network predicts a graph is k-colorable, the second stage is applied by clustering vertex embeddings using k-means, and nodes in the same cluster share the same color. Different from the above work that relies on supervised learning and a set of pre-solved GCP instances, unsupervised learning has also been adopted to tackle GCPs by constructing a loss function without requiring the ground truth. In general, the output of unsupervised learning for GCPs are probability vectors of the nodes, indicating which color should be assigned to each node. GDN [18] uses a margin loss function, which minimizes the distance between a pre-defined margin and a Euclidean distance between node pairs. Schuetz _et al._[19] take advantage of the close relationship between GCPs and the Potts model so that the partition function of the Potts model can be converted to the chromatic function of \(\mathcal{G}\) mathematically. Besides, the Potts model only distinguishes whether neighboring spins are in the same state or not, which is very similar to the definition of GCPs. 
Consequently, a loss function is proposed in PI-GNN that minimizes the inner product of node pairs. Generally, loss functions in unsupervised learning aim to minimize the similarity between the connected nodes when solving heterophilous problems such as graph coloring problems. Besides, solving GCPs with heuristic algorithms has also been studied. Tabucol [30] is a Tabu-based algorithm that defines Tabu moves and a Tabu list tailored for GCPs. It changes the color (for example, red) of one randomly selected node into another color (green) to reduce the number of conflicts. The color pair (green, red) is then put into the Tabu list for a certain number of iterations to restrict this node from changing back to red. HybridEA [31], an evolutionary algorithm, modifies the traditional way of generating offspring and uses a Tabu-search method as the mutation operator. ## III The Proposed Graph Neural Network Model The main difference between heterophilous and homophilous optimization problems is that nodes sharing an edge should be expressed differently rather than similarly. For example, in node classification tasks, neighboring nodes should be classified into different classes instead of the same class. Therefore, the embeddings of connected nodes should be as different as possible, and the first-order neighbors need to pass negative messages to the node. On the other hand, the second-order neighborhood may contain information that positively impacts the node. Therefore, inspired by the heterophilous property of graph coloring problems, we propose a new AC-GNN, called GNN-1N, which mainly focuses on a first-order (1st) negative message passing strategy. To better solve GCPs, the proposed algorithm should take into account two objectives. One is to find a solution without conflicts, and the other is to achieve fast and stable convergence. Based on the above two objectives, a loss function for solving graph coloring problems in an unsupervised way is proposed. ### _Forward Propagation_ The forward propagation of the proposed framework is described in this subsection. We apply GraphSAGE [2] as the baseline GNN model, and some modifications specialized for solving GCPs are made. In the original paper of GraphSAGE, the authors give three aggregation methods: the mean aggregator, the LSTM aggregator, and the pool aggregator. The embedding \(h_{v}^{k}\) of node \(v\) in the \(k\)-th layer using the mean aggregator is calculated as follows: \[\begin{split}& h_{\mathcal{N}(v)}^{k}=mean\{h_{u}^{k-1}\},\forall u \in\mathcal{N}(v),\\ & h_{v}^{k}=\sigma\left(W_{self}^{k}\cdot h_{v}^{k-1}+W_{neigh}^ {k}\cdot h_{\mathcal{N}(v)}^{k}\right),\end{split} \tag{3}\] where \(\mathcal{N}(v)\) is the set of nodes connected to node \(v\), \(h_{\mathcal{N}(v)}^{k}\) is the aggregated embedding of the neighborhood of \(v\), \(\sigma\) is the activation operator, and \(W_{self}^{k}\) and \(W_{neigh}^{k}\) are the learnable weight matrices of the \(k\)-th layer. In this work, the mean aggregator is considered for improvement. At the beginning of forward propagation, the embedding of each node is usually its feature vector. However, some graphs in GCPs may not have features, so the embeddings are generated randomly. In the first hidden layer, nodes gather the first-order neighborhood information, which should make negative contributions.
Thus, the embedding of the first layer is obtained by \[\begin{split}& h_{\mathcal{N}(v)}^{1}=mean\{h_{u}^{0}\},\forall u\in\mathcal{N}(v),\\ & h_{v}^{1}=\sigma\left(W_{self}^{1}\cdot h_{v}^{0}-\alpha\cdot W_{neigh}^{1}\cdot h_{\mathcal{N}(v)}^{1}\right),\end{split} \tag{4}\] where \(h_{v}^{0}\) is the randomly generated feature of node \(v\) in the input layer, and \(\alpha\) is a trainable parameter controlling the negative influence of the neighborhood. Elements in \(W_{self}^{1}\) and \(W_{neigh}^{1}\) are all positive values distributed uniformly from 0 to 1, and \(\alpha\) is initialized to \(0.5\). This strategy is reasonable and natural. For example, assume that there is a node \(v\) with two neighbors \(u_{1}\) and \(u_{2}\), whose embeddings are \([0.8,0.6,0.1]\), \([0.7,0.1,0.1]\), and \([0.5,0.1,0.7]\), respectively. If we let \(W_{self}^{1}\) and \(W_{neigh}^{1}\) be identity matrices, set \(\alpha=0.5\), and take \(\sigma\) as the identity for illustration, the embedding of node \(v\) calculated by Eq. (4) is \([0.5,0.55,-0.1]\). If embeddings also represent the probability of the assigned color, \(v\) and \(u_{1}\) conflict with each other before applying Eq. (4), as these two nodes both prefer the first of the three given colors. The color assigned to node \(v\) changes into the second one after applying Eq. (4). On the other hand, if Eq. (3) is used as the updating strategy in the first hidden layer, \(v\) and \(u_{1}\) remain in conflict with each other. According to the properties of GNNs, the second hidden layer can aggregate the second-order neighborhood information, which may contribute some helpful positive influence. Therefore, in the second layer, the original mean aggregator of GraphSAGE (Eq. (3)) is used to generate the embeddings \(h_{v}^{2}\). For graph coloring problems, there are two ways to assign colors to nodes. In the first, there are \(k\) sets containing nodes, and all nodes in one set have the same color. The second way uses a \(k\)-length probability vector for each node, and the node is assigned the \(i\)-th color if the probability in the \(i\)-th position of the vector is the largest. This work treats GCPs as an unsupervised classification task, and the second way is used to assign the \(k\) colors to the nodes. Therefore, the dimension of \(h_{v}^{2}\) equals the color number \(k\). The probability vector of node \(v\) is obtained as follows: \[p_{v}=softmax\{h_{v}^{2}\}, \tag{5}\] where \(softmax\) is the softmax operator, i.e., \(p_{v}(j)=\frac{\exp(h_{v}^{2}(j))}{\sum_{i=1}^{k}\exp(h_{v}^{2}(i))}\). The final color assigned to \(v\) is the color with the highest probability.

### _Loss Function_

A loss function is proposed to achieve the utility-based and convergence-based goals in an unsupervised way. The former is to minimize the conflicts between connected nodes with a given number of colors, while the latter is to minimize the uncertainty of nodes, which stabilizes convergence. As no ground truth is available, the utility-based objective function uses the probabilities of nodes to reflect the relationship between connected nodes. The main idea is to maximize the difference, or minimize the similarity, between node pairs. Several loss functions have been proposed based on this idea. Among them, the loss function inspired by the Potts model performs very well and is therefore adopted as the utility-based objective function in this work: \[f_{utility}=\sum_{(u,v)\in\mathcal{E}}p_{v}^{T}\cdot p_{u}. \tag{6}\] Eq. (6) only aims to minimize the inner product between two probability vectors, which can be considered as minimizing the similarity between two nodes. This is reasonable for solving heterophilous problems such as GCPs. Note that no other _a priori_ knowledge is required, making it suitable for solving other unsupervised learning problems.
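As a concrete illustration, a minimal PyTorch sketch of Eq. (6) follows. The tensor layout (an \(n\times k\) probability matrix and a \(2\times|\mathcal{E}|\) edge index) and all names are our assumptions, not the authors' implementation:

```python
import torch

def utility_loss(probs: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Potts-style utility term of Eq. (6): sum over edges of p_v . p_u.

    probs      -- (n, k) tensor of per-node color probabilities (softmax rows)
    edge_index -- (2, |E|) tensor listing each undirected edge once as (u, v)
    """
    p_u = probs[edge_index[0]]   # (|E|, k) probabilities of one endpoint
    p_v = probs[edge_index[1]]   # (|E|, k) probabilities of the other endpoint
    return (p_u * p_v).sum()     # inner product per edge, summed over all edges
```

The self-information term of Eq. (7), introduced next, can be added to this value with weight \(\lambda\) in the same way to form the full loss of Eq. (8).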
Fig. 2: The framework of the proposed method, where the first-order aggregator applies Eq. (4) and the second-order aggregator uses the original aggregator of Eq. (3). Between the first and the second hidden layers, dropout is employed to avoid getting stuck in local optima.

In addition to maximizing the difference between connected nodes, we also intend to increase the probability of one node being assigned a certain color, to make the learning process more stable. Therefore, we introduce self-information to construct \(f_{conv}\), which is formulated as follows: \[f_{conv}=\sum_{i=1}^{n}p_{i}^{T}\cdot\log p_{i}. \tag{7}\] Self-information represents the amount of information in an event. If the value of self-information is large, the event contains more information, indicating high uncertainty. Otherwise, the uncertainty of the event is low. In terms of GCPs, high self-information of one node means that the probabilities of being assigned to different colors are approximately equal. This results in unstable convergence, because the color assignment varies greatly under small weight changes. Conversely, nodes with small self-information hold more confidence in the current color assignment, which helps stabilize the convergence. By combining the terms in Eq. (6) and Eq. (7), we get the following loss function: \[\min F=f_{utility}+\lambda f_{conv}, \tag{8}\] where \(\lambda>0\) is a hyperparameter.

### _The Overall Framework_

As shown in Fig. 2, the framework mainly consists of the forward propagation and the optimization process. In the forward propagation, a randomly generated embedding \(h_{i}^{0}\) is assigned to the \(i\)-th node. Taking \(n_{1}\) as an example, it aggregates its first-order negative neighborhood embedding in the first hidden layer and its second-order neighborhood embedding in the second layer. Between these two hidden layers, dropout [32] is applied to prevent the solver from getting stuck in a local optimum. After \(n_{1}\) aggregates the two-hop neighborhood information, \(h_{1}^{2}\) is obtained, followed by the softmax function to get a probability vector \(p_{1}\). \(p_{1}\) contains \(k\) elements representing the probabilities of choosing the \(k\) colors. After getting the probability vectors of the nodes, the loss function \(F\) is calculated with its two terms, namely \(f_{utility}\) and \(f_{conv}\). The AdamW optimizer is used to compute the gradients and update the weights of the graph neural network.

## IV Numerical Experiments

In this section, experiments on the COLOR dataset [33] are presented first, and then an application to taxi scheduling is given. An ablation study is provided to demonstrate the stabilization ability of the proposed loss function, and finally, the computational complexity is analyzed.

### _Experiments on COLOR Dataset_

In this experiment, we use the publicly available COLOR dataset to evaluate the performance of the proposed algorithm and its peer methods. The COLOR dataset is a classical, widely used graph dataset in the field of graph coloring problems, where Myciel graphs are based on the Mycielski transformation and Queens graphs are constructed on an \(n\times n\) chessboard with \(n^{2}\) nodes.
More detailed information on the graphs, including the number of nodes and edges in each graph, can be found in Table I. Eq. (2) is optimized in the following experiments given the color number \(k\), which is also shown in Table I. Five algorithms are chosen as peer algorithms: Tabucol [30], HybridEA [31], GDN [18], PI-GCN [19], and PI-SAGE [19]. Tabucol and HybridEA are Tabu-based heuristic algorithms. The remaining three algorithms, GDN, PI-GCN, and PI-SAGE, are GNN-based unsupervised algorithms. The above five methods focus on minimizing conflicts with given numbers of colors. GNN-GCP mainly predicts the color number of a given graph and is therefore not included in this comparison. The results in Table I show the number of conflicts in each graph (cf. \(f_{utility}\) in Eq. (6)) found by the proposed GNN-1N and the algorithms under comparison. The results of Tabucol and HybridEA are taken from [18], with a maximum run time of 24 hours per graph, and the results of GDN, PI-GCN, and PI-SAGE are taken from [19]. For a fair comparison, we use the same maximum number of iterations (\(10^{5}\)) to run our method (GNN-1N) on a GPU. Besides, an early stopping mechanism is applied within \(10^{3}\) iterations if the value of the loss function changes by less than \(0.001\). The hyperparameters, including the dimensions of \(h_{i}^{k}\), the learning rate \(\eta\) of the AdamW optimizer, the dropout probability, and \(\lambda\) in the loss function \(F\), are optimized in a similar way to that in [19]. All results listed in Table I are the best coloring results of all methods, from which we can see that the proposed method performs the best on all graphs, especially on large and dense graphs such as queen11-11 and queen13-13. By contrast, traditional Tabu-based methods and the other machine-learning methods cannot find as few conflicts as GNN-1N does. To gain more insight into the solutions found by our method, we plot the color assignment of queen13-13 in Fig. 3.

Fig. 3: The final color assignment of queen13-13 with 169 nodes given by our method. Only 15 conflicts, highlighted in red lines, exist out of 3328 edges (in grey lines) in this figure.

There are 169 nodes in this graph, and 13 colors should be used to color the nodes. There are 3328 edges plotted with grey lines and 15 conflicting edges highlighted with red lines. The normalized error rate is \(0.45\%=\frac{15}{3328}\times 100\%\), which is relatively small, indicating that our method has a good ability to find low-conflict color assignments.

### _Application_

In this section, we take the taxi scheduling problem as an example to show the ability of GNN-1N to solve graph-structured problems in real life. We give a simple scenario of assigning taxis to customer requests. Seven customers call a taxi company to book taxis for one day. They all plan to book a taxi during the evening rush hour from 17:00 to 18:00, and each confirms a time period. The task is to satisfy all customers' requests with the available number of taxis, assuming that only four taxis are available during this peak hour. To solve this problem, three steps are taken: 1) encoding, 2) optimization, and 3) decoding. The encoding step transfers the given timetable into a graph. As shown in Fig. 4 (a), the timetable with the departure and arrival times of the seven customers is given, and each customer is represented by a node in the graph. As customers with overlapping schedules cannot use the same taxi, two nodes share an edge if the corresponding customers' time windows overlap; a minimal encoding sketch is given below.
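This sketch assumes hypothetical booking windows (the actual timetable is the one in Fig. 4 (a)); all names and values are illustrative:

```python
def overlaps(a, b):
    """Two half-open intervals (start, end), in minutes, overlap iff each
    starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def build_conflict_graph(requests):
    """requests: list of (start, end) booking windows, one per customer.
    Returns the edge list of the graph coloring instance: one node per
    customer, one edge per pair of time-overlapping requests."""
    edges = []
    for i in range(len(requests)):
        for j in range(i + 1, len(requests)):
            if overlaps(requests[i], requests[j]):
                edges.append((i, j))
    return edges

# Assumed windows in minutes after 17:00; customers 1 and 3 overlap around
# 17:13-17:15, so the edge (0, 2) appears.
requests = [(0, 15), (20, 40), (13, 25)]
print(build_conflict_graph(requests))   # [(0, 2), (1, 2)]
```

The returned edge list, together with the number of available taxis as the color number \(k\), defines the GCP instance passed to GNN-1N for the optimization step.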
For example, the arrival time of customer \(u_{1}\) is earlier than the departure time of \(u_{2}\), so \(u_{1}\) and \(u_{2}\) are not connected. On the other hand, the first customer and the third customer both plan to use a taxi from 17:13 to 17:15. Therefore, \(u_{1}\) and \(u_{3}\) are connected, as shown in Fig. 4 (b). After obtaining the graph description of the relationship between customers' schedules, the generated graph is optimized by GNN-1N with a specific color number. The color number is the number of available taxis, which is four here. Figure 4 (c) shows the final color assignment with four colors after optimization, and no conflict is found in the solution. According to the color assignment, the seven customers are assigned into four groups, that is, \(\{u_{1},u_{2},u_{7}\}\), \(\{u_{3}\}\), \(\{u_{4},u_{6}\}\), and \(\{u_{5}\}\).

### _Ablation Study and Time Complexity Analysis_

An ablation study is conducted to show the effectiveness of the self-information term \(f_{conv}\) (Eq. (7)) included in the loss function \(F\). Figure 5 shows the conflicts during training with and without \(f_{conv}\) in the loss function on queen6-6 and queen8-12 over \(10^{5}\) iterations. To be specific, Eq. (6) is the \(f_{utility}\) proposed in [19], and Eq. (8) is the loss function proposed in this paper, which adds the self-information term. The hyperparameters are obtained directly from [19], and the \(\lambda\) in Eq. (8) is set to \(0.25\). As we can see in Fig. 5, the conflict curve obtained with Eq. (6) is unstable and sometimes fluctuates dramatically. By contrast, the conflict curve trained with Eq. (8) decreases smoothly. The curves in Fig. 5 indicate the stabilizing effect of self-information, which can be attributed to the convergence term in Eq. (8). The runtime (in seconds) is plotted in Fig. 6, which shows the time required by GNN-1N to run \(10^{5}\) iterations on one graph coloring problem. It is reasonable that the runtime increases as the number of nodes and edges increases, because the computational cost is mainly concentrated in the node aggregation and backpropagation processes.

Fig. 4: Taxi scheduling problem. (a) The timetable containing the departure and arrival times of seven customers. (b) Encoding the timetable into a graph coloring problem. (c) Optimizing the graph coloring problem with our unsupervised neural network method and decoding it. Under this scenario, customers and taxis correspond to nodes and colors in the graph coloring problem, respectively.

In general, the time complexity of GNN-1N is similar to that of PI-GNN [19], as no additional computation is added to the proposed algorithm.

## V Conclusion and Future Work

The graph coloring problem is a classical graph-based problem aiming to find a color assignment using a given number of colors. As the GC problem is NP-hard, it is almost impossible to obtain a feasible solution in an acceptable time for large instances. Moreover, due to their graph structure, such problems cannot be easily and effectively solved by conventional neural networks. In this work, we propose an unsupervised graph neural network (GNN-1N) tailored for solving GCPs, which combines negative message passing with normal message passing to handle heterophily. Besides, a loss function with a utility-based objective and a convergence-based objective is proposed for unsupervised learning. Experimental results on public datasets show that GNN-1N outperforms five state-of-the-art peer algorithms.
In addition, a toy real-world application of graph coloring is given to further demonstrate the effectiveness of GNN-1N. Solving graph-based heterophilous problems with GNNs is still in its infancy. The following three improvements could be made to the proposed model. First, pre-/post-processing could be considered to further decrease the number of conflicts. Second, dynamic graph coloring problems are worth investigating, as the conditions may change in the real world. Finally, the fairness of color assignment is an interesting topic to examine. For example, each color should be used roughly the same number of times while ensuring that there is no conflict between connected nodes. For the taxi scheduling problem, each taxi should serve a similar number of customers. Therefore, fair coloring is of great practical importance.
2310.00728
Physics-Informed Graph Neural Network for Dynamic Reconfiguration of Power Systems
To maintain a reliable grid we need fast decision-making algorithms for complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution grid switch settings in real-time to minimize grid losses and dispatches resources to supply loads with available generation. DyR is a mixed-integer problem and can be computationally intractable to solve for large grids and at fast timescales. We propose GraPhyR, a Physics-Informed Graph Neural Network (GNN) framework tailored for DyR. We incorporate essential operational and connectivity constraints directly within the GNN framework and train it end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR task.
Jules Authier, Rabab Haider, Anuradha Annaswamy, Florian Dorfler
2023-10-01T17:02:29Z
http://arxiv.org/abs/2310.00728v2
# Physics-Informed Graph Neural Network for Dynamic Reconfiguration of Power Systems

###### Abstract

To maintain a reliable grid we need fast decision-making algorithms for complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution grid switch settings in real-time to minimize grid losses and dispatches resources to supply loads with available generation. DyR is a mixed-integer problem and can be computationally intractable to solve for large grids and at fast timescales. We propose GraPhyR, a Physics-Informed Graph Neural Network (GNN) framework tailored for DyR. We incorporate essential operational and connectivity constraints directly within the GNN framework and train it end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR task.

Graph Neural Network, Dynamic Reconfiguration, Physics-Informed Learning.

## I Introduction

The global energy landscape is rapidly evolving with the transition towards renewable energy generation. This transition brings numerous benefits for the climate, but it also presents challenges in effectively controlling and optimizing power systems with high penetration of intermittent renewable generation such as solar and wind. New operating schemes are needed to ensure efficient and reliable grid operations in the presence of intermittent generation. Significant research efforts focus on optimizing resource dispatch and load flexibility towards reducing costs and increasing grid efficiency; however, there remain efficiency gains to be had when co-optimizing grid topology. To this end, we propose _Dynamic Reconfiguration_ (DyR) in a distribution grid to increase operating efficiency by co-optimizing grid topology and resource dispatch. The distribution grid reconfiguration problem involves the selection of switch states (open/closed) to meet demand with available generation, while satisfying voltage and operating constraints. Grid reconfiguration can re-route power flows to reduce power losses [1], increase utilization of renewable generation [2, 3], and re-energize grids after contingencies. Presently, DyR is deployed for loss reduction in the EU [2], and for fault conditions in the US using rule-based control schemes. The widespread growth of distributed generation, storage, and electric vehicles creates the opportunity for DyR for loss reduction, whereby topology and dispatch decisions are made _fast and frequently_ in response to faster resource timescales; as solar generation varies, the topology is adapted to supply loads in close proximity to generation, thus reducing losses and improving voltage profiles across the grid. The DyR problem is a mixed integer program (MIP) due to the discrete nature of switch decisions. It is well known that MIPs are NP-hard (i.e., they cannot in general be solved in polynomial time) and thus may be computationally intractable for large-scale problems. A distribution substation may have 10 feeders, each with 5 switches, resulting in over \(10^{15}\) possible topologies. Even if operating constraints and load conditions render only \(1\%\) of these topologies feasible, the search space remains prohibitively large for traditional approaches. One option is to restrict the optimization to a single feeder; however, optimizing the topology over all feeders permits load transfer and generation export. Machine learning (ML) offers an alternative by shifting the computational burden to offline training, thereby making dynamic decision making via the online application of ML algorithms computationally feasible.
Recent works propose ML for solving MIPs and combinatorial optimization (CO) [4], either in an end-to-end fashion or to accelerate traditional solvers. Graphs play a central role in formulating many CO problems [5], representing paths between entities in routing problems, or interactions between variables and constraints in a general CO [6, 7]. The use of Graph Neural Networks (GNNs) is also being explored to leverage the underlying graph structure during training and identify common patterns in problem instances. The traveling salesman problem (TSP) is a fundamental problem in CO and a standard benchmark which has been extensively studied with traditional optimization techniques. Recently, GNNs have been used to solve the TSP with good performance and generalizability [8, 9, 10]. In this work we leverage GNNs to learn the power flow representation for reconfiguration. Grid reconfiguration for distribution grids has been studied with varying solution methodologies including knowledge-based algorithms and single loop optimization [1, 11], heuristic methods [12, 13], and reformulation as a convex optimization problem using big-\(\mathcal{M}\) constraints [14, 15, 16]. However, these methods are not computationally tractable for large-scale optimization in close-to-real-time applications, and they may be limited to passive grids (i.e., no local generation). Machine learning approaches for DyR have also been proposed [17, 3]. In [17] the DyR problem is formulated as a Markov decision process and solved using reinforcement learning. In [3] a light-weight physics-informed neural network is proposed as an end-to-end learning-to-optimize framework with certified satisfiability of the power physics. A physics-informed rounding layer explicitly embeds the discrete decisions within the neural framework. These approaches show potential, but both are limited to a given grid topology and switch locations. Our approach is similar to that of [3] wherein we embed discrete decisions directly within an ML framework. The main contribution of this paper is **GraPhyR**, a graph neural network (**Gra**) framework employing physics-informed rounding [3] (**PhyR**) for DyR in distribution grids. GraPhyR is an end-to-end framework that learns how to optimize, and it is enabled by four key architectural components:

**(1) A message passing layer that models switches as gates:** The gates are implemented as a value between zero and one to model switches over a continuous operating range. Gates control the flow of information through switches in the GNN, modeling the control of physical power flow between nodes.

**(2) A scalable local prediction method:** We make power flow predictions locally at every node in the grid using local features. The predictors are scale-free and so can generalize to grids of any topology and size.

**(3) A physics-informed rounding layer:** We embed the discrete open/closed decisions of switches directly within the neural framework. PhyR selects a grid topology for each training instance upon which GraPhyR predicts a feasible power flow and learns to optimize a given objective function.

**(4) A GNN that takes the electrical grid topology as input:** We treat the grid topology and switch locations as an input, which permits GraPhyR to learn the power flow representation across multiple possible distribution grid topologies within and across grids.
Thus GraPhyR can optimize topology and generator dispatch: (a) on multiple grid topologies seen during training, and (b) under varying grid conditions such as (un)planned maintenance of the grid. We demonstrate the performance of GraPhyR in predicting near-optimal and feasible solutions. We also show the effectiveness of GraPhyR in adapting to unforeseen grid conditions. The remainder of this paper is organized as follows. Section II presents DyR as an optimization problem. Section III presents the GraPhyR method and details the four key architectural components. Section IV presents the simulation results, and conclusions are drawn in Section V.

## II Reconfiguration as an Optimization Problem

We consider DyR of distribution grids with high penetration of distributed generation. We model the power physics using Linearized DistFlow [1] as below:

\[\min_{\mathbf{\psi}}\,f(\mathbf{x},\mathbf{\psi})=\sum_{(i,j)\in\mathcal{A}}(p_{ij}^{2}+q_{ij}^{2})R_{ij}\tag{1}\]

s.t.

\[p_{j}^{G}-p_{j}^{L}=\sum_{k:(j,k)\in\mathcal{A}\cup\mathcal{A}_{sw}}p_{jk}-\sum_{i:(i,j)\in\mathcal{A}\cup\mathcal{A}_{sw}}p_{ij},\quad\forall j\in\mathcal{N}\tag{2}\]
\[q_{j}^{G}-q_{j}^{L}=\sum_{k:(j,k)\in\mathcal{A}\cup\mathcal{A}_{sw}}q_{jk}-\sum_{i:(i,j)\in\mathcal{A}\cup\mathcal{A}_{sw}}q_{ij},\quad\forall j\in\mathcal{N}\tag{3}\]
\[v_{j}=v_{i}-(R_{ij}p_{ij}+X_{ij}q_{ij}),\quad\forall(i,j)\in\mathcal{A}\tag{4}\]
\[|v_{j}-v_{i}+R_{ij}p_{ij}+X_{ij}q_{ij}|\leq\mathcal{M}(1-y_{ij}),\quad\forall(i,j)\in\mathcal{A}_{sw}\tag{5}\]
\[\sum_{(i,j)\in\mathcal{A}_{sw}}y_{ij}=N-1-M\tag{6}\]
\[-\mathcal{M}y_{ij}\leq p_{ij}\leq\mathcal{M}y_{ij},\quad\forall(i,j)\in\mathcal{A}_{sw}\tag{7}\]
\[-\mathcal{M}y_{ij}\leq q_{ij}\leq\mathcal{M}y_{ij},\quad\forall(i,j)\in\mathcal{A}_{sw}\tag{8}\]
\[\underline{p}_{j}^{G}\leq p_{j}^{G}\leq\overline{p}_{j}^{G},\;\;\underline{q}_{j}^{G}\leq q_{j}^{G}\leq\overline{q}_{j}^{G},\quad\forall j\in\mathcal{N}\tag{9}\]
\[\underline{v}\leq v_{i}\leq\overline{v},\quad\forall i\in\mathcal{N}\tag{10}\]
\[y_{ij}\in\{0,1\},\quad\forall(i,j)\in\mathcal{A}_{sw}\tag{11}\]
\[|\delta_{\mathcal{A}}(j)|+\sum_{(i,k)\in\delta_{\mathcal{A}_{sw}}(j)}y_{ik}\geq 1,\quad\forall j\in\mathcal{N}\tag{12}\]

where \(\mathcal{N}\) is the set of nodes, \(\mathcal{A}\) and \(\mathcal{A}_{sw}\) are the sets of directed lines and switches, \(\delta(j)\) denotes the edges incident to node \(j\), \(\mathbf{x}\) collects the nodal loads \(\{p_{j}^{L},q_{j}^{L}\}\), and \(\mathbf{\psi}\) collects the decision variables: power flows \(p_{ij},q_{ij}\), voltages \(v_{i}\), generator dispatch \(p_{j}^{G},q_{j}^{G}\), and switch statuses \(y_{ij}\). \(R_{ij}\) and \(X_{ij}\) are the line resistance and reactance, and \(\mathcal{M}\) is the big-\(\mathcal{M}\) relaxation parameter. Constraints (2)-(3) enforce nodal power balance, (4)-(5) are the linearized Ohm's law across lines and (closed) switches, (6) and (11) enforce a radial topology with binary switch decisions, (7)-(8) force flows through open switches to zero, (9)-(10) are the generation and voltage limits, and (12) ensures grid connectivity.

## III GraPhyR

### _Message Passing_

The GNN models the distribution grid topology as an undirected graph, with switch embeddings modeling the switches in the electrical grid. The GNN's message passing layers incorporate these embeddings as gates, which enables GraPhyR to learn the representation of the linearized Ohm's law of (5) across multiple topologies in a physics-informed way. The inputs to the GNN are the grid topology and nodal loads, and the output is a set of node and switch embeddings which will be used to make reconfiguration, power flow, and voltage predictions.

#### Iii-A1 Grid Topology as Input Data for Graph Structure

An input to the GNN is the grid topology described by \(\mathcal{G}(\mathcal{N},\mathcal{A},\mathcal{A}_{sw})\), using which the GNN models the physical grid topology as an undirected graph \(\mathcal{G}(\mathcal{N},\mathcal{E},\mathcal{E}_{sw})\) with \(N\) nodes, \(M\) lines, and \(M_{sw}\) switches. Trivially, \(\mathcal{E}\) (\(\mathcal{E}_{sw}\)) represents the undirected communication links along the directed edges \(\mathcal{A}\) (\(\mathcal{A}_{sw}\)) to support message passing and extracting the problem representation in the embeddings. By including \(\mathcal{G}(\mathcal{N},\mathcal{A},\mathcal{A}_{sw})\) as an input to the GNN, our GraPhyR framework is able to adapt to changing grid conditions, rather than requiring a large training dataset with multiple scenarios.

#### Iii-A2 Initial Node, Line, and Switch Embeddings

The second input to the GNN is the load data \(\mathbf{x}^{0}\), which defines the node embeddings.
The load data contains the active and reactive power load \(p_{i}^{L}\) and \(q_{i}^{L}\) for each node \(i\) in the grid and thus determines the initial node embedding \(x_{i}^{0}\) of every node \(i\) in the corresponding graph, where \(\mathbf{x}^{0}=\left[x_{1}^{0},\ldots,x_{N}^{0}\right]^{T}=\left[(p_{1}^{L},q_{1}^{L}),\ldots,(p_{N}^{L},q_{N}^{L})\right]^{T}\). The line embeddings are set to one and are not updated by the message passing layers. The switch embeddings determine the value of the gate and are randomly initialized, similar to randomly initializing weights in a neural network. The switch embeddings are updated through the message passing layers. Initial line and switch embeddings are given by \(z_{ij}^{0}\), \(\forall\{i,j\}\in\mathcal{E}\cup\mathcal{E}_{sw}\).

#### Iii-A3 Message Passing Layers

In each hidden layer of the GNN, the nodes in the graph iteratively aggregate information from their local neighbors. Deeper GNNs have more hidden layers and thus have node embeddings which contain information from further reaches of the graph. For each node embedding \(x_{i}^{0}\) in the graph, the first message passing layer is defined in (13), where \(\mathcal{N}_{i}\) denotes the set of neighboring nodes of node \(i\). For each switch embedding \(z_{ij}^{0}\) in the graph, the first message passing layer is defined in (15). \[x_{i}^{1}=ReLU(W_{1}^{0}x_{i}^{0}+\sum_{j\in\mathcal{N}_{i}}\{W_{2}^{0}\cdot f(z_{ij}^{0})\cdot x_{j}^{0}\}) \tag{13}\] \[f(z_{ij}^{0})=\begin{cases}sig(z_{ij}^{0})&\text{if }\{i,j\}\in\mathcal{E}_{sw}\\ 1&\text{otherwise}\end{cases}\] (14) \[z_{ij}^{1}=ReLU(W_{3}^{0}(x_{i}^{0}+x_{j}^{0})+W_{4}^{0}z_{ij}^{0}),\forall\{i,j\}\in\mathcal{E}_{sw} \tag{15}\] For the remaining message passing layers, denoted by \(l\in\{1,2,\ldots,\mathcal{L}-1\}\), a residual connection is added to improve prediction performance and training efficiency [18]. The resulting node and switch embeddings are: \[x_{i}^{l+1}=x_{i}^{l}+ReLU(W_{1}^{l}x_{i}^{l}+\sum_{j\in\mathcal{N}_{i}}\{W_{2}^{l}\cdot f(z_{ij}^{l})\cdot x_{j}^{l}\}) \tag{16}\] \[f(z_{ij}^{l})=\begin{cases}sig(z_{ij}^{l})&\text{if }\{i,j\}\in\mathcal{E}_{sw}\\ 1&\text{otherwise}\end{cases}\] (17) \[z_{ij}^{l+1}=z_{ij}^{l}+ReLU(W_{3}^{l}(x_{i}^{l}+x_{j}^{l})+W_{4}^{l}z_{ij}^{l}),\forall\{i,j\}\in\mathcal{E}_{sw} \tag{18}\] The line embeddings are trivially set to one (a minimal code sketch of one such gated layer is given at the end of this subsection). We omit residual connections in the first message passing layer to expand the input embeddings \(x_{i}^{0}\), with the dimensions of the input data, to an arbitrarily large hidden embedding dimension \(h\). This allows the GNN to learn more complex representations by extracting features in a higher dimensional space.

#### Iii-A4 Gates

We implement gates in the message passing layer by applying a sigmoid to the switch embeddings, as in (14) and (17). The function \(f(z_{ij})\) acts like a filter for the message passing between two neighboring nodes, attenuating the information signal if the switch is open. The gate models the switches as a continuous switch (e.g., a household light dimmer), controlling information flow in the same way a switch controls physical power flow between two nodes.

#### Iii-A5 Global Graph Information

In the final message-passing layer, we calculate a global graph embedding, \(x_{G}^{\mathcal{L}}=\sum_{i=1}^{N}x_{i}^{\mathcal{L}}\). This embedding offers information access across the graph and can reduce the need for an excessive number of message passing layers for sparse graphs, such as those in power systems.
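To make the gated update concrete, here is a minimal dense PyTorch sketch of one hidden layer implementing Eqs. (16)-(18); the dense adjacency representation, the explicit loop over switch edges, and all names are our assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class GatedMPLayer(nn.Module):
    """One hidden message passing layer, cf. Eqs. (16)-(18)."""
    def __init__(self, h):
        super().__init__()
        self.W1 = nn.Linear(h, h, bias=False)  # W_1^l: self term of nodes
        self.W2 = nn.Linear(h, h, bias=False)  # W_2^l: neighbor messages
        self.W3 = nn.Linear(h, h, bias=False)  # W_3^l: endpoint sum for switches
        self.W4 = nn.Linear(h, h, bias=False)  # W_4^l: self term of switches

    def forward(self, x, z, A_line, sw_edges):
        # x: (n, h) node embeddings; z: (n_sw, h) switch embeddings;
        # A_line: (n, n) 0/1 adjacency over fixed lines (gate = 1, Eq. (17));
        # sw_edges: list of (i, j) node index pairs, one per switch.
        msg = A_line @ self.W2(x)                 # ungated line messages
        gate = torch.sigmoid(z)                   # element-wise gates, Eq. (17)
        for e, (i, j) in enumerate(sw_edges):     # gated switch messages
            msg[i] = msg[i] + self.W2(gate[e] * x[j])
            msg[j] = msg[j] + self.W2(gate[e] * x[i])
        x_new = x + torch.relu(self.W1(x) + msg)  # residual node update, Eq. (16)
        pair = torch.stack([x[i] + x[j] for i, j in sw_edges])
        z_new = z + torch.relu(self.W3(pair) + self.W4(z))  # switch update, Eq. (18)
        return x_new, z_new
```

In the first layer (Eqs. (13)-(15)), the same structure applies without the residual connections.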
Reducing the number of message passing layers in this way improves the computational efficiency.

Fig. 1: GraPhyR: proposed framework to solve the DyR problem.

### _Prediction_

After the \(\mathcal{L}\) message passing layers, the embeddings extracted from the input data are used to predict the switch open/close statuses and a subset of the power flow variables, denoted as independent variables.

#### Iii-B1 Variable Space Partition

We partition the variable space into independent and dependent variables. The independent variables constitute the active power flows \(p_{ij}\), nodal voltages \(v_{i}\), and switch open/close statuses \(y_{ij}\). The dependent variables constitute the reactive power flows \(q_{ij}\) and the nodal generation \(\{p_{i}^{G},q_{i}^{G}\}\). We leverage techniques for variable space reduction to calculate the dependent variables from the independent variables, using constraints (2)-(12). This step ensures that the power physics constraints have certified satisfiability, as further discussed in Section III-E. This partition is non-unique. It critically depends on the structure of the given problem, which determines the relationship between the sets of variables, and on the neural architecture, which determines the relationship between inputs, predictions, and consecutive neural layers. **We further advocate that the neural architecture itself must be physics-informed, to embed domain knowledge and physical constraints directly into the neural network, as we have done in GraPhyR.**

#### Iii-B2 Local Prediction Method

Our prediction method leverages two key observations: (i) the relationship between power flows and voltages is the same for any node-edge pair and is modelled by the physics equations (2)-(5); (ii) the binary nature of switches makes them inherently different from distribution lines. Using these, we define two local prediction methods which use multi-layer perceptrons: a line predictor (L-predictor) and a switch predictor (S-predictor), shown in Fig. 3. The L-predictor in (19) predicts the power flow and the voltages of the two nodes connected by the line using the node and global embeddings. The S-predictor additionally predicts the probability of the switch being closed, using the switch embedding \(z_{ij}^{\mathcal{L}}\) in addition to the node and global embeddings, as in (20). All predictions are denoted with a hat (i.e., \(\hat{v}_{i}\)) and will be processed in subsequent layers to render the final topology and dispatch decisions. \[[\hat{p}_{ij},\hat{v}_{i}^{j},\hat{v}_{j}^{i}]=\text{L-predictor}[x_{i}^{\mathcal{L}},x_{j}^{\mathcal{L}},x_{G}^{\mathcal{L}}],\qquad\forall(i,j)\in\mathcal{A} \tag{19}\] \[[\hat{p}_{ij},\hat{v}_{i}^{j},\hat{v}_{j}^{i},\hat{y}_{ij}]=\text{S-predictor}[x_{i}^{\mathcal{L}},x_{j}^{\mathcal{L}},z_{ij}^{\mathcal{L}},x_{G}^{\mathcal{L}}],\;\forall(i,j)\in\mathcal{A}_{sw} \tag{20}\] Our local predictors exploit the full flexibility of GNNs. They are permutation invariant to the input graph data; they are independent of the size of the graph (scale-free); and they are smaller than the corresponding global predictor for the same grid. The first feature means our framework is robust to changes in input data. The last two features mean our framework is lightweight and scalable. This would not be possible with a global predictor which predicts all independent variables from node and switch embeddings across the graph. The size of the input and output layers of a global predictor would depend on the size of the graph and the number of switches, which is the limitation of [3].
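As an illustration, the two predictors can be read as small MLPs over concatenated embeddings. In this sketch the input widths \(3h\) and \(4h\) follow from the concatenations in (19)-(20), the hidden sizes 24 and 32 are taken from Section IV, and everything else (module structure, and the omission of the dropout and batch normalization the authors use) is our assumption:

```python
import torch
import torch.nn as nn

class LPredictor(nn.Module):
    """Line predictor, Eq. (19): [x_i, x_j, x_G] -> [p_ij, v_i^j, v_j^i]."""
    def __init__(self, h=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * h, 24), nn.ReLU(), nn.Linear(24, 3))

    def forward(self, x_i, x_j, x_G):
        return self.mlp(torch.cat([x_i, x_j, x_G], dim=-1))

class SPredictor(nn.Module):
    """Switch predictor, Eq. (20): adds z_ij, and outputs a closing probability."""
    def __init__(self, h=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * h, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x_i, x_j, z_ij, x_G):
        return self.mlp(torch.cat([x_i, x_j, z_ij, x_G], dim=-1))
```

Because the input width depends only on the embedding dimension \(h\), the same predictor weights can be shared across every line and switch of any grid, which is what makes the predictors scale-free.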
Table I summarizes the sizes of the local and global predictors for the reconfiguration problem, where \(h\) is the dimension of the hidden graph embeddings.

### _Voltage Aggregation and Certified Satisfiability of Limits_

The local predictions obtained from the L-predictor and S-predictor generate multiple instances of voltage predictions for each node, as indicated by a superscript. Specifically, the number of instances corresponds to the degree of node \(i\), \(|\delta_{\mathcal{E}\cup\mathcal{E}_{sw}}(i)|\). We aggregate the voltage predictions to a unique value for each node in the grid as \(\hat{v}_{i}=\frac{1}{|\delta_{\mathcal{E}\cup\mathcal{E}_{sw}}(i)|}\sum_{j:\{i,j\}\in\mathcal{E}\cup\mathcal{E}_{sw}}\hat{v}_{i}^{j}\). The voltage predictions are then scaled onto the box constraints (10) with \(v_{i}=\underline{v}\cdot(1-\hat{v}_{i})+\overline{v}\cdot\hat{v}_{i}\). Notably, by selecting voltages as independent variables in our variable space partition, we certify that the voltage limits across the grid will always be satisfied, a critical aspect of power systems operation.

### _Topology Selection using Physics-Informed Rounding_

The S-predictor provides probabilistic predictions for the open/close decision of each switch. We recover binary decisions using a physics-informed rounding (PhyR) algorithm [3]. We exploit the radiality of distribution grids, which requires \(\mathcal{S}=N-1-M\) switches to be closed so that there are always \(N-1\) conducting lines. The PhyR method selects the \(\mathcal{S}\) switches with the largest probabilities \(\hat{y}_{ij}\) and closes them by setting the corresponding \(y_{ij}=1\); the remaining switches are opened, \(y_{ij}=0\). This enforces (6) and (11). Note that as distribution grid technologies advance, bidirectional and loop flows may easily be incorporated in new protection schemes. This would remove the radiality constraint, which GraPhyR can accommodate with suitable modifications to PhyR.

Fig. 2: Message passing layers, where switches are denoted by red dashed lines. The node and switch embeddings are represented by blue and red colored blocks respectively, where the number of squares per block indicates the dimension of the embeddings \(h\).

Fig. 3: Local predictions made by the switch and line predictors use the node and switch embeddings extracted after \(\mathcal{L}\) message passing layers.

A note must be made about the practical implementation: PhyR is implemented with \(\min\) and \(\max\) operators which return gradients of \(0\), "killing" the gradient information necessary for backpropagation. We preserve these gradients in the computational graph by setting all but one switch to binary values: the switches with the \(\mathcal{S}-1\) largest probabilities are set closed, the less probable ones are set open, and training guides the remaining switch towards a binary value.

### _Certified Satisfiability of Power Physics_

The final neural layer recovers the full variable space and enforces the power flow constraints through open switches. The following steps happen sequentially:

1. Given \(\mathbf{y}\) and the independent variables, we compute the reactive power flows \(\hat{q}_{ij}\) using (4)-(5).
2. Given \(\mathbf{y}\), we enforce (7) and (8) as \(p_{ij}=(\hat{p}_{ij}-0.5)\cdot 2\mathcal{M}y_{ij}\) and \(q_{ij}=(\hat{q}_{ij}-0.5)\cdot 2\mathcal{M}y_{ij}\), respectively. By explicitly setting flows through open switches to zero, we enforce these constraints in a hard way.
3. The active and reactive nodal generation is calculated using (2) and (3), respectively.
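A minimal sketch of the rounding step just described, under our reading of Sections III-D and III-E (the function name, tensor layout, and gradient-preservation detail are illustrative, not the authors' code):

```python
import torch

def phyr(y_hat: torch.Tensor, S: int) -> torch.Tensor:
    """Physics-informed rounding: close the S most probable switches,
    keeping the S-th one continuous so a gradient path survives."""
    idx = torch.argsort(y_hat, descending=True)
    y = torch.zeros_like(y_hat)          # all switches open by default
    y[idx[:S - 1]] = 1.0                 # S-1 most probable: closed (binary)
    y[idx[S - 1]] = y_hat[idx[S - 1]]    # one switch left soft for backprop
    return y

# Hard enforcement of (7)-(8) on each switch branch, cf. step 2 above:
# p_ij = (p_hat - 0.5) * 2 * M * y  forces zero flow through open switches.
```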
### _Loss Function_

The neural network learns to optimize using an unsupervised framework. It has two objectives: to minimize the line losses in (1), and to minimize inequality constraint violations of the generation constraint (9) and the connectivity constraints (12). Denoting these constraints as \(h(\mathbf{x},\boldsymbol{\psi})\leq 0\), we regularize the loss function using a soft-loss penalty with hyperparameter \(\lambda\). The loss function is \(l=f(\mathbf{x},\boldsymbol{\psi})+\lambda||\text{max}\{0,h(\mathbf{x},\boldsymbol{\psi})\}||_{2}\).

**Remark 1**: _The inequality constraints (7), (8), and (10) have certified satisfiability by design of GraPhyR._

**Remark 2**: _Our loss function is unsupervised and does not need the optimal solutions, which may be unknown or computationally prohibitive to compute._

## IV Experimental Results

### _Dataset and Experiment Setup_

We evaluate GraPhyR on a canonical distribution grid, BW-33 [1], with 33 nodes, 29 lines, and 8 switches. We also generate a variant of BW-33, called \(\mathcal{G}_{1}\), with 33 nodes, 27 lines, and 10 switches. We use the dataset from [3], which introduces distributed solar generation in the grid with a penetration of \(25\%\) generation-to-peak load. Loads are perturbed about their nominal values, as typically done in the literature. The two networks are shown in Fig. 4. The dataset has 8600 data points per grid, which are divided as \(80/10/10\) for training/validation/testing. We implement GraPhyR using PyTorch and train on the MIT supercloud [19]. GraPhyR has \(4\) message passing layers, each with dimension \(8\) (\(\mathcal{L}=4,h=8\)). The L-predictor and S-predictor have a single hidden layer with dimensions 24 and 32, respectively. We use \(10\%\) dropout, batch normalization, and ReLU activation in both predictors. The soft loss hyperparameter is \(\lambda=100\), the big-\(\mathcal{M}\) relaxation parameter is \(0.5\) per unit (p.u.), and the voltage bounds are \(\underline{v}=0.83,\overline{v}=1.05\) p.u., which adapt to the lossy behavior of BW-33 [1, 3]. We use the ADAM optimizer with a learning rate of \(\gamma=5e^{-4}\) and a batch size of \(200\), and train for 1500 epochs. We evaluate the performance of the neural framework using a committee-of-networks approach: we train 10 models with independent weight initialization and average the predictions across all models.

### _Performance Metrics_

We adopt the performance metrics defined in [3] to assess prediction performance. The asterisk notation (e.g., \(v^{*}\)) denotes the optimal solution obtained from a MIP solver.

**Dispatch error:** optimality metric of the mean-squared error (MSE) in the optimal generator dispatch: \(\frac{1}{N}\sum_{j\in\mathcal{N}}\big[(p_{j}^{G}-p_{j}^{G*})^{2}+(q_{j}^{G}-q_{j}^{G*})^{2}\big]\).

**Voltage error (VoltErr):** optimality metric of the MSE in the nodal voltage prediction: \(\frac{1}{N}\sum_{j\in\mathcal{N}}(v_{j}-v_{j}^{*})^{2}\).

**Topology error:** optimality metric of the Hamming distance [20] between two topologies, calculated as the ratio of switch decisions not in the optimal position: \(\frac{1}{M_{sw}}\sum_{(i,j)\in\mathcal{A}_{sw}}(y_{ij}-y_{ij}^{*})^{2}\).

**Inequality violation:** feasibility metric of the magnitude of violations in the constraint set, measuring the mean and maximum as \(\frac{1}{|h|}\sum_{k}\max\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}\) and \(\max_{k}\{\max\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}\}\).
**Number of violations exceeding a threshold:** feasibility metric of the number of inequality constraints violated by more than a threshold \(\epsilon\): \(\sum_{k}\mathbb{1}\{\max\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}>\epsilon\}\).

### _Case (a). GraPhyR with Local vs. Global Predictors_

We first compare GraPhyR with local predictors to a variant with a global predictor, termed Global-GraPhyR. The global predictor determines all independent variables (real power flows, voltages, switch probabilities) using all node and line embeddings. We implement the global predictor with a single hidden layer of the same size as the input dimension. The global predictor has input/output dimensions of 328/78, as compared to the L-predictor and S-predictor with dimensions of 24/3 and 32/4, respectively. Note that the global predictor predicts one voltage per node, so voltage aggregation is not needed. We also compare the performance of GraPhyR with that of prior work which uses a simple neural network with two hidden layers [3]: _SiPhyR_, which employs PhyR, and _InSi_, which approximates a step function.

Fig. 4: Grid topology of BW-33 (left) and the synthetic \(\mathcal{G}_{1}\) (right). Switches are indicated with green dashed lines. Solar generator locations are indicated with yellow nodes.

Table II-(a) shows the prediction performance of these methods. We first observe that **the GNN frameworks achieve lower dispatch error**, with Global-GraPhyR outperforming SiPhyR by two orders of magnitude. The GNN uses topological information to optimize the dispatch and satisfy loads. Second, **the PhyR-based frameworks achieve lower topology errors**, by up to 10%, by embedding the discrete decisions directly within the ML framework. However, the topology error remains high (\(>30\%\)), demonstrating the challenge of learning to optimize this combinatorial task. Finally, **SiPhyR and Global-GraPhyR achieve the best performance across feasibility metrics**, with lower magnitudes and numbers of inequality violations. Notably, the maximum inequality violation is an order of magnitude higher for InSi, which does not benefit from PhyR, and for GraPhyR, which makes local predictions. This is expected. First, PhyR explicitly accounts for binary variables within the training loop to enable end-to-end learning: PhyR selects a feasible topology upon which the neural framework predicts a near-feasible power flow solution. Second, GraPhyR sacrifices some prediction performance for the flexibility to train and predict on multiple graphs. Figure 5 plots the mean inequality violations for GraPhyR. The constraints are always respected for voltage (by design) and connectivity (by constraint penalty). Nodal generation constraints are frequently violated, as the lowest-cost (lowest line losses) solution is to supply all loads locally. We next test the limits of topology prediction within our ML framework by comparing with a semi-supervised approach. The loss function includes a penalty on the switch status: \[l_{sm}=f(\mathbf{x},\boldsymbol{\psi})+\lambda||\text{max}\{0,h(\mathbf{x},\boldsymbol{\psi})\}||_{2}+\mu||\mathbf{y}-\mathbf{y}^{*}||_{2} \tag{21}\] Table II-(a) shows the performance of the semi-supervised GraPhyR.
We also include the results of Supervised-SiPhyR from [3], which uses a regression loss for voltages, generation, and switch statuses, plus an inequality constraint violation penalty: \[\begin{split} l_{sup}(\mathbf{x},\boldsymbol{\psi})=&\|(\mathbf{v}-\mathbf{v}^{*})^{2}+(\mathbf{p}^{\mathbf{G}}-\mathbf{p}^{\mathbf{G}*})^{2}+(\mathbf{q}^{\mathbf{G}}-\mathbf{q}^{\mathbf{G}*})^{2}\|_{2}^{2}\\ &+\|(\mathbf{y}-\mathbf{y}^{*})^{2}\|_{2}^{2}+\lambda||\text{max}\{0,h(\mathbf{x},\boldsymbol{\psi})\}||_{2}\end{split} \tag{22}\] The results show that Semi-supervised GraPhyR outperforms Supervised-SiPhyR on topology error, achieving near-zero error. This substantial difference can be attributed to the GNN, which embeds topological data directly within the framework. Although these (semi-)supervised approaches achieve good performance, they are not practicable: they require access to the optimal solutions, which may be computationally prohibitive to generate across thousands of training data points. A note must be made on computational time. Solving the DyR problem using Gurobi (a commercial MIP solver) takes on average 201 milliseconds per instance for BW-33, and 18 seconds per instance for a 205-node grid. Actual computational times vary significantly with load conditions which stress grid voltages (e.g., a 17-fold increase for the 205-node grid during high-load periods [3]). In contrast, the inference time of GraPhyR is only 84 milliseconds for a batch of 200 instances.

### _Case (b). Prediction Performance on Multiple Grids_

A key feature of GraPhyR is its ability to solve the DyR problem across multiple grid topologies. We trained and tested GraPhyR on two grids (BW-33 and \(\mathcal{G}_{1}\)) that have the same number of nodes but different numbers of lines and switches. Table II-(b) shows these results. The performance of GraPhyR on the two grids is similar to that of GraPhyR on a single grid, showing that GraPhyR can learn the power flow representation across multiple topologies and across multiple grids.

Fig. 5: Magnitude of the inequality violations for GraPhyR. The constraint sets on nodal generation, voltage limits, and connectivity constraints are separated by black vertical lines.

### _Case (c). Adapting to Changing Grid Conditions_

We next test GraPhyR on changing grid conditions, such as (un)planned maintenance by the grid operator or switch failure. Since power flows are highly correlated with the grid topology, changes in the set of feasible topologies due to maintenance or equipment failure can significantly change the prediction accuracy. Rather than training on multiple scenarios, we train only on the BW-33 grid under normal operating conditions and test on cases where a switch is required to be open or closed. The results are shown in Table II-(c). Generally, the dispatch error, voltage error, and average inequality violation magnitudes remain similar to those under normal operation. However, there is a notable increase in the number of inequality violations and, when forcing a switch open, an order-of-magnitude increase in the maximum inequality violation. Forcing a switch open removes an edge from the GNN graph. The resulting graph is more sparse, reducing access to information during message passing and changing the information contained in the node and switch embeddings. The topology error is more nuanced. When switch 36 is closed, there is an increase in the voltage and topology errors. This is because, without any operator requirements on switch statuses, switch 36 remains optimally open for all load conditions.
Thus, when switch 36 is required to be open, there is a significant decrease in topology error, by almost 10%. Since we did not train on other scenarios, GraPhyR struggles to optimize the topology and predict voltages when the grid conditions deviate significantly from the training data, such as when switch 36 is closed. Similar performance degradation happens when switch 35 is required to be open; this switch is optimally closed for all load conditions. Interestingly, the status of switch 10 (open or closed) does not affect the topology error, although this switch is typically closed in the training data. There may be multiple (near-)optimal topologies with similar objective values. Regularizing the dataset or the performance metrics against these multiple solutions may be necessary to improve prediction performance.

## V Conclusion

We developed GraPhyR, an end-to-end physics-informed graph neural network framework to solve the dynamic reconfiguration problem. We model switches as gates in the GNN message passing, embed discrete decisions directly within the framework, and use local predictors to provide scalable predictions. Our simulation results show that GraPhyR outperforms methods without GNNs in learning to predict optimal solutions and offers significant speed-up compared to traditional MIP solvers. Furthermore, our approach adapts to unseen grid conditions, enabling real-world deployment. Future work will investigate the scalability of GraPhyR to larger grids (200+ nodes), approaches to reduce inequality constraint violations, and regularization strategies to improve topology prediction. Finally, further efforts are needed to develop good datasets with representative time-series data for distribution grids.
2308.11335
Graph Neural Network-Enhanced Expectation Propagation Algorithm for MIMO Turbo Receivers
Deep neural networks (NNs) are considered a powerful tool for balancing the performance and complexity of multiple-input multiple-output (MIMO) receivers due to their accurate feature extraction, high parallelism, and excellent inference ability. Graph NNs (GNNs) have recently demonstrated outstanding capability in learning enhanced message passing rules and have shown success in overcoming the drawback of inaccurate Gaussian approximation of expectation propagation (EP)-based MIMO detectors. However, the application of the GNN-enhanced EP detector to MIMO turbo receivers is underexplored and non-trivial due to the requirement of extrinsic information for iterative processing. This paper proposes a GNN-enhanced EP algorithm for MIMO turbo receivers, which realizes the turbo principle of generating extrinsic information from the MIMO detector through a specially designed training procedure. Additionally, an edge pruning strategy is designed to eliminate redundant connections in the original fully connected model of the GNN, utilizing the correlation information inherent in the EP algorithm. Edge pruning reduces the computational cost dramatically and enables the network to focus more attention on the weights that are vital for performance. Simulation results and complexity analysis indicate that the proposed MIMO turbo receiver outperforms the EP turbo approaches by over 1 dB at the bit error rate of $10^{-5}$, exhibits performance equivalent to state-of-the-art receivers with 2.5 times shorter running time, and adapts to various scenarios.
Xingyu Zhou, Jing Zhang, Chao-Kai Wen, Shi Jin, Shuangfeng Han
2023-08-22T10:24:42Z
http://arxiv.org/abs/2308.11335v1
# Graph Neural Network-Enhanced Expectation Propagation Algorithm for MIMO Turbo Receivers

###### Abstract

Deep neural networks (NNs) are considered a powerful tool for balancing the performance and complexity of multiple-input multiple-output (MIMO) receivers due to their accurate feature extraction, high parallelism, and excellent inference ability. Graph NNs (GNNs) have recently demonstrated outstanding capability in learning enhanced message passing rules and have shown success in overcoming the drawback of inaccurate Gaussian approximation of expectation propagation (EP)-based MIMO detectors. However, the application of the GNN-enhanced EP detector to MIMO turbo receivers is underexplored and non-trivial due to the requirement of extrinsic information for iterative processing. This paper proposes a GNN-enhanced EP algorithm for MIMO turbo receivers, which realizes the turbo principle of generating extrinsic information from the MIMO detector through a specially designed training procedure. Additionally, an edge pruning strategy is designed to eliminate redundant connections in the original fully connected model of the GNN, utilizing the correlation information inherent in the EP algorithm. Edge pruning reduces the computational cost dramatically and enables the network to focus more attention on the weights that are vital for performance. Simulation results and complexity analysis indicate that the proposed MIMO turbo receiver outperforms the EP turbo approaches by over 1 dB at the bit error rate of \(10^{-5}\), exhibits performance equivalent to state-of-the-art receivers with 2.5 times shorter running time, and adapts to various scenarios.

Expectation propagation, graph neural network, MIMO turbo receiver, extrinsic information.

## I Introduction

Multiple-input multiple-output (MIMO) has the potential to improve the link throughput by orders of magnitude and has become the key enabling technology for modern wireless communication systems that need to adapt to tremendous growth in transmission rate demand and network scale. To achieve the full benefits of MIMO technology, there is a strong demand for computationally efficient receiver designs, considering the increasing number of antennas used. Several suboptimal linear detectors, such as the zero-forcing and linear minimum mean square error (LMMSE) algorithms, have been popular among existing MIMO detectors [2] because of their low computational cost. However, their substantial performance loss compared with the maximum likelihood (ML) detector has restricted the application of linear detectors in future communication systems. By contrast, the powerful sphere decoder (SD) [3] promises performance equivalent to that of the optimal ML detector. However, it is constrained to MIMO systems with a limited number of antennas because of its exponential worst-case complexity [4]. Iterative detectors based on message passing (MP), specifically approximate MP (AMP) [5] and expectation propagation (EP) [6, 7, 8], have become promising strategies to approximate the ML detector with moderate complexity. AMP is favored in addressing large-scale MIMO detection problems because its complexity is only quadratic in the system size. However, AMP is Bayes-optimal only when the channel matrix follows an independent and identically distributed (i.i.d.) sub-Gaussian distribution [9], and it degrades significantly under realistic ill-conditioned channels.
Orthogonal AMP (OAMP) [10] and vector AMP (VAMP) [11] are powerful strategies that have been developed to address the limitations of AMP. They have been proven to achieve the Bayes-optimal performance for general unitarily-invariant channel matrices under the large system limit [12]. However, their performance tends to degrade in realistic finite-dimensional MIMO systems, which are the main focus of this paper.1

Footnote 1: In this paper, our focus is on moderately sized spatial-multiplexing MIMO systems, such as \(4\times 4\) and \(16\times 16\) configurations, rather than massive MIMO. This particular setup is commonly found in current wireless standards and has received significant attention and research efforts [2].

EP relaxes the constraint on the channel matrix and outperforms the AMP detector over a wide range of MIMO channels by factorizing the posterior belief with Gaussian distributions [13]. Furthermore, EP demonstrates superior performance in small- or medium-sized MIMO systems as compared to OAMP/VAMP. This is attributed to EP's utilization of element-wise variances instead of the scalar variance employed in OAMP/VAMP [14]. Iterative detection and decoding (IDD), or equivalently, the MIMO turbo receiver, can be used to further improve detection accuracy, which is common practice in current communication systems. Moreover, EP-based turbo receivers have been widely applied [15, 16, 17, 18]. However, EP detectors suffer a substantial gap to the ML performance because of the inaccuracy of the Gaussian approximation in practical MIMO systems with high spatial correlation and strong interference. Recently, deep learning (DL) has demonstrated its remarkable capability of overcoming conventional challenges in wireless communications [19]. In particular, deep neural networks (NNs) offer the possibility to address existing gaps in iterative MIMO detectors and promise great performance and efficiency in receiver design [20, 21, 22, 23]. The authors of [20] developed a model-driven detection network (DetNet) by applying a projected gradient descent algorithm, and DetNet achieves the same accuracy as the AMP detector with enhanced robustness and lower complexity. However, DetNet requires substantial training data and performs poorly under high-order modulation or in small-sized MIMO systems, which restricts its applications. Conventional MP-based detectors deteriorate severely in practical MIMO systems because their prerequisites fail to hold. An OAMP network (OAMPNet) was constructed in [21] to compensate for this performance loss by utilizing DL. However, the OAMPNet experiences performance degradation in real-world channels with strong correlations [22]. More powerful NNs were introduced to enhance MP-based detectors by using highly parameterized models, including MMNet in [22] and the recurrent equivariant MIMO detector in [23]. However, such schemes entail bulky detection networks with an excessive number of parameters to be trained. Furthermore, the EP detector was unfolded in [18, 24] to derive a detection network with a few trainable damping factors. In this way, fast convergence can be achieved, and inefficient hand-crafted tuning processes can be avoided. Graph NNs (GNNs) provide an advanced technique for dealing with graph-structured data, and they have been widely applied to inference tasks in wireless communications [25].
Recently, GNN-based MIMO detection [26, 27] has also attracted great attention because of the GNN's ability to enhance the MP solution by incorporating DL [28]. The authors of [26] modeled the MIMO detection problem by the pair-wise Markov random field (MRF) and solved the corresponding maximum _a posteriori_ (MAP) inference task by learning an enhanced MP algorithm based on a GNN. Furthermore, the authors of [27] developed a GNN-enhanced EP detector called GEPNet, which introduced a GNN to improve the posterior distribution approximation and exhibited a significant performance advantage over the EP and state-of-the-art NN-based detectors in uncoded systems. However, the application of the GEPNet to turbo iterative processing is unexplored in [27], which is not a trivial issue due to the requirement for extrinsic information, rather than _a posteriori_ probability (APP), to ensure a stable convergence. Furthermore, the dense connections in the fully connected (FC) MRF model result in a GNN with a great deal of redundancy, which hinders the efficient implementation of the detector. In this paper, we develop an extrinsic GNN-aided EP network (EXT-GEPNet) for MIMO turbo receivers. We go beyond the design of GEPNet for uncoded systems [27] and follow the key idea of EP-based turbo receivers [15, 16, 17, 18] to construct the turbo structure for GEPNet with soft inputs and soft outputs, enabling sufficient utilization of _a priori_ information from the channel decoder to improve the _a posteriori_ estimates. We observe that the original GEPNet [27] fails to generate reliable extrinsic information when applied to turbo iterative reception, leading to poor performance. Hence, we customize a training scheme inspired by [29] to construct an EXT-GEPNet detector that satisfies the turbo principle [30] of forwarding extrinsic information. The training scheme addresses the limitations of the original GEPNet and establishes a fine-tuned EXT-GEPNet that can be integrated into the developed turbo structure to realize IDD with great flexibility and remarkable performance. We also reduce the computational complexity of the original GEPNet by designing an edge pruning scheme to simplify the GNN operations to a large extent while still maintaining excellent performance. The contributions of this paper are summarized as follows:
* _Design of an EXT-GEPNet-based turbo receiver._ Through intensive simulation studies, we discovered that the original GEPNet cannot generate desired extrinsic outputs via the conventional strategy used in the MAP detector. To address this issue, we design an open-loop training scheme to derive an EXT-GEPNet that produces the extrinsic outputs required by the turbo procedure. This scheme is formulated based on a preliminary NN and the requirement for extrinsic information [30], i.e., not coupling with the priors, to obtain target extrinsic log-likelihood ratio (LLR) samples as labels for training the final EXT-GEPNet. The fine-tuned network can be plugged into the developed turbo structure to construct the EXT-GEPNet-based turbo receiver, which effectively overcomes the correlation problems when using the original GEPNet by preventing the same information from being counted twice. The proposed scheme is more flexible than directly training through the IDD procedure, as it does not rely on the choice of channel codes.
* _Edge pruning to reduce the complexity._ To relieve the complexity of the MP process, we remove redundant edges in the latent FC graph of the GNN.
Unlike existing NN pruning schemes that simply drop the network's weights with small magnitudes [28, 31], we perform edge pruning utilizing the correlation information among the to-be-estimated variables inherent in the EP iterations, inspired by [32]. The proposed network can complete the detection with lower computational cost and even achieve performance gain after pruning due to the reduction of ineffective connections and avoidance of overfitting.
* _Comprehensive performance and complexity evaluation._ We validate our scheme with a wide range of numerical simulations under different scenarios that benchmark against a series of baselines. We also analyze the computational complexity of the proposed turbo receiver. Simulation results show that the proposed receiver has significant gain over the turbo approaches supported by EP and the original GEPNet and achieves comparable or even better performance with substantially lower running time than the single tree-search SD (STS-SD)-based receiver [3]. Furthermore, the proposed receiver adapts well to different channels and channel codes and exhibits robustness to channel estimation errors.

_Notations:_ Boldface letters denote column vectors or matrices. \(\mathbf{A}^{T}\) and \(\mathbf{A}^{\dagger}=(\mathbf{A}^{T}\mathbf{A})^{-1}\mathbf{A}^{T}\) represent the transpose and pseudo-inverse of matrix \(\mathbf{A}\), respectively. \(\mathbf{I}_{N}\) is an identity matrix of size \(N\), and \(\mathbf{0}\) is a zero matrix. \(\delta(\cdot)\), \(\mathbb{E}[\cdot]\), and \(\|\cdot\|\) denote the Dirac delta function, expectation operation, and Euclidean norm, respectively. The set \([K]=\{1,2,\ldots,K\}\) contains all positive integers up to \(K\). Finally, \(\mathcal{N}(z;\mu,\sigma^{2})\) denotes a real-valued Gaussian distribution in \(z\) with mean \(\mu\) and variance \(\sigma^{2}\).

## II System Model and Algorithm Review

In this section, the system model of the IDD problem is formulated first. Then, the EP-based turbo receiver is reviewed to obtain a clear understanding of the proposed scheme.

### _System Model_

The considered MIMO system on the basis of bit-interleaved coded modulation (BICM) consists of \(N_{\rm t}\) transmit (Tx) antennas and \(N_{\rm r}\) receive (Rx) antennas, with \(N_{\rm r}\geq N_{\rm t}\). Fig. 1 depicts the system with a block diagram, which includes a MIMO transmitter and a MIMO turbo receiver. At the transmitter, the channel encoder first converts the binary word, \({\bf b}\in\{0,1\}^{N_{\rm b}}\) with \(N_{\rm b}\) as the number of message bits in a word, into the coded bits with code rate \(R_{\rm c}=N_{\rm b}/N_{\rm c}\). Then, interleaving is performed on the coded bits, thereby yielding the codeword \({\bf c}\) of length \(N_{\rm c}\). The codeword \({\bf c}\) is then partitioned into \(N_{\rm s}\) subvectors of length \(N_{\rm t}\tilde{Q}\) and modulated into symbol vectors with a complex quadrature amplitude modulation (QAM) constellation \(\bar{\mathcal{A}}\) of size \(|\bar{\mathcal{A}}|=\tilde{M}\), where \(\tilde{Q}\) is the number of bits per complex symbol and \(\tilde{Q}=\log_{2}\tilde{M}\). The symbol vectors are transmitted over the wireless channel, which is assumed to be unchanged in a time slot, and the received real-valued signal can be represented as \[{\bf y}={\bf H}{\bf x}+{\bf w}, \tag{1}\] where \({\bf x}\in\mathcal{A}^{K}\) with \(K=2N_{\rm t}\) is the equivalent real-valued transmitted vector in a time slot.
The real-valued constellation \(\mathcal{A}\) has a cardinality of \(|\mathcal{A}|\triangleq M=\sqrt{\tilde{M}}\) and average energy of \(E_{\rm s}\). The channel matrix \({\bf H}\in\mathbb{R}^{N\times K}\), with \(N=2N_{\rm r}\) and its columns \({\bf h}_{k},k\in[K]\) normalized to unit energy, is assumed to be known at the receiver unless otherwise stated. \({\bf w}\) is the additive white Gaussian noise vector with zero mean and element-wise noise variance \(\sigma_{w}^{2}\). The posterior probability density function (PDF) of the transmitted symbol vector \({\bf x}\) given the observations \({\bf y}\) yields \[p({\bf x}|{\bf y})=\frac{p({\bf y}|{\bf x})p({\bf x})}{p({\bf y})}\propto \underbrace{\mathcal{N}\left({\bf y};{\bf H}{\bf x},\sigma_{w}^{2}{\bf I}_{N} \right)}_{p({\bf y}|{\bf x})}\underbrace{\prod_{k=1}^{K}p_{\rm A1}(x_{k})}_{p({\bf x})}, \tag{2}\] where \(p_{\rm A1}(x_{k})\) is the _a priori_ PDF of \(x_{k}\). In the first turbo iteration (TI), the prior is initialized as \(p_{\rm A1}^{(1)}(x_{k})=\frac{1}{M}\sum_{x\in\mathcal{A}}\delta(x_{k}-x)\) assuming equiprobable transmitted symbols. The direct calculation of \(p({\bf x}|{\bf y})\) involves a high-dimensional integral, which is generally intractable. Therefore, Bayesian inference techniques (e.g., EP) are commonly used to compute an approximation \(q({\bf x})\) for the optimal solution. The IDD technique, which is referred to as the turbo receiver in this paper, can improve the accuracy of the approximation further. As illustrated in Fig. 1, the soft-input soft-output signal detector and channel decoder in the turbo structure iteratively exchange reliability information on the same set of coded bits in the form of LLRs. These LLRs correspond to extrinsic probabilities to ensure the convergence and stability of the receiver. Moreover, the improved prior information can be utilized when the turbo procedure begins, that is, \(p_{\rm A1}^{(\iota)}(x_{k})\) can be constructed on the basis of the feedback from the channel decoder instead of the equiprobable assumption, where \(\iota\) is the index of the TI. The iterative process proceeds for a maximum number of \(I\) iterations and finally outputs the estimated message bits \(\hat{\bf b}\). The extrinsic LLR for \(c_{k,i}\), which is the \(i\)-th bit mapped to symbol \(x_{k}\), can be computed on the basis of the extrinsic PDF \(p_{\rm E1}^{(\iota)}(x_{k}|{\bf y})\) estimated by the detector in each TI as \[L_{\rm E1}^{(\iota)}(c_{k,i})\triangleq\log\frac{\sum_{x_{k}\in\mathcal{A}_{k,i}^{(1)}}p_{\rm E1}^{(\iota)}(x_{k}|{\bf y})}{\sum_{x_{k}\in\mathcal{A}_{k,i}^{(0)}}p_{\rm E1}^{(\iota)}(x_{k}|{\bf y})}, \tag{3}\] where \(\mathcal{A}_{k,i}^{(1)}\) and \(\mathcal{A}_{k,i}^{(0)}\) denote the subsets of constellation \(\mathcal{A}\), in which \(c_{k,i}\) is equal to 1 and 0, respectively. The vector \({\bf L}_{\rm E1}^{(\iota)}\), which contains all extrinsic LLR values from the detector, is further de-interleaved to derive \({\bf L}_{\rm A2}^{(\iota)}\), which is delivered to the channel decoder as the _a priori_ LLRs. The decoder computes the extrinsic LLRs on the coded bits, indicated as \({\bf L}_{\rm E2}^{(\iota)}\), utilizing the _a priori_ LLRs \({\bf L}_{\rm A2}^{(\iota)}\).
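To make the demapping in (3) concrete, the following minimal NumPy sketch converts a symbol-wise extrinsic PDF into bit LLRs. The 4-PAM alphabet (one 16-QAM dimension), the Gray labeling, and the toy PDF values are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

# Real-valued 4-PAM alphabet with Q = 2 bits per real symbol.
# The Gray labeling below is an assumed example, not the paper's mapping.
A = np.array([-3.0, -1.0, 1.0, 3.0])
BITS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # bits (c_{k,1}, c_{k,2}) per symbol

def extrinsic_llrs(p_ext):
    """Demap a symbol-wise extrinsic PDF p_E1(x_k = a_m | y) into bit LLRs via (3)."""
    llrs = np.empty(BITS.shape[1])
    for i in range(BITS.shape[1]):
        num = p_ext[BITS[:, i] == 1].sum()  # sum over the subset A_{k,i}^{(1)}
        den = p_ext[BITS[:, i] == 0].sum()  # sum over the subset A_{k,i}^{(0)}
        llrs[i] = np.log(num / den)
    return llrs

# Toy (already normalized) extrinsic PDF over A for one symbol x_k.
p_ext = np.array([0.05, 0.15, 0.70, 0.10])
print(extrinsic_llrs(p_ext))
```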
For the \((\iota+1)\)-th TI, the extrinsic LLRs generated by the decoder are interleaved to derive \({\bf L}_{\rm A1}^{(\iota+1)}\), sent back to the detector, and mapped again to the updated _a priori_ PDF: \[p_{\rm A1}^{(\iota+1)}(x_{k})=\prod_{i=1}^{Q}\frac{\exp\left(c_{k,i}L_{\rm A1}^{(\iota+1)}(c_{k,i})\right)}{1+\exp\left(L_{\rm A1}^{(\iota+1)}(c_{k,i})\right)}, \tag{4}\] where \(Q=\tilde{Q}/2\) denotes the number of bits in a real-valued symbol.

Fig. 1: Block diagram of a MIMO system based on BICM [3]. The MIMO turbo receiver iteratively exchanges soft information between the MIMO detector, which combines model-based algorithms with NNs, and the channel decoder.

### _EP-based Turbo Receiver_

EP is a Bayesian inference method that approximates the desired distribution by a function within the exponential families [33]. Specifically, the EP-based MIMO detector computes a Gaussian approximation \(q({\bf x})\) for the posterior belief \(p({\bf x}|{\bf y})\) in (2), which is achieved by replacing the non-Gaussian factors (discrete priors) in (2) with unnormalized Gaussians iteratively: \[q^{(\iota,t)}(\mathbf{x})\propto\mathcal{N}(\mathbf{y};\mathbf{H}\mathbf{x},\sigma_{w}^{2}\mathbf{I}_{N})\cdot\prod_{k=1}^{K}\exp\big{(}\gamma_{k}^{(\iota,t-1)}x_{k}-\frac{1}{2}\lambda_{k}^{(\iota,t-1)}x_{k}^{2}\big{)}\] \[\propto\mathcal{N}\big{(}\mathbf{x};\mathbf{H}^{\dagger}\mathbf{y},\sigma_{w}^{2}(\mathbf{H}^{T}\mathbf{H})^{-1}\big{)}\cdot\mathcal{N}\big{(}\mathbf{x};(\boldsymbol{\lambda}^{(\iota,t-1)})^{-1}\boldsymbol{\gamma}^{(\iota,t-1)},(\boldsymbol{\lambda}^{(\iota,t-1)})^{-1}\big{)}\] \[\propto\mathcal{N}\big{(}\mathbf{x};\boldsymbol{\mu}^{(\iota,t)},\boldsymbol{\Sigma}^{(\iota,t)}\big{)}, \tag{5}\] where the superscript \((\iota,t)\) denotes the \(t\)-th EP iteration within the \(\iota\)-th TI. \(\gamma_{k}^{(\iota,t)}\in\mathbb{R}\) and \(\lambda_{k}^{(\iota,t)}\in\mathbb{R}^{+}\) denote the natural parameters of the approximating function, constituting the natural mean vector \(\boldsymbol{\gamma}^{(\iota,t)}=[\gamma_{1}^{(\iota,t)},\dots,\gamma_{K}^{(\iota,t)}]^{T}\) and precision matrix \(\boldsymbol{\lambda}^{(\iota,t)}=\mathrm{diag}([\lambda_{1}^{(\iota,t)},\dots,\lambda_{K}^{(\iota,t)}])\)[33]. In the first TI, parameters \(\gamma_{k}^{(\iota,t)}\) and \(\lambda_{k}^{(\iota,t)}\) are initialized as \(\gamma_{k}^{(1,0)}=0\) and \(\lambda_{k}^{(1,0)}=1/E_{\mathrm{s}},k\in[K]\), respectively, and then iteratively updated following the moment matching condition \(\mathbb{E}_{q(\mathbf{x})}[\mathbf{x}]=\mathbb{E}_{p(\mathbf{x}|\mathbf{y})}[\mathbf{x}]\), with the second-order moments matched likewise [7]. The mean \(\boldsymbol{\mu}^{(\iota,t)}\) and covariance \(\boldsymbol{\Sigma}^{(\iota,t)}\) of \(q^{(\iota,t)}(\mathbf{x})\) are computed using the Gaussian product lemma [35] as follows: \[\boldsymbol{\Sigma}^{(\iota,t)}=\big{(}\sigma_{w}^{-2}\mathbf{H}^{T}\mathbf{H}+\boldsymbol{\lambda}^{(\iota,t-1)}\big{)}^{-1}, \tag{6a}\] \[\boldsymbol{\mu}^{(\iota,t)}=\boldsymbol{\Sigma}^{(\iota,t)}\big{(}\sigma_{w}^{-2}\mathbf{H}^{T}\mathbf{y}+\boldsymbol{\gamma}^{(\iota,t-1)}\big{)}. \tag{6b}\]
Moreover, EP calculates the marginal distribution of \(q^{(\iota,t)}(\mathbf{x})\) by viewing the covariance \(\boldsymbol{\Sigma}^{(\iota,t)}\) as a diagonal matrix to reduce complexity, that is, \(q^{(\iota,t)}(x_{k})\propto\mathcal{N}(x_{k};\mu_{k}^{(\iota,t)},\Sigma_{k}^{(\iota,t)}),k\in[K]\), where \(\mu_{k}^{(\iota,t)}\) is the \(k\)-th element of \(\boldsymbol{\mu}^{(\iota,t)}\), and \(\Sigma_{k}^{(\iota,t)}\) is the \(k\)-th element of the main diagonal in \(\boldsymbol{\Sigma}^{(\iota,t)}\). This strategy, which uses the product of independent Gaussian functions to approximate \(p(\mathbf{x}|\mathbf{y})\)[27], ignores the off-diagonal elements in the covariance \(\boldsymbol{\Sigma}^{(\iota,t)}\) and results in information loss. Fig. 2 presents the block diagram of the EP detector at the \(\iota\)-th TI, which contains an LMMSE module and a nonlinear Posterior module and bears a turbo structure. The LMMSE module is dedicated to the computation of \(\boldsymbol{\mu}^{(\iota,t)}\) and \(\boldsymbol{\Sigma}^{(\iota,t)}\) in (6). Then, the extrinsic marginal distribution is derived by the "ext" operation after LMMSE to decorrelate the output and the input as follows: \[q_{e}^{(\iota,t)}\left(x_{k}\right)=\frac{q^{(\iota,t)}\left(x_{k}\right)}{\exp\big{(}\gamma_{k}^{(\iota,t-1)}x_{k}-\frac{1}{2}\lambda_{k}^{(\iota,t-1)}x_{k}^{2}\big{)}}\propto\mathcal{N}\big{(}x_{k};x_{e,k}^{(\iota,t)},v_{e,k}^{(\iota,t)}\big{)}, \tag{7}\] where \(x_{e,k}^{(\iota,t)}\) and \(v_{e,k}^{(\iota,t)}\) are the \(k\)-th element of the mean vector \(\mathbf{x}_{e}^{(\iota,t)}\) and the main diagonal in the covariance matrix \(\mathbf{V}_{e}^{(\iota,t)}\), respectively, and we yield \[v_{e,k}^{(\iota,t)}=\frac{\Sigma_{k}^{(\iota,t)}}{1-\Sigma_{k}^{(\iota,t)}\lambda_{k}^{(\iota,t-1)}}, \tag{8a}\] \[x_{e,k}^{(\iota,t)}=v_{e,k}^{(\iota,t)}\left(\frac{\mu_{k}^{(\iota,t)}}{\Sigma_{k}^{(\iota,t)}}-\gamma_{k}^{(\iota,t-1)}\right). \tag{8b}\] Subsequently, the mean vector \(\mathbf{x}_{e}^{(\iota,t)}\) and diagonal covariance matrix \(\mathbf{V}_{e}^{(\iota,t)}\) of the extrinsic distribution are delivered to the Posterior module for further processing. The Posterior module combines the _a priori_ PDF \(p_{\mathrm{A}1}^{(\iota)}(x_{k})\) with the extrinsic distribution to derive the estimated _a posteriori_ distribution as [15, 16, 17, 18]: \[\hat{p}^{(\iota,t)}(x_{k})\propto q_{e}^{(\iota,t)}(x_{k})p_{\mathrm{A}1}^{(\iota)}(x_{k}), \tag{9}\] where the _a priori_ PDF \(p_{\mathrm{A}1}^{(\iota)}(x_{k})\) is uniform for \(\iota=1\) and, for \(\iota\geq 2\), non-uniform and mapped from the _a priori_ LLRs \(\mathbf{L}_{\mathrm{A}1}^{(\iota)}\) of the channel decoder according to (4). The _a posteriori_ mean \(\hat{x}_{k}^{(\iota,t)}\) and variance \(v_{k}^{(\iota,t)}\) are computed as: \[\hat{x}_{k}^{(\iota,t)}=\sum_{a_{m}\in\mathcal{A}}a_{m}\hat{p}^{(\iota,t)}(x_{k}=a_{m}), \tag{10a}\] \[v_{k}^{(\iota,t)}=\sum_{a_{m}\in\mathcal{A}}\big{(}a_{m}-\hat{x}_{k}^{(\iota,t)}\big{)}^{2}\hat{p}^{(\iota,t)}(x_{k}=a_{m}). \tag{10b}\]
Let \(T\) denote the number of EP iterations; the pair \((\boldsymbol{\gamma}^{(\iota,t)},\boldsymbol{\lambda}^{(\iota,t)})\) is updated when \(t<T\) so that \[\prod_{k=1}^{K}\exp\big{(}\gamma_{k}^{(\iota,t)}x_{k}-\frac{1}{2}\lambda_{k}^{(\iota,t)}x_{k}^{2}\big{)}\propto\mathcal{N}\big{(}\mathbf{x};(\boldsymbol{\lambda}^{(\iota,t)})^{-1}\boldsymbol{\gamma}^{(\iota,t)},(\boldsymbol{\lambda}^{(\iota,t)})^{-1}\big{)}\propto\frac{\mathcal{N}\big{(}\mathbf{x};\hat{\mathbf{x}}^{(\iota,t)},\,\mathbf{V}^{(\iota,t)}\big{)}}{\mathcal{N}\big{(}\mathbf{x};\mathbf{x}_{e}^{(\iota,t)},\mathbf{V}_{e}^{(\iota,t)}\big{)}}, \tag{11}\] where \(\hat{\mathbf{x}}^{(\iota,t)}=[\hat{x}_{1}^{(\iota,t)},\dots,\hat{x}_{K}^{(\iota,t)}]^{T}\) and \(\mathbf{V}^{(\iota,t)}=\mathrm{diag}([v_{1}^{(\iota,t)},\dots,v_{K}^{(\iota,t)}])\). A solution to (11) is: \[\boldsymbol{\lambda}^{(\iota,t)}=\left(\mathbf{V}^{(\iota,t)}\right)^{-1}-\left(\mathbf{V}_{e}^{(\iota,t)}\right)^{-1}, \tag{12a}\] \[\boldsymbol{\gamma}^{(\iota,t)}=\left(\mathbf{V}^{(\iota,t)}\right)^{-1}\hat{\mathbf{x}}^{(\iota,t)}-\left(\mathbf{V}_{e}^{(\iota,t)}\right)^{-1}\mathbf{x}_{e}^{(\iota,t)}, \tag{12b}\] which is implemented by the "ext" operation after the Posterior module. Notably, the update in (12a) can result in a negative \(\lambda_{k}^{(\iota,t)}\), which is unreasonable and should be discarded because \(\lambda_{k}^{(\iota,t)}\) is an inverse variance term. Therefore, we adopt the approach from [7] to ensure numerical stability, where we retain \(\lambda_{k}^{(\iota,t)}=\lambda_{k}^{(\iota,t-1)}\) and \(\gamma_{k}^{(\iota,t)}=\gamma_{k}^{(\iota,t-1)}\) when \(\lambda_{k}^{(\iota,t)}<0\). Additionally, we apply a damping technique [7, 33] to smooth the update by using a convex combination of the current and previous parameter values. The updated pair \((\mathbf{\gamma}^{(\iota,t)},\mathbf{\lambda}^{(\iota,t)})\) is delivered to the LMMSE module for the next EP iteration. The EP algorithm finishes when the maximum number of iterations \(T\) is reached. Extrinsic LLRs \(\mathbf{L}_{\mathrm{E1}}^{(\iota)}\) are demapped from the extrinsic distribution \(q_{\mathrm{e}}^{(\iota,T)}(x_{k})\) via (3) at the final iteration and delivered to the channel decoder, which outputs the estimated bits \(\hat{\mathbf{b}}\) when \(\iota=I\) or new _a priori_ LLRs \(\mathbf{L}_{\mathrm{A1}}^{(\iota+1)}\) when \(\iota<I\) for subsequent TIs. These _a priori_ LLRs are mapped to the updated _a priori_ PDF \(p_{\mathrm{A1}}^{(\iota+1)}(x_{k})\). The detector then computes the mean and variance of \(p_{\mathrm{A1}}^{(\iota+1)}(x_{k})\) as: \[\hat{x}_{\mathrm{A1},k}^{(\iota+1)}=\sum_{a_{m}\in\mathcal{A}}a_{m}p_{\mathrm{A1}}^{(\iota+1)}(x_{k}=a_{m}), \tag{14a}\] \[v_{\mathrm{A1},k}^{(\iota+1)}=\sum_{a_{m}\in\mathcal{A}}\left(a_{m}-\hat{x}_{\mathrm{A1},k}^{(\iota+1)}\right)^{2}p_{\mathrm{A1}}^{(\iota+1)}(x_{k}=a_{m}), \tag{14b}\] and updates the initial pair \((\mathbf{\gamma}^{(\iota+1,0)},\mathbf{\lambda}^{(\iota+1,0)})\) for EP at the \((\iota+1)\)-th TI as: \[\mathbf{\lambda}^{(\iota+1,0)}\leftarrow(\mathbf{V}_{\mathrm{A1}}^{(\iota+1)})^{-1},\quad\mathbf{\gamma}^{(\iota+1,0)}\leftarrow\mathbf{\lambda}^{(\iota+1,0)}\hat{\mathbf{x}}_{\mathrm{A1}}^{(\iota+1)}, \tag{15}\] with \(\hat{\mathbf{x}}_{\mathrm{A1}}^{(\iota+1)}=[\hat{x}_{\mathrm{A1},1}^{(\iota+1)},\dots,\hat{x}_{\mathrm{A1},K}^{(\iota+1)}]^{T}\) and \(\mathbf{V}_{\mathrm{A1}}^{(\iota+1)}=\mathrm{diag}([v_{\mathrm{A1},1}^{(\iota+1)},\dots,v_{\mathrm{A1},K}^{(\iota+1)}])\).
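For concreteness, the following NumPy sketch implements one EP layer from (6)–(12) for real-valued symbols, including the negative-\(\lambda\) safeguard and the damping from [7]. The function signature, the uniform-prior default, and the variable names are our own assumptions, not the authors' implementation.

```python
import numpy as np

def ep_iteration(y, H, sigma2, gamma, lam, A, prior=None, beta=0.2):
    """One EP layer: LMMSE (6), extrinsic moments (8), posterior moments (9)-(10),
    and damped natural-parameter update (12). `prior` holds p_A1(x_k=a_m), shape (K, M)."""
    K, M = H.shape[1], len(A)
    if prior is None:                       # first turbo iteration: equiprobable symbols
        prior = np.full((K, M), 1.0 / M)

    # (6a)-(6b): LMMSE mean/covariance given the current Gaussian approximations.
    Sigma = np.linalg.inv(H.T @ H / sigma2 + np.diag(lam))
    mu = Sigma @ (H.T @ y / sigma2 + gamma)
    Sig_d = np.diag(Sigma)

    # (8a)-(8b): extrinsic ("cavity") moments after removing each local Gaussian factor.
    v_e = Sig_d / (1.0 - Sig_d * lam)
    x_e = v_e * (mu / Sig_d - gamma)

    # (9)-(10): combine the extrinsic Gaussian with the discrete prior per symbol.
    # (Per-row Gaussian normalization constants cancel after row normalization.)
    lik = np.exp(-(A[None, :] - x_e[:, None]) ** 2 / (2.0 * v_e[:, None]))
    p_hat = lik * prior
    p_hat /= p_hat.sum(axis=1, keepdims=True)
    x_hat = p_hat @ A
    v_hat = (p_hat * (A[None, :] - x_hat[:, None]) ** 2).sum(axis=1)

    # (12a)-(12b): new natural parameters; keep the old values where lam_new < 0 [7].
    lam_new = 1.0 / v_hat - 1.0 / v_e
    gam_new = x_hat / v_hat - x_e / v_e
    bad = lam_new < 0
    lam_new[bad], gam_new[bad] = lam[bad], gamma[bad]

    # Damped update with factor beta, as suggested in [7].
    lam = beta * lam_new + (1.0 - beta) * lam
    gamma = beta * gam_new + (1.0 - beta) * gamma
    return gamma, lam, x_e, v_e, p_hat
```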
## III EXT-GEPNet for Turbo Receiver

This section presents the details of the proposed EXT-GEPNet. We first introduce the GEPNet detector [27] to improve the posterior distribution approximation of EP. Then, we present the designed turbo structure for GEPNet and the customized training scheme used to derive EXT-GEPNet. Finally, we introduce the edge pruning method to simplify the GNN operations.

### _GNN-Enhanced EP Detector_

EP factorizes the target APPs with Gaussian distributions to avoid intractable calculations. However, the Gaussian approximation of the posterior belief becomes inaccurate in practical scenarios. For example, in a MIMO system with the number of Tx antennas close to that of the Rx antennas, where strong interference is extremely detrimental, the residual noise that can be viewed as Gaussian sharply decreases. This situation leads to a significant performance gap between the EP detector and the ML detector. To address this limitation of EP, the GEPNet detector [27] was proposed, as shown in Fig. 3 with black solid lines. The GEPNet structure is obtained by first unfolding the EP iterations into layers and then adding a GNN module to each layer while retaining the LMMSE and Posterior modules.3 The GNN module is inserted between the two original EP modules and is used to provide an improved estimated distribution of the transmitted symbols over the constellation \(\mathcal{A}\) based on the prior knowledge \(q_{\mathrm{e}}^{(t)}(x_{k})\) from (7).4 Subsequently, we elaborate on the GNN module [26]. Footnote 3: We denote an EP iteration as a layer in the rest of this paper for consistency. Footnote 4: The GEPNet employs the same model parameters for different turbo iterations in the proposed turbo receiver. Therefore, in this section, we omit the superscript \(\iota\) of the turbo iteration index for simplicity. The GNN module (Fig. 4) provides a framework for capturing the structured dependency of the transmitted variables \(\mathbf{x}\) by combining DL into the MP on the pair-wise MRF model [26], where each variable is denoted as a node, and each pair of two nodes is linked by an edge. The nodes and edges in the MRF are represented by circles and squares in Fig. 4, respectively. The prior knowledge \(q_{\mathrm{e}}^{(t)}(x_{k})\), which is characterized by the mean \(x_{\mathrm{e},k}^{(t)}\) and variance \(v_{\mathrm{e},k}^{(t)}\), is incorporated into the node attribute as \(\mathbf{a}_{k}^{(t)}=[x_{\mathrm{e},k}^{(t)},v_{\mathrm{e},k}^{(t)}]\). Furthermore, the nodes and edges of the MRF are both characterized by their feature vectors, which are denoted as \(\mathbf{u}_{k}^{(l)}\) and \(\mathbf{f}_{jk}\), respectively, where \(l\) is the iteration index of the MP and \(j\in[K]\backslash\{k\}\). The node feature vector \(\mathbf{u}_{k}^{(l)}\) of size \(N_{\mathrm{u}}\) encodes the probabilistic information about the variable \(x_{k}\) that corresponds to the self potential of the MRF, given by [26, Eq.
(5)] as \[\phi\left(x_{k}\right)=\exp\left(\frac{1}{\sigma_{w}^{2}}\mathbf{y}^{T}\mathbf{h}_{k}x_{k}-\frac{1}{2\sigma_{w}^{2}}\mathbf{h}_{k}^{T}\mathbf{h}_{k}x_{k}^{2}\right)p_{\mathrm{A1}}\left(x_{k}\right).\] In particular, \(\mathbf{u}_{k}^{(l)}\) is initialized as \(\mathbf{u}_{k}^{(0)}=\mathbf{W}_{1}\cdot\left[\mathbf{y}^{T}\mathbf{h}_{k},\mathbf{h}_{k}^{T}\mathbf{h}_{k},\sigma_{w}^{2}\right]^{T}+\mathbf{b}_{1}\) and iteratively updated in the MP process, where \(\mathbf{W}_{1}\in\mathbb{R}^{N_{\mathrm{u}}\times 3}\) is a learnable weight matrix, and \(\mathbf{b}_{1}\in\mathbb{R}^{N_{\mathrm{u}}}\) is a learnable bias vector. The edge feature vector \(\mathbf{f}_{jk}=[\mathbf{h}_{k}^{T}\mathbf{h}_{j},\sigma_{w}^{2}]\) contains the pair potential between nodes \(x_{j}\) and \(x_{k}\), given by [26, Eq. (6)] as \[\psi\left(x_{k},x_{j}\right)=\exp\left(-\frac{1}{\sigma_{w}^{2}}\mathbf{h}_{k}^{T}\mathbf{h}_{j}x_{k}x_{j}\right).\] Three steps are involved in the MP process of the GNN: propagation, aggregation, and readout. The first two steps are implemented at each MP iteration \(l\), whereas the readout is conducted at the final iteration \(L\) to make the inference.

#### III-A1 Propagation

The edge between any pair of the variable nodes \(x_{k}\) and \(x_{j}\) first concatenates its own feature vector \(\mathbf{f}_{jk}\) with the incoming feature vectors \(\mathbf{u}_{k}^{(l-1)}\) and \(\mathbf{u}_{j}^{(l-1)}\). Then, the concatenated feature is delivered to a multi-layer perceptron (MLP) for message encoding, and the corresponding output message \(\mathbf{m}_{jk}^{(l)}\) is \[\mathbf{m}_{jk}^{(l)}=\mathcal{M}(\mathbf{u}_{k}^{(l-1)},\mathbf{u}_{j}^{(l-1)},\mathbf{f}_{jk}), \tag{16}\] where \(\mathcal{M}\) denotes the operation of the MLP, which has two hidden layers of sizes \(N_{\mathrm{h1}}\) and \(N_{\mathrm{h2}}\) with the rectifier linear unit (ReLU) as the activation function and an output layer of size \(N_{\mathrm{u}}\). Finally, the message \(\mathbf{m}_{jk}^{(l)}\) is sent to the variable node \(x_{k}\), as shown in Fig. 4.

#### III-A2 Aggregation

The aggregation at node \(x_{k}\) is carried out by summing the incoming messages \(\mathbf{m}_{jk}^{(l)}\) from the connected nodes \(x_{j}\) and then concatenating the sum with the node attribute \(\mathbf{a}_{k}^{(t)}\), given by \(\mathbf{m}_{k}^{(l)}=\big{[}\sum_{j\in[K]\backslash\{k\}}\mathbf{m}_{jk}^{(l)},\mathbf{a}_{k}^{(t)}\big{]}\). Subsequently, a gated recurrent unit (GRU) \(\mathcal{U}\)[36] is used to update the node feature vector \(\mathbf{u}_{k}^{(l)}\) by incorporating the concatenated message \(\mathbf{m}_{k}^{(l)}\): \[\mathbf{g}_{k}^{(l)}=\mathcal{U}(\mathbf{g}_{k}^{(l-1)},\mathbf{m}_{k}^{(l)}), \tag{17a}\] \[\mathbf{u}_{k}^{(l)}=\mathbf{W}_{2}\cdot\mathbf{g}_{k}^{(l)}+\mathbf{b}_{2}, \tag{17b}\] where \(\mathbf{g}_{k}^{(l)}\in\mathbb{R}^{N_{\mathrm{h1}}}\) and \(\mathbf{g}_{k}^{(l-1)}\in\mathbb{R}^{N_{\mathrm{h1}}}\) denote the current and previous hidden states, respectively. The single-layer NN with the learnable parameters \(\mathbf{W}_{2}\in\mathbb{R}^{N_{\mathrm{u}}\times N_{\mathrm{h1}}}\) and \(\mathbf{b}_{2}\in\mathbb{R}^{N_{\mathrm{u}}}\) in (17b) is used to derive the output feature, which is delivered to the neighboring propagation module for the next MP iteration.
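The following skeletal NumPy sketch runs one MP round, i.e., propagation (16) on every edge followed by aggregation (17) at every node. The MLP and GRU weights are random stand-ins for the trained, shared parameters, and the GRU is a textbook cell, so the sketch illustrates the data flow rather than the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_u, N_h1, N_h2 = 8, 8, 64, 32      # nodes and the paper's GNN sizes

def mlp(z, W1, W2, W3):
    """Stand-in for M in (16): two ReLU hidden layers, linear output of size N_u."""
    h = np.maximum(z @ W1, 0.0)
    h = np.maximum(h @ W2, 0.0)
    return h @ W3

def gru_cell(g, m, Wz, Wr, Wh):
    """Textbook GRU cell as a stand-in for U in (17a)."""
    zm = np.concatenate([g, m])
    z = 1.0 / (1.0 + np.exp(-(zm @ Wz)))          # update gate
    r = 1.0 / (1.0 + np.exp(-(zm @ Wr)))          # reset gate
    h = np.tanh(np.concatenate([r * g, m]) @ Wh)  # candidate state
    return (1.0 - z) * g + z * h

# Random stand-in weights (trained and shared across edges/nodes in GEPNet).
d_in = 2 * N_u + 2                      # [u_k, u_j, f_jk] with f_jk = [h_k^T h_j, sigma_w^2]
W1 = rng.normal(size=(d_in, N_h1)) * 0.1
W2 = rng.normal(size=(N_h1, N_h2)) * 0.1
W3 = rng.normal(size=(N_h2, N_u)) * 0.1
d_gru = N_h1 + N_u + 2                  # hidden state plus aggregated message
Wz, Wr, Wh = (rng.normal(size=(d_gru, N_h1)) * 0.1 for _ in range(3))
Wout, bout = rng.normal(size=(N_h1, N_u)) * 0.1, np.zeros(N_u)

u = rng.normal(size=(K, N_u))           # node feature vectors u_k
a = rng.normal(size=(K, 2))             # node attributes a_k = [x_ek, v_ek]
f = rng.normal(size=(K, K, 2))          # edge feature vectors f_jk
g = np.zeros((K, N_h1))                 # GRU hidden states

# One MP round: propagation (16) on every edge, then aggregation (17) per node.
msgs = np.zeros((K, K, N_u))
for j in range(K):
    for k in range(K):
        if j != k:
            msgs[j, k] = mlp(np.concatenate([u[k], u[j], f[j, k]]), W1, W2, W3)
for k in range(K):
    m_k = np.concatenate([msgs[:, k].sum(axis=0), a[k]])  # sum of messages + attribute
    g[k] = gru_cell(g[k], m_k, Wz, Wr, Wh)                # (17a)
    u[k] = g[k] @ Wout + bout                             # (17b)
```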
#### III-A3 Readout

The node feature vectors \(\mathbf{u}_{k}^{(L)}\) are sent to the readout module, which contains an MLP followed by the SoftMax function, after \(L\) rounds of MP and yields \[\mathbf{z}_{k}=\mathcal{R}(\mathbf{u}_{k}^{(L)}), \tag{18a}\] \[q_{G}^{(t)}(x_{k}=a_{m})=\frac{\exp(z_{k,a_{m}})}{\sum_{a_{m}\in\mathcal{A}}\exp(z_{k,a_{m}})},a_{m}\in\mathcal{A}, \tag{18b}\] where \(\mathcal{R}\) is the MLP with two hidden layers of sizes \(N_{\mathrm{h1}}\) and \(N_{\mathrm{h2}}\) activated by ReLU and an output layer of size \(M\), which is the cardinality of the real-valued constellation \(\mathcal{A}\). The SoftMax process in (18b) maps the unnormalized output \(\mathbf{z}_{k}\) of \(\mathcal{R}\) into the estimated marginal distribution \(q_{G}^{(t)}(x_{k})\) of the discrete variable \(x_{k}\) for the \(t\)-th layer of GEPNet. Moreover, the final GRU hidden state and node feature vector in the current GEPNet layer are assigned to the next layer as the starting point, that is, \(\mathbf{g}_{k}^{(0)}\leftarrow\mathbf{g}_{k}^{(L)}\) and \(\mathbf{u}_{k}^{(0)}\leftarrow\mathbf{u}_{k}^{(L)},k\in[K]\). Notably, the weight parameters of \(\mathcal{M}\), \(\mathcal{U}\), and \(\mathcal{R}\) are shared across different nodes or edges. These parameters can be trained with supervised learning to improve EP's estimations, which are characterized only by the mean and variance of a Gaussian function. This allows the GEPNet detector to calculate the _a posteriori_ distribution in (9) using the improved estimation \(q_{G}^{(t)}(x_{k})\) from the GNN instead of \(q_{e}^{(t)}(x_{k})\). Hard output \(\hat{\mathbf{x}}^{(T)}\) can then be derived based on the _a posteriori_ estimation using (10) in the last layer of the network [27]. However, designing the GEPNet-based turbo receiver is a non-trivial task due to the requirement for extrinsic information, rather than _a posteriori_ information.

### _Proposed EXT-GEPNet_

#### III-B1 Motivation

Fig. 3 presents the proposed turbo structure for GEPNet. Similar to the EP-based turbo receiver described in Sec. II-B, the GEPNet detector in the designed turbo structure integrates priors from the channel decoder. In Fig. 3, the blue dashed lines represent the priors from the channel decoder. These priors are used in both the computation of the initial pair, as shown in (15), and in the improved _a posteriori_ estimation at the Posterior module as: \[\hat{p}_{G}^{(t)}(x_{k})\propto q_{G}^{(t)}(x_{k})p_{\mathrm{A1}}(x_{k}), \tag{19}\] which combines the outputs of the GNN with the _a priori_ PDF. The estimated _a posteriori_ LLRs \(\hat{\mathbf{L}}_{\mathrm{APP}}\) can be demapped from the estimated APPs \(\hat{p}_{G}^{(T)}(x_{k})\) at the last layer of GEPNet. As shown in Fig. 3, the stability of the IDD process depends on GEPNet's ability to provide the decoder with extrinsic LLRs \(\mathbf{L}_{\mathrm{E1}}\) that follow the turbo principle [30]. This means that the extrinsic LLRs should only contain new information and should not count the same information twice. Recall that the MAP detector produces extrinsic LLRs by subtracting the _a priori_ LLRs \(L_{\mathrm{A1}}(c_{j})\) from the _a posteriori_ LLRs: \[L_{\mathrm{E1}}(c_{j})=\log\frac{p\left(c_{j}=1|\mathbf{y}\right)}{p\left(c_{j}=0|\mathbf{y}\right)}-L_{\mathrm{A1}}(c_{j}). \tag{20}\] However, equipping GEPNet with this strategy leads to poor performance, as shown in the simulation results.
This phenomenon can be attributed to the fact that GEPNet's approximation to the MAP detector may not be accurate, and directly subtracting the _a priori_ LLRs may not completely eliminate the impact of the priors [29]. This results in unreliable LLRs that deviate from the desired extrinsic LLRs [37]. To address this issue, we customize a training scheme for GEPNet to enable the network to output LLRs that do not couple with the priors. We also utilize a decoder LLR preprocessing strategy to further stabilize the proposed receiver. The details of the resultant turbo receiver scheme are revealed in the following subsection.

Fig. 3: Visual representation of the proposed turbo receiver structure for GEPNet at the \(\iota\)-th turbo iteration. The black solid lines depict the unfolding structure of GEPNet, while the blue dashed lines depict the interaction with the channel decoder (Dec).

#### III-B2 Three-step Training of EXT-GEPNet

Fig. 5 presents the training procedure for the EXT-GEPNet, which is divided into three steps, elaborated as follows.

**Step 1: Train an APP-based GEPNet.** We first train a GEPNet detector to generate _a posteriori_ LLRs, indicated by APP-GEPNet in Fig. 5(a). In this step, a randomly generated tuple \(\{\mathbf{x}^{[d]},\mathbf{y}^{[d]},\mathbf{H}^{[d]},\sigma_{w}^{[d]}\}\) and a bitwise _a priori_ LLR vector \(\mathbf{L}_{\mathrm{A1}}^{[d]}\) of size \(J=KQ\) form a training sample, where \(d\) is the sample index. The transmitted symbol vector \(\mathbf{x}^{[d]}\) is the label. The received signal \(\mathbf{y}^{[d]}\), the channel state information (CSI) \(\{\mathbf{H}^{[d]},\sigma_{w}^{[d]}\}\), and the LLR vector \(\mathbf{L}_{\mathrm{A1}}^{[d]}\) are the input features. The elements of \(\mathbf{L}_{\mathrm{A1}}^{[d]}\) are generated according to the bits \(\mathbf{c}^{[d]}\) demapped from \(\mathbf{x}^{[d]}\) and are assumed to be Gaussian distributed, \(L_{\mathrm{A1}}(c_{j}^{[d]})\sim\mathcal{N}(L;(2c_{j}^{[d]}-1)\mu_{A},2\mu_{A})\), where \(c_{j}^{[d]}\) is the corresponding \(j\)-th bit. The mean of the Gaussian distribution is \(\mu_{A}=J_{A}^{-1}(I_{A})\), where \(J_{A}(\mu)\triangleq 1-\mathbb{E}_{\mathcal{N}(L;\mu,2\mu)}[\log_{2}(1+\mathrm{e}^{-L})]\) is a monotonically increasing function. The parameter \(I_{A}\in[0,1]\) denotes the average mutual information between \(\mathbf{L}_{\mathrm{A1}}^{[d]}\) and \(\mathbf{c}^{[d]}\) and characterizes the quality of the _a priori_ LLRs [38]. This idea is inspired by the simulation of the extrinsic information transfer chart for IDD [38] and allows for an open-loop training without relying on the channel coding scheme. To supply the network with different _a priori_ LLR distributions in the training stage for the sake of generalization, \(I_{A}\) can be uniformly selected from [0,1]. However, to avoid the tedious numerical calculations from \(I_{A}\) to \(\mu_{A}\), we set a look-up table (LUT) between \(I_{A}\) and \(\mu_{A}\) in advance for a predefined set \(I_{A}\in\{0,0.33,0.67,0.78,0.89,0.94,0.99,1\}\)[18]. This set already reflects the increasing trend of the confidence level, i.e., the absolute value of LLRs from the channel decoder, as the number of TIs increases. Thus, in our implementation, \(I_{A}\) is randomly selected from the predefined set for each vector \(\mathbf{L}_{\mathrm{A1}}^{[d]}\) and converted into the corresponding \(\mu_{A}\) based on the LUT.
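The prior-generation machinery just described can be sketched in a few lines of NumPy. The LUT entries are computed here by Monte-Carlo evaluation of \(J_{A}(\mu)\) and bisection (adequate for a sketch, though noisy); the trivial endpoints \(I_{A}\in\{0,1\}\) are omitted because they correspond to all-zero and infinitely confident LLRs.

```python
import numpy as np

rng = np.random.default_rng(1)

def J_A(mu, n=100_000):
    """Mutual information J_A(mu) = 1 - E[log2(1 + exp(-L))] with L ~ N(mu, 2*mu)."""
    if mu == 0.0:
        return 0.0
    L = rng.normal(mu, np.sqrt(2.0 * mu), size=n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

def mu_from_IA(I_A, lo=0.0, hi=100.0, iters=30):
    """Invert J_A by bisection (J_A is monotonically increasing in mu)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J_A(mid) < I_A else (lo, mid)
    return 0.5 * (lo + hi)

# Build the LUT over the nontrivial I_A values of the predefined set [18], then
# draw a priori LLRs L_A1(c_j) ~ N((2*c_j - 1)*mu_A, 2*mu_A) for a random bit word.
IA_SET = [0.33, 0.67, 0.78, 0.89, 0.94, 0.99]
LUT = {ia: mu_from_IA(ia) for ia in IA_SET}

c = rng.integers(0, 2, size=64)       # coded bits demapped from x^{[d]}
mu_A = LUT[rng.choice(IA_SET)]        # randomly selected prior-quality level
L_A1 = rng.normal((2 * c - 1) * mu_A, np.sqrt(2.0 * mu_A))
```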
Then, the _a priori_ LLR vector is derived according to the given bits in the training sample, the mean value \(\mu_{A}\), and the Gaussian distribution. Moreover, we train the network with the cross-entropy (CE) loss function according to [27]: \[\mathcal{L}_{1}=-\frac{1}{D}\sum_{d=1}^{D}\sum_{k=1}^{K}\sum_{a\in\mathcal{A}}\mathbb{I}_{x_{k}^{[d]}=a}\log\big{(}\hat{p}_{G}^{(T)}(x_{k}^{[d]}=a)\big{)}, \tag{21}\] where \(D\) is the number of samples in a training batch, \(x_{k}^{[d]}\) is the \(k\)-th element of \(\mathbf{x}^{[d]}\), and \(\mathbb{I}_{x_{k}^{[d]}=a}\) is the indicator function that takes the value one if \(x_{k}^{[d]}=a\) and zero otherwise, hence representing the training label. By backpropagating the error over this loss, the estimated APPs \(\hat{p}_{G}^{(T)}(x_{k})\) from the APP-GEPNet can gradually approach the true APPs of the transmitted symbols. Finally, \(\hat{p}_{G}^{(T)}(x_{k})\) can be demapped as the estimated _a posteriori_ LLRs \(\hat{\mathbf{L}}_{\mathrm{APP}}\).

**Step 2: Generate extrinsic training LLRs from APP-GEPNet.** The extrinsic LLR in (20) can be rewritten as [30]: \[L_{\mathrm{E1}}(c_{j})=\log\frac{\sum_{\forall\mathbf{c}:c_{j}=1}p(\mathbf{y}|\mathbf{c})p_{\mathrm{A1}}(\mathbf{c})}{\sum_{\forall\mathbf{c}:c_{j}=0}p(\mathbf{y}|\mathbf{c})p_{\mathrm{A1}}(\mathbf{c})}-L_{\mathrm{A1}}(c_{j})=\log\frac{\sum_{\forall\mathbf{c}:c_{j}=1}p(\mathbf{y}|\mathbf{c})\prod_{\forall i\neq j}p_{\mathrm{A1}}(c_{i})}{\sum_{\forall\mathbf{c}:c_{j}=0}p(\mathbf{y}|\mathbf{c})\prod_{\forall i\neq j}p_{\mathrm{A1}}(c_{i})}. \tag{22}\] This equation demonstrates that the extrinsic LLR \(L_{\mathrm{E1}}(c_{j})\) is a function of the channel information and the priors \(L_{\mathrm{A1}}(c_{i}),\forall i\neq j\) and should be independent of \(L_{\mathrm{A1}}(c_{j})\)[30]. Special inputs are designed for the APP-GEPNet obtained in Step 1 so that the network can generate extrinsic LLRs that satisfy this criterion. First, a total of \(J\) modified _a priori_ LLR vectors \(\{\tilde{\mathbf{L}}_{\mathrm{A1}}^{(j)}\}_{j=1}^{J}\) are derived by setting the \(j\)-th (\(j\in[J]\)) element of the original _a priori_ LLR vector \(\mathbf{L}_{\mathrm{A1}}\) to zero, where the sample index \(d\) is omitted for ease of notation. Thus, the \(i\)-th element of the \(j\)-th modified vector \(\tilde{\mathbf{L}}_{\mathrm{A1}}^{(j)}\) is defined as \[\tilde{L}_{\mathrm{A1}}^{(j)}(c_{i})=\left\{\begin{array}{ll}L_{\mathrm{A1}}(c_{i})&\text{if }i\neq j\\ 0&\text{if }i=j\end{array}\right.,i\in[J]. \tag{23}\] Next, the modified _a priori_ LLR vectors \(\{\tilde{\mathbf{L}}_{\mathrm{A1}}^{(j)}\}_{j=1}^{J}\), along with the same features \(\{\mathbf{y},\mathbf{H},\sigma_{w}\}\) corresponding to \(\mathbf{L}_{\mathrm{A1}}\) from Step 1, are used as the inputs for the APP-GEPNet to derive the extrinsic LLRs via \(J\) parallel inferences, as shown in Fig. 5(b). Specifically, the inputs are \(\tilde{\mathbf{L}}_{\mathrm{A1}}^{(j)}\) and \(\{\mathbf{y},\mathbf{H},\sigma_{w}\}\) when we target the \(j\)-th modified vector. The LLR \(L_{\mathrm{E1}}^{(j)}(c_{j})\) from the corresponding output vector is hence independent of the initial prior \(L_{\mathrm{A1}}(c_{j})\) because \(\tilde{L}_{\mathrm{A1}}^{(j)}(c_{j})\) is assigned to zero. Therefore, \(L_{\mathrm{E1}}^{(j)}(c_{j})\) is free of coupling with the corresponding bit prior and satisfies the criterion for extrinsic LLR given in [30]. Finally, \(L_{\mathrm{E1}}^{(j)}(c_{j})\) is collected individually from each output vector according to Fig.
5(b) to form \(\mathbf{L}_{\mathrm{E1}}=\{L_{\mathrm{E1}}^{(j)}(c_{j})\}_{j=1}^{J}\). These LLRs are then used as the training labels for EXT-GEPNet in the next step.

**Step 3: Train the final EXT-GEPNet.** In this step, we train the final EXT-GEPNet to learn the mapping from the input features to the extrinsic LLRs derived in Step 2. Notably, the EXT-GEPNet shares the same structure with the APP-GEPNet trained in Step 1, given by Fig. 3, whereas the difference lies in two aspects: First, for EXT-GEPNet, we use the outputs of the GNN \(q_{G}^{(T)}(x_{k})\) instead of the estimated APPs \(\hat{p}_{G}^{(T)}(x_{k})\) to derive the output LLRs \(\tilde{\mathbf{L}}_{\mathrm{E1}}\). This avoids coupling with the priors. Second, the objective of the training and the resultant model weights are different. Specifically, the loss function for training the EXT-GEPNet is \[\mathcal{L}_{2}=\frac{1}{D}\sum_{d=1}^{D}\sum_{j=1}^{J}l_{\mathrm{CE}}(c_{e,j}^{[d]},\tilde{c}_{e,j}^{[d]}), \tag{24}\] where \(c_{e,j}^{[d]}\) represents a soft bit mapped from the extrinsic LLR sample \(L_{\mathrm{E1}}^{[d]}(c_{j})\) of Step 2, and \(\tilde{c}_{e,j}^{[d]}\) denotes the counterpart equivalent of the output LLR \(\tilde{L}_{\mathrm{E1}}^{[d]}(c_{j})\) of the target EXT-GEPNet: \[c_{e,j}^{[d]}=\frac{1}{1+\exp(-L_{\mathrm{E1}}^{[d]}(c_{j}))},\;\;\tilde{c}_{e,j}^{[d]}=\frac{1}{1+\exp(-\tilde{L}_{\mathrm{E1}}^{[d]}(c_{j}))}.\] Furthermore, \(l_{\mathrm{CE}}\) in (24) is given by \[l_{\mathrm{CE}}(c,\tilde{c})=-\big{(}c\log(\tilde{c})+(1-c)\log(1-\tilde{c})\big{)}. \tag{25}\] Therefore, the output LLRs \(\tilde{\mathbf{L}}_{\mathrm{E1}}\) of the EXT-GEPNet can approximate the desired output \(\mathbf{L}_{\mathrm{E1}}\) from Step 2 by error backpropagation over the loss \(\mathcal{L}_{2}\), as shown in Fig. 5(c). The three-step training scheme of the EXT-GEPNet is summarized in Algorithm 1.5 Footnote 5: Note that the randomly generated data pairs \(\{\mathbf{y},\mathbf{H},\sigma_{w},\mathbf{L}_{\mathrm{A1}}\}\) in Step 1 are used throughout the three-step training scheme. No additional data pairs are generated specifically for training the EXT-GEPNet.
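A sketch of the Step-2 label collection in (23) is given below. The handle `app_gepnet` is a hypothetical callable wrapping the Step-1 network; its assumed signature is our own convention, not an API from the paper.

```python
import numpy as np

def collect_extrinsic_labels(app_gepnet, y, H, sigma_w, L_A1):
    """Form L_E1 = {L_E1^{(j)}(c_j)} via J inferences with masked priors, per (23).
    `app_gepnet(y, H, sigma_w, llrs)` is assumed to return a length-J LLR vector."""
    J = len(L_A1)
    L_E1 = np.empty(J)
    for j in range(J):
        L_tilde = L_A1.copy()
        L_tilde[j] = 0.0                          # zero the j-th prior, per (23)
        out = app_gepnet(y, H, sigma_w, L_tilde)  # one inference per modified vector
        L_E1[j] = out[j]                          # keep only the decoupled j-th LLR
    return L_E1
```

In practice the \(J\) inferences are independent and can run in parallel, as Fig. 5(b) indicates.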
**Remark 1**: _Once the EXT-GEPNet is trained, it can be deployed in the turbo structure of Fig. 3 and used for different TIs with the same model parameters, constructing the EXT-GEPNet-based turbo receiver. As the proposed method is an approximation to the MAP receiver, overestimation of the reliability can happen during the IDD procedure. This overestimation can result in instability in the detection process, as the mean value of the closed-loop LLRs from the channel decoder would increase over iterations. We observed experimentally that scaling the decoder LLRs into the range that matches the range of the _a priori_ LLRs \(\mathbf{L}_{\mathrm{A1}}\) used in the training stage effectively overcomes the instability issues. Therefore, we utilize an adaptive LLR scaling method [29] involving three steps: First, we examine the training LLRs \(\mathbf{L}_{\mathrm{A1}}\) to determine the range \([-r,r]\) that \(\mathbf{L}_{\mathrm{A1}}\) fall into with a probability of \(p_{r}\approx 1\). Second, we search for the maximum absolute value \(r_{\epsilon}\) of the decoder LLRs in the current codeword at the end of each TI. Finally, we scale the decoder LLRs by \(r/r_{\epsilon}\) if \(r_{\epsilon}>r\); otherwise, we keep the LLRs unchanged. As a result, the decoder LLRs are controlled in an appropriate range. In this work, we set \(p_{r}\) as 0.97 and find that the decoder LLR preprocessing method further stabilizes the IDD procedure._

Fig. 5: Overview of the three-step training process for the EXT-GEPNet.

### _Edge Pruning to Reduce Complexity_

The most computationally demanding operation of the GNN lies in the message propagation step through all the edges of the graph shown in Fig. 4. This step involves the execution of the edge MLPs, i.e., the function \(\mathcal{M}\) in (16), for \(K(K-1)\) times because the MRF defined in Section III-A is FC, i.e., with each pair of nodes connected by an edge. However, considerable redundant connections are observed in this FC graph [32]. We propose an edge pruning scheme based on the covariance matrix \(\mathbf{\Sigma}\) calculated by the LMMSE module of EP in (6a) to simplify the MP of the GNN while maintaining competitive performance.6 \(\mathbf{\Sigma}\) is used as an indicator of the correlation weights between the neighboring nodes in the FC graph because the element \(\Sigma_{ij}\) (\(i,j\in[K]\)) reflects the covariance of variable nodes \(x_{i}\) and \(x_{j}\). \(\Sigma_{ij}\) can be normalized to derive the correlation coefficient as \(\rho_{ij}=\frac{\Sigma_{ij}}{\sqrt{\Sigma_{ii}\Sigma_{jj}}}\in[-1,1]\). A large correlation coefficient reflects the high structural dependency between the two connected variables. Hence, more attention should be paid to the edge between them. On the other hand, a small \(\rho_{ij}\) suggests the approximate independence of the two variables, where less information is required to exchange between them to recover the transmitted symbols. Hence, the proposed scheme prunes those edges with correlation coefficients \(\rho_{ij}\) that meet the following criterion: Footnote 6: The edge pruning is implemented in each layer of the network, and thus the index \((\iota,t)\) for \(\mathbf{\Sigma}\) is omitted. \[\rho_{ij}^{2}<\alpha\cdot\frac{1}{K-1}\sum_{k=1,k\neq j}^{K}{\rho_{kj}}^{2}, \tag{26}\] where \(\alpha\) is a positive factor that controls the pruning threshold, and the edge pruning version of the GEPNet-based method is indicated by this factor hereafter. This strategy means that for a specific node \(x_{j}\), only the incoming edges whose squared correlation coefficients exceed \(\alpha\) times the average over all connected edges are retained, and a large \(\alpha\) intuitively results in a significant proportion of removed edges. Therefore, edges with small correlation weights, i.e., low contributions to the inference of the target probability, are pruned. This process reduces the computational cost of the GNN and saves computational resources for tuning the vital parts of the network. Notably, edges with low correlation weights exert minimal impact on the overall performance. Thus, the sparsely connected network after pruning can control the performance loss within the acceptable range or be even more generalizable, which is consistent with the findings in [31].
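The pruning rule (26) reduces to a few array operations, as the NumPy sketch below shows; the random positive-definite matrix stands in for the LMMSE covariance \(\mathbf{\Sigma}\) from (6a).

```python
import numpy as np

def prune_edges(Sigma, alpha=1.0):
    """Boolean keep-mask over directed edges (i -> j), i != j, per criterion (26)."""
    K = Sigma.shape[0]
    d = np.sqrt(np.diag(Sigma))
    rho2 = (Sigma / np.outer(d, d)) ** 2          # squared correlation coefficients
    np.fill_diagonal(rho2, 0.0)
    thresh = alpha * rho2.sum(axis=0) / (K - 1)   # per-node average over incoming edges
    keep = rho2 >= thresh[None, :]                # retain edges above the threshold
    np.fill_diagonal(keep, False)
    return keep

# Example with a random positive-definite stand-in for the LMMSE covariance.
rng = np.random.default_rng(2)
B = rng.normal(size=(8, 8))
Sigma = B @ B.T + 8 * np.eye(8)
keep = prune_edges(Sigma, alpha=1.0)
print("remaining edges:", keep.sum(), "of", 8 * 7)
```

Note that \(\alpha=0\) keeps every edge, matching the FC baseline designation used in the results.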
**Remark 2**: _Note that while the proposed edge pruning strategy shows potential in improving performance by mitigating overfitting, its underlying mechanism differs from conventional techniques aimed at alleviating overfitting, such as early stopping based on a separate validation dataset [39]. Specifically, the early stopping technique prevents overfitting by halting training when the validation loss begins to increase. In contrast, the proposed edge pruning focuses on reducing model complexity, thus enhancing generalization through the elimination of unnecessary connections. This approach makes the network more efficient during the inference stage, which cannot be achieved through early stopping alone._

## IV Simulation Results

The numerical results of the proposed schemes are presented in this section. First, the parameter settings are introduced. Second, the performance of the proposed schemes is evaluated under uncoded and coded MIMO systems. Finally, the computational complexity is analyzed.

### _Parameter Settings_

The simulated signal-to-noise ratio (SNR) of the system is defined as \(\text{SNR}=\frac{\mathbb{E}[\|\mathbf{H}\mathbf{x}\|^{2}]}{\mathbb{E}[\|\mathbf{w}\|^{2}]}\). The i.i.d. Rayleigh and spatially correlated channels are used in the simulation. The Rayleigh MIMO channel \(\mathbf{H}\) has elements drawn from the Gaussian distribution \(\mathcal{N}(h_{ij};0,1/N)\), where \(h_{ij}\) is the \((i,j)\)-th element of \(\mathbf{H}\). The spatially correlated channel is characterized by the Kronecker model as \(\mathbf{H}=\mathbf{R}_{\text{r}}^{1/2}\mathbf{U}\mathbf{R}_{\text{t}}^{1/2}\), where \(\mathbf{U}\) is the i.i.d. Rayleigh channel matrix. \(\mathbf{R}_{\text{r}}\) and \(\mathbf{R}_{\text{t}}\), which have exponential elements that correspond to the same spatial correlation coefficient \(\varrho\)[40], represent the correlation matrices at the receiver and transmitter, respectively. To evaluate the error rate performance, the maximum number of transmitted bits is set as \(5\times 10^{7}\). For uncoded systems, the symbol error rate (SER) is used as the performance metric, while for the turbo receiver, the coded bit error rate (BER) and word error rate (WER) with \(N_{\text{b}}\) as the word length are used. For the coded MIMO systems, convolutional codes (CCs) and turbo codes are selected as the channel coding schemes. The CCs have generator polynomial \([133_{\text{o}}~{}171_{\text{o}}]\). Random interleaving and two code rates are adopted: one is code rate \(R_{\text{c}}=1/2\) with word length \(N_{\text{b}}=128\), and the other is \(R_{\text{c}}=5/6\) with \(N_{\text{b}}=800\). The word length is chosen so that the CCs can achieve effective error correction. The turbo codes have a code rate of \(R_{\text{c}}=1/2\) and word length \(N_{\text{b}}=1952\), and the interleaver follows the 3rd Generation Partnership Project Release 17 specification [41]. The channel decoder utilizes the BCJR algorithm [42] with 10 inner iterations. The hyperparameters of the GNN are set as \(N_{\text{h1}}=64,N_{\text{h2}}=32,N_{\text{u}}=8\), and \(L=2\). For the GEPNet detector in uncoded systems, the network is trained with the loss function \(\mathcal{L}_{1}\) in (21) but without the _a priori_ training LLRs. The training and validation sets contain 6,400,000 and 6,000 samples, respectively. Furthermore, during the three-step training scheme described in Sec. III-B2, the training and validation sets for APP-GEPNet in Step 1 have the same sizes as those for GEPNet in uncoded systems. Then, a total of 76,800 extrinsic LLR vectors \(\mathbf{L}_{\text{E1}}\) is generated in Step 2 to train EXT-GEPNet in Step 3. The objective is to minimize the loss function \(\mathcal{L}_{2}\) as described in (24). Although this dataset is relatively small, it significantly reduces the computational cost associated with generating the extrinsic training LLRs in Step 2.
Through experiments, we have observed that this small dataset is sufficient for training the EXT-GEPNet to effectively learn the mapping from the input features to the extrinsic LLRs, resulting in excellent performance. All the considered networks are trained for 5,000 epochs with a batch size of 128. In our experiment, we utilize the Glorot normal initializer [43] to initialize the weights of the network. Notably, the APP-GEPNet trained in Step 1 can also serve as an initialization model for the EXT-GEPNet in Step 3. This initialization strategy can help accelerate the convergence speed of the training procedure. The SNR during training is set as a specific point \(\text{SNR}_{\text{train}}\). Moreover, the optimizer is selected as Adam with a learning rate of 0.001. Unless otherwise specified, we train and test the network under the same modulation scheme, antenna configurations, and channel model. For uncoded MIMO systems, we choose the damping factor for EP and GEPNet-based detectors as \(\beta=0.2\), as suggested in [7]. For coded MIMO systems, we follow the configurations of EP and double EP (DEP), which introduces EP in both the estimation of the posterior and the processing of the channel decoder's feedback to accelerate convergence, as proposed in [16] and [17], respectively. The number of layers in all the evaluated iterative detectors is set as \(T=5\).

### _Performance Analysis of Uncoded MIMO Detectors_

We compare the performance of the uncoded MIMO detectors under i.i.d. Rayleigh channels. Fig. 6 provides the SER performance under 16-QAM modulation with 4 \(\times\) 4 and 16 \(\times\) 16 MIMO configurations. The GEPNet detector and the edge pruning versions are compared with the conventional EP [7], the model-driven DL-based OAMPNet [21], and the optimal ML detectors. Fig. 6 reveals that the gain of GEPNet over EP and OAMPNet is remarkable. Moreover, the effect of edge pruning with different pruning factors is demonstrated. In the comparison, the pruning factor \(\alpha\) is set as 0.5, 1, 2, and 4, respectively. The number after \(\alpha\) of each SER curve of the edge pruning versions in the figure corresponds to the percentage of remaining edges after pruning. The figure demonstrates that the edge pruning versions with \(\alpha=0.5\) and \(1\) outperform the original FC model, thereby revealing the gain brought by appropriately pruning the redundant edges with low correlation weights. However, the performance of the pruned GEPNet degrades significantly when \(\alpha\) further increases, falling behind the original FC model at \(\alpha=2\) and \(\alpha=4\) for the 4\(\times\)4 and 16\(\times\)16 systems, respectively. This result is because some dominant edges are improperly removed when \(\alpha\) is set too high. However, our scheme with \(\alpha=4\) (over 90% of edges pruned) still outperforms EP by a large margin because the network is fine-tuned on the basis of EP, thereby indicating its outstanding ability in balancing performance and complexity.

### _Performance Analysis of Coded MIMO Turbo Receiver_

We consider a \(4\times 4\) coded MIMO system with 16-QAM modulation unless noted otherwise. First, we separately investigate the effect of the three-step training scheme and edge pruning on the proposed turbo receiver under CCs and Rayleigh MIMO channels.
Second, we demonstrate the generalization ability of the proposed receiver by evaluating the performance under various mismatches, including channel coding scheme, channel model, SNR, and antenna configuration mismatches. Finally, we present the robustness of the proposed method against imperfect CSI.

#### IV-C1 Impact of the Three-step Training Scheme

Fig. 7 presents the BER performance comparison under CCs and \(I=2\). Two GEPNet baselines are set to validate the proposed training scheme: The first is the APP-GEPNet, derived in Step 1 of the training scheme, and the second is the GEPNet detector trained in the uncoded systems and deployed in the proposed turbo scheme as shown in Fig. 3. This baseline has the same loss function and _a posteriori_ outputs as the APP-GEPNet but with the _a priori_ training LLRs set to zero, i.e., \(I_{A}=0\) for Step 1 of the training scheme, denoted as GEPNet-IA0 hereafter. Both of these baselines are equipped with (20) to subtract the _a priori_ information from the outputs \(\hat{\mathbf{L}}_{\mathrm{APP}}\), denoted by "w/ (20)" in the figures.7 Footnote 7: We also conducted tests on the baselines that utilize (3) to demap the output PDF of the GNN module as the extrinsic LLRs. However, we did not observe any performance improvement compared to using (20). Therefore, in order to maintain consistency with the proposed scheme in terms of removing priors and deriving extrinsic LLRs, we equipped the baselines with (20) in our simulations.

Fig. 6: SER performance for 16-QAM under Rayleigh MIMO channels.

Fig. 7: BER performance (\(I=2\)) with CCs for a \(4\times 4\) Rayleigh MIMO channel with 16-QAM.

Fig. 7(a) shows the comparison under code rate \(R_{\mathrm{c}}=1/2\) and word length \(N_{\mathrm{b}}=128\). APP-GEPNet performs poorly, which reflects that simply using (20) cannot completely remove the correlation of the priors at the outputs to generate reliable extrinsic LLRs. GEPNet-IA0 outperforms APP-GEPNet, which can be attributed to the limited _a priori_ information coupled during training, and the correlation problem is less severe.
However, EXT-GEPNet still has superiority over the two GEPNet-based baselines, outperforms MMSE-PIC, EP, and DEP, and is only inferior to the high-complexity STS-SD. Fig. 8 provides an intuitive interpretation for the performance in Fig. 7 by analyzing the output LLR distributions of the detector during the training scheme. Fig. 8(a) shows the results under Gaussian _a priori_ LLRs with \(I_{A}=0.8\), emulating the LLRs from the decoder under CCs with \(R_{\rm c}=1/2\). Two steep peaks can be found in the output LLRs of APP-GEPNet even after the explicit subtraction of priors, which likely contain over-optimistic LLR estimations8 with residual priors that lead to the poor performance in Fig. 7(a). Footnote 8: Over-optimistic LLR estimations refer to the phenomenon where the soft outputs of the detector overestimate the reliability, leading to incorrect LLR estimates with erroneously assigned large magnitudes [37, 45]. Step 2 of the training scheme, with output LLRs denoted by "EXT LLR label" in Fig. 8(a), effectively reduces the magnitudes of the two LLR peaks and alleviates the over-estimation issue. Moreover, the output LLRs of EXT-GEPNet closely track the LLR labels derived in Step 2, approaching desirable extrinsic LLRs. We also provide the output LLR distributions of GEPNet-IA0 with (20) as compared to the proposed scheme. The figure reveals that GEPNet-IA0 is less capable of distinguishing bits 0 and 1 than EXT-GEPNet, producing more LLRs with small magnitudes near 0. Fig. 8(b) shows that the difference between the compared distributions is less perceivable when \(I_{A}=0.2\), i.e., limited priors are provided, as compared to \(I_{A}=0.8\), clarifying the competitive performance of APP-GEPNet under \(R_{\rm c}=5/6\). Fig. 9 shows the BER convergence performance across TIs. We consider a \(4\times 4\) Rayleigh MIMO system with CCs (\(R_{\rm c}=1/2\), \(N_{\rm b}=128\)), 16-QAM, and \(\text{SNR}=12\) dB. The figure shows that EXT-GEPNet achieves the fastest convergence speed among the compared schemes. The comparison between EXT-GEPNet and GEPNet-IA0 also confirms that EXT-GEPNet better adapts to different TIs than the original GEPNet without the proposed training scheme. #### Iv-A2 Impact of Edge Pruning In the above analysis of the turbo receiver, all EXT-GEPNets use the FC model without edge pruning. In this subsection, we analyze the impact of Fig. 8: Output LLR distributions of the detectors for a \(4\times 4\) Rayleigh MIMO channel with 16-QAM and \(\text{SNR}=13\) dB. Fig. 9: BER performance across turbo iterations with CCs (\(R_{\rm c}=1/2\), \(N_{\rm b}=128\)) for a \(4\times 4\) Rayleigh MIMO channel with 16-QAM and \(\text{SNR}=12\) dB. the edge pruning method on the balance of performance and complexity. Table I shows the WER performance of EXT-GEPNet-based turbo receivers with different edge pruning factors \(\alpha\) at \(I=2\) and representative SNRs, i.e., \(\text{SNR}=13\) dB for \(R_{\text{c}}=1/2\) and \(\text{SNR}=18\) dB for \(R_{\text{c}}=5/6\). We first focus on the results with \(R_{\text{c}}=1/2\) and \(\text{SNR}=13\) dB. Similar to the results in uncoded systems, the edge pruning versions with \(\alpha=0.5\) and \(1\) do not suffer from performance loss and can even outperform the network without pruning (\(\alpha=0\)). This phenomenon is because the pruning operations can remove the redundant connections of the FC graph and results in a generalizable model less troubled by overfitting. 
Moreover, the performance loss in uncoded BER caused by a large proportion of edges being removed when \(\alpha\) is high (e.g., \(\alpha=2\), where 81.5% of the edges are removed) can be compensated by the strong error correction code. By contrast, the third column of Table I shows a clear trade-off between performance and complexity (the proportion of edges reduced) under CCs with a higher code rate, i.e., lower error correction capability. For example, the edge pruning versions with \(\alpha\) equal to 1, 2, and 4 all experience performance loss compared with the original FC model, with the WER increasing from 4.22e-3 at \(\alpha=0\) to 7.96e-3 at \(\alpha=4\). However, the computational complexity is significantly reduced since approximately 91.8% of the edges are pruned, and the cost of the message encoding (16) on these edges is saved.

#### IV-C3 Under Turbo Codes and Various Channels

In this subsection, we first demonstrate that the proposed scheme can be applied without dependence on the channel code. Fig. 10 provides the BER performance comparison under turbo codes with a code rate of \(R_{\text{c}}=1/2\) and a word length of \(N_{\text{b}}=1952\). We directly use the network with \(\text{SNR}_{\text{train}}=13\) dB from Fig. 7(a) for the comparison under turbo codes and the Rayleigh MIMO channel in Fig. 10(a), without additional training. Fig. 10(a) reveals that EXT-GEPNet generalizes well to the turbo codes and outperforms the other methods by a large margin, thereby indicating the advantage of the open-loop training scheme in not relying on the choice of channel codes. Furthermore, Fig. 10(b) provides the performance comparison under spatially correlated channels to illustrate the robustness of the proposed scheme against channel mismatch. In particular, the EXT-GEPNet trained with the Rayleigh channel from Fig. 7(a) is tested under the correlated channel with a spatial correlation coefficient \(\varrho=0.5\), and the results are marked by "mismatch" in Fig. 10(b). The figure demonstrates that the gap between the "mismatch" network and the network trained and tested both under the correlated channel is negligible, thereby verifying the great robustness of the proposed scheme.

Fig. 10: BER performance (\(I=2\)) with turbo codes (\(R_{\text{c}}=1/2,\ N_{\text{b}}=1952\)) for \(4\times 4\) MIMO channels with 16-QAM.

#### IV-C4 Robustness to SNR and Antenna Configuration

Fig. 11 illustrates the BER performance of EXT-GEPNet under SNR and antenna configuration mismatches. We train an EXT-GEPNet in a \(4\times 4\) MIMO system (\(N=K=8\)) with \(\text{SNR}=13\) dB and evaluate the network's performance in a \(4\times 2\) MIMO system (\(N=8,K=4\)) with varying SNRs. The channel model is the spatially correlated channel with a correlation coefficient \(\varrho=0.5\), which remains consistent during the training and testing phases. The modulation type is 16-QAM, and turbo codes with \(R_{\text{c}}=1/2\) and \(N_{\text{b}}=1952\) are applied for channel coding. The figure reveals that the EXT-GEPNet trained under the mismatched antenna (\(N=K=8\)) and SNR configurations exhibits the best BER performance among the compared schemes, except for the EXT-GEPNet trained under the matched antenna (\(N=8,K=4\)) and SNR configurations. Moreover, the performance gap between the matched and mismatched models is within 0.1 dB, indicating the robustness of the proposed method against antenna configuration and SNR mismatches.

Fig. 11: BER performance (\(I=2\)) of the EXT-GEPNet with antenna configuration and SNR mismatches for a \(4\times 2\) spatially correlated MIMO channel with \(\varrho=0.5\). Turbo codes with \(R_{\mathrm{c}}=1/2\) and \(N_{\mathrm{b}}=1952\) are used. The modulation type is 16-QAM.
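The spatially correlated channels used in these robustness tests can be emulated with a standard Kronecker model. The exponential correlation structure \([R]_{ij}=\varrho^{|i-j|}\) in the sketch below is an assumed choice for illustration, since the text only quotes the correlation coefficient \(\varrho\).

```python
import numpy as np

def exp_corr_matrix(n, rho):
    """Exponential correlation matrix [R]_ij = rho^|i-j| (an assumed model;
    the paper only specifies the correlation coefficient)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def correlated_rayleigh(n_rx, n_tx, rho, rng):
    """Kronecker-model correlated Rayleigh channel: H = R_r^{1/2} H_w R_t^{1/2}."""
    H_w = (rng.standard_normal((n_rx, n_tx)) +
           1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
    # Cholesky factors act as matrix square roots (valid for |rho| < 1).
    L_r = np.linalg.cholesky(exp_corr_matrix(n_rx, rho))
    L_t = np.linalg.cholesky(exp_corr_matrix(n_tx, rho))
    return L_r @ H_w @ L_t.conj().T
```

With \(\varrho=0.5\) this generates test channels in the spirit of the correlated-channel experiments in Fig. 10(b) and Fig. 11, under the stated model assumption.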
#### IV-C5 Robustness to Imperfect CSI

In the above investigation, all receivers were tested under classical channel models with accurate CSI. Next, we further validate the proposed method using the spatially correlated channel model from the fifth-generation new radio (5G-NR) standard [46]. This provides a more realistic evaluation and assesses the algorithm's performance under imperfect CSI to verify its robustness against channel estimation errors. The considered system contains 2 Tx and 4 Rx antennas arranged as uniform linear arrays. The channel correlation level is set to medium correlation A, as specified in the 5G-NR standard [46]. We utilize the LMMSE method [47] with an orthogonal pilot matrix \(\mathbf{X}_{\mathrm{p}}\in\mathbb{C}^{N_{\mathrm{t}}\times N_{\mathrm{p}}}\), composed of \(N_{\mathrm{t}}=2\) columns of the discrete Fourier transform matrix \(\mathbf{F}\in\mathbb{C}^{N_{\mathrm{p}}\times N_{\mathrm{p}}}\), to estimate the complex-valued channel matrix of size \(4\times 2\), where \(N_{\mathrm{p}}\) is the number of pilot vectors in a time slot and is set to 16. Turbo codes with \(R_{\mathrm{c}}=1/2\) and \(N_{\mathrm{b}}=1952\) and 16-QAM modulation are used.

Fig. 12 illustrates the BER performance under perfect and estimated CSI. The neural networks are trained using the 5G-NR channel model and \(\text{SNR}_{\text{train}}=10\) dB. As expected, all the tested algorithms experience a similar performance loss when transitioning from perfect to imperfect CSI, indicating that the accuracy of the CSI significantly influences the detection accuracy. However, EXT-GEPNet consistently outperforms the other tested algorithms regardless of the presence of channel estimation errors, demonstrating its robustness to imperfect CSI and highlighting the effectiveness and resilience of the proposed algorithm.

Fig. 12: BER performance comparison (\(I=2\)) between perfect CSI and estimated CSI using the LMMSE method for a \(4\times 2\) correlated MIMO channel specified by the 5G-NR standard. Turbo codes with \(R_{\mathrm{c}}=1/2\) and \(N_{\mathrm{b}}=1952\) and 16-QAM are used.

### _Computational Complexity Analysis_

In this subsection, we analyze the computational complexity of different turbo receivers, focusing on the MIMO detection complexity since the channel decoding part is the same for all receivers. Table II provides the number of real-valued multiplications (RVMs) and the running time required for detecting one symbol vector. The per-symbol-vector detection complexity is \(C_{\text{det1}}+(I-1)C_{\text{det}}\), where \(C_{\text{det1}}\) denotes the complexity of the first TI without utilizing _a priori_ information, and \(C_{\text{det}}\) is the counterpart with the priors considered. The \(C_{\text{det1}}\) of EXT-GEPNet, denoted as \(C_{\text{GEP1}}\), can be divided into the operations for the GNN and the EP in each layer of the network. The complexity of the GNN operations comprises the costs of the propagation, aggregation, and readout steps. The propagation step is involved in the \(L\) rounds of MP, and each round costs \(\big((2N_{\mathrm{u}}+2)N_{\mathrm{h1}}+N_{\mathrm{h1}}N_{\mathrm{h2}}+N_{\mathrm{h2}}N_{\mathrm{u}}\big)K(K-1)\) RVMs for executing the function \(\mathcal{M}\) in (16) \(K(K-1)\) times in the FC model. We further consider the effect of edge pruning in the complexity expression listed in the table, where the cost of this step is reduced by the scale \(\eta\), i.e., the proportion of edges remaining after pruning.
In addition, the aggregation step is performed \(L\) times and the readout step once per forward inference of the GNN, resulting in costs of \(\big(4N_{\mathrm{u}}+3N_{\mathrm{h1}}+9\big)N_{\mathrm{h1}}KL\) and \((N_{\mathrm{u}}N_{\mathrm{h1}}+N_{\mathrm{h1}}N_{\mathrm{h2}}+N_{\mathrm{h2}}M)K\) RVMs, respectively. The complexity of the EP procedure is dominated by the matrix inversion in (6a), which requires \(K^{3}+K^{2}+2K\) RVMs. Meanwhile, the matrix-vector multiplications involved in (8), (10), (12), and (13) cost another \(2MK+11K\) RVMs [27]. Additionally, the number of layers \(T\) must be accounted for in \(C_{\text{det1}}\) of EXT-GEPNet, EP, and DEP, as shown in Table II, because all of them are iterative methods. The additional operations for \(C_{\text{det}}\) include the incorporation of non-uniform priors for computing the initial pair and the estimated APPs [16]. The corresponding results are listed in Table II.

Subsequently, we present a numerical demonstration of the computational complexity for the \(4\times 4\) MIMO system considered in the turbo receiver simulations. The number of RVMs of the competing schemes in this system is presented in the second-to-last column of Table II. EXT-GEPNet requires more RVMs than EP because of the additional GNN operations. Notably, the GNN operations can be more computationally expensive than the matrix inversion in EP for the considered small \(4\times 4\) system because the chosen hyperparameters of the GNN (\(N_{\mathrm{h1}}=64\) and \(N_{\mathrm{h2}}=32\)) are much larger than the system sizes (\(N=K=8\)). However, the matrix inversion becomes dominant as the system size \(K\) grows, because the complexity of the matrix inversion is on the order of \(\mathcal{O}(K^{3})\), whereas that of the GNN is on the order of \(\mathcal{O}(K^{2})\), as indicated in Table II. Therefore, the complexity ratio between the proposed EXT-GEPNet and EP narrows in large-scale MIMO systems and tends to 1.

Additionally, we analyze the effect of edge pruning on the numerical complexity results. The pruning factor \(\alpha\) of EXT-GEPNet used for calculating the number of RVMs matches that used for the performance evaluation in Table I. The RVM column of Table II shows that a larger \(\alpha\) leads to a smaller \(\eta\) and thus fewer RVMs, as expected. For example, approximately 35%, 41%, and 49% of the RVMs can be saved using a pruning factor \(\alpha\) equal to 0.5, 1, and 2, respectively. Considering the performance under \(R_{\text{c}}=1/2\) given in the second column of Table I, the edge pruning versions with \(\alpha\) equal to 0.5, 1, and 2 reach a more desirable trade-off between performance and complexity than the original EXT-GEPNet without pruning.
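The RVM bookkeeping above is easy to mechanize. The sketch below tallies the per-layer GNN and EP costs from the stated expressions; the helper name, the placeholder values for \(L\), \(T\), \(N_{\mathrm{u}}\), and \(M\), and the way the per-layer terms are summed over \(T\) layers are our own framing rather than the authors' script.

```python
def gnn_ep_rvms(K, M, N_u, N_h1, N_h2, L, T, eta=1.0):
    """Tally RVMs per the expressions above (a sketch of the bookkeeping,
    not the authors' exact accounting). eta is the fraction of edges kept."""
    # Propagation: message function over K(K-1) directed edges, L rounds,
    # with the edge count scaled by eta after pruning.
    prop = ((2 * N_u + 2) * N_h1 + N_h1 * N_h2 + N_h2 * N_u) * K * (K - 1) * L * eta
    # Aggregation (L rounds) and readout (once) per forward inference.
    agg = (4 * N_u + 3 * N_h1 + 9) * N_h1 * K * L
    readout = (N_u * N_h1 + N_h1 * N_h2 + N_h2 * M) * K
    # EP: matrix inversion in (6a) plus matrix-vector products in (8)-(13).
    ep = (K**3 + K**2 + 2 * K) + (2 * M * K + 11 * K)
    return T * (prop + agg + readout + ep)

# 4x4 complex MIMO -> real model sizes N = K = 8, with the hyperparameters
# quoted above (N_h1 = 64, N_h2 = 32); L, T, N_u, M are placeholders here.
full = gnn_ep_rvms(K=8, M=8, N_u=8, N_h1=64, N_h2=32, L=2, T=10)
pruned = gnn_ep_rvms(K=8, M=8, N_u=8, N_h1=64, N_h2=32, L=2, T=10, eta=0.5)
print(f"pruning saves {1 - pruned / full:.1%} of RVMs")
```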
Finally, we compare the complexity of EXT-GEPNet with STS-SD [3]. The number of RVMs for STS-SD is omitted from Table II because there is no analytical complexity expression, and the required operations are mainly sequential search [3], which cannot be evaluated by a multiplication count. Therefore, we perform a running-time test for the comparison under the \(4\times 4\) system configuration. The results reveal that the proposed EXT-GEPNet without pruning runs 2.53 times faster than STS-SD. The improvement of the edge pruning versions in reducing the running time is also remarkable. For example, the edge pruning version with \(\alpha=4\) runs 1.58 and 3.99 times faster than the FC EXT-GEPNet and STS-SD, respectively. Moreover, STS-SD is impractical to realize for large-scale MIMO systems because its complexity is exponential in the system size [4]. In contrast, the proposed scheme has a complexity that grows polynomially with the system size and can make a flexible trade-off between complexity and performance by adjusting the pruning factor \(\alpha\), as well as the hidden layer sizes \(N_{\text{h}1}\) and \(N_{\text{h}2}\) in the GNN. Therefore, EXT-GEPNet can be viewed as a powerful and efficient scheme given the performance indicated in the simulations and the polynomial complexity reported in Table II.

## V Conclusions

We proposed a GNN-enhanced EP algorithm for MIMO turbo receivers. We first developed the soft-input soft-output mechanism for GEPNet and the corresponding turbo receiver structure. We then customized a training scheme to establish the EXT-GEPNet, which inherits the superiority of GEPNet in achieving an augmented _a posteriori_ estimation via the GNN and addresses its limitation of failing to produce reliable extrinsic LLRs. The EXT-GEPNet can be deployed in the developed turbo structure to take full advantage of various prior information and achieve stable turbo receiving, outperforming the approach using the original GEPNet. Furthermore, we developed an edge pruning method to eliminate redundancy in the network, resulting in a significant complexity reduction with negligible performance loss. The complexity analysis and simulation results confirm the efficiency, excellent performance, and adaptability of the proposed scheme.
2305.15811
Unifying gradient regularization for Heterogeneous Graph Neural Networks
Heterogeneous Graph Neural Networks (HGNNs) are a class of powerful deep learning methods widely used to learn representations of heterogeneous graphs. Despite the fast development of HGNNs, they still face some challenges such as over-smoothing and non-robustness. Previous studies have shown that these problems can be reduced by using gradient regularization methods. However, the existing gradient regularization methods focus on either graph topology or node features. There is no universal approach to integrate these features, which severely affects the efficiency of regularization. In addition, the inclusion of gradient regularization into HGNNs sometimes leads to some problems, such as an unstable training process, increased complexity and insufficient coverage of regularized information. Furthermore, there is still a lack of a complete theoretical analysis of the effects of gradient regularization on HGNNs. In this paper, we propose a novel gradient regularization method called Grug, which iteratively applies regularization to the gradients generated by both propagated messages and the node features during the message-passing process. Grug provides a unified framework integrating graph topology and node features, based on which we conduct a detailed theoretical analysis of their effectiveness. Specifically, the theoretical analyses elaborate the advantages of Grug: 1) Decreasing sample variance during the training process (Stability); 2) Enhancing the generalization of the model (Universality); 3) Reducing the complexity of the model (Simplicity); 4) Improving the integrity and diversity of graph information utilization (Diversity). As a result, Grug has the potential to surpass the theoretical upper bounds set by DropMessage (AAAI-23 Distinguished Papers). In addition, we evaluate Grug on five public real-world datasets with two downstream tasks...
Xiao Yang, Xuejiao Zhao, Zhiqi Shen
2023-05-25T07:47:42Z
http://arxiv.org/abs/2305.15811v2
# Unifying gradient regularization for Heterogeneous Graph Neural Networks

###### Abstract

Heterogeneous Graph Neural Networks (HGNNs) are a class of powerful deep learning methods widely used to learn representations of heterogeneous graphs. Despite the fast development of HGNNs, they still face some challenges such as _over-smoothing_ and _non-robustness_. Previous studies have shown that these problems can be reduced by using gradient regularization methods. However, the existing gradient regularization methods focus on either graph topology or node features. There is no universal approach to integrate these features, which severely affects the efficiency of regularization. In addition, the inclusion of gradient regularization into HGNNs sometimes leads to some problems, such as an unstable training process, increased complexity and insufficient coverage of regularized information. Furthermore, there is still a lack of a complete theoretical analysis of the effects of gradient regularization on HGNNs. In this paper, we propose a novel gradient regularization method called _Grug_, which iteratively applies regularization to the gradients generated by both propagated messages and the node features during the message-passing process. _Grug_ provides a unified framework integrating graph topology and node features, based on which we conduct a detailed theoretical analysis of their effectiveness. Specifically, the theoretical analyses elaborate the advantages of _Grug_: 1) Decreasing sample variance during the training process **(Stability)**; 2) Enhancing the generalization of the model **(Universality)**; 3) Reducing the complexity of the model **(Simplicity)**; 4) Improving the integrity and diversity of graph information utilization **(Diversity)**. As a result, _Grug_ has the potential to surpass the theoretical upper bounds set by DropMessage.1 In addition, we evaluate _Grug_ on five public real-world datasets with two downstream tasks. The experimental results show that _Grug_ significantly improves performance and effectiveness, and reduces the challenges mentioned above. Our code is available at: [https://github.com/YX-cloud/Grug](https://github.com/YX-cloud/Grug).

Footnote 1: AAAI-23 Distinguished Papers

## 1 Introduction

Graph neural networks (GNNs) [1; 2] have emerged as powerful architectures for learning and analyzing graph representations. Therefore, in recent years, researchers have actively explored their potential in Heterogeneous Graph Neural Networks (HGNNs) [3; 4; 5]. Heterogeneous graphs consist of multiple types of nodes and edges with different side information. To tackle the challenge of heterogeneity, various HGNNs [6; 7] have been proposed in recent years, such as HGT [8], HAN [9], fastGTN [10], SimpleHGNN [11], MAGNN [12], RSHN [13] and HetSANN [14]. These HGNNs perform well in a variety of downstream tasks, including node classification, link prediction, and recommendation [15; 16]. Despite the rapid development of HGNNs, they often face problems such as _over-smoothing_ and _non-robustness_ [17], which seriously affect the efficiency of downstream tasks. Heterogeneous graphs (HGs) are more ubiquitous than homogeneous graphs in real-world scenarios, but they are also more difficult to represent, because the diversity of their data structures limits the generalization of HGNNs and reduces their robustness.
Moreover, because each node aggregates messages from its neighbors in every layer during message-passing, HGNNs tend to blur the features of nodes, which is called over-smoothing. Some studies have proved that HGNNs are a special kind of regularization, and the _over-smoothing_ and _non-robustness_ problems mentioned above can be effectively alleviated by applying additional regularization to the HGNNs [17; 18]. Gradient regularization is one of the most popular regularization methods for alleviating these problems. It integrates various regularizations into one gradient through different strategies. Many widely used regularization methods have greatly improved the performance of HGNNs, such as random dropping methods [19; 20; 21; 22], Laplacian regularization methods [23; 24] and adversarial training methods [25; 26; 27; 28]. In this paper, we prove theoretically that these methods are essentially special forms of gradient regularization. Although gradient regularization methods have been widely applied to HGNNs, there are still some open questions that need to be addressed. First, the existing gradient regularization methods only perform regularization on one of the following three information dimensions, namely node, edge and propagation message. These methods do not fully explore which dimension is the optimal solution. Second, the inclusion of gradient regularization into HGNNs sometimes leads to some additional problems, such as an unstable training process and parameter convergence difficulties caused by multiple-solution problems. Lastly, there is still a lack of a complete theoretical analysis of the effects of gradient regularization on HGNNs. In this paper, we propose a novel gradient regularization method called _Grug_, which can be applied to all message-passing HGNNs. _Grug_ provides a unified framework integrating graph topology and node features by iteratively applying regularization to the gradients generated by both the propagated messages and the node features during the message-passing process. We conduct a detailed theoretical analysis of the effectiveness of _Grug_. These theoretical analyses elaborate the advantages of _Grug_ in 4 dimensions: 1) _Grug_ makes the training process more stable by realizing controllable variance of the input samples **(Stability)**; 2) _Grug_ unifies a wider range of regularizations to improve the performance of HGNNs **(Universality)**; 3) _Grug_ ensures the continuity of the gradient and the uniqueness of the model solution during gradient regularization, which alleviates the parameter convergence difficulty caused by traditional multiple-solution models **(Simplicity)**; 4) _Grug_ can provide more diverse information to the model and improve the integrity and diversity of graph information utilization **(Diversity)**. Thus, _Grug_ has the potential to surpass DropMessage [22]. In addition, we evaluate _Grug_ on five public real-world datasets with two downstream tasks. Our main contributions are summarized as follows:

* We propose a novel gradient regularization method for all message-passing HGNNs named _Grug_. _Grug_ can unify the existing gradient regularizations on HGNNs into one framework by resetting the distribution (dropping or perturbation) under certain rules on the feature matrix and message matrix. In other words, the existing methods can be regarded as special forms of _Grug_.
* We provide a series of comprehensive and detailed theoretical analyses covering the stability, universality, simplicity and diversity of gradient regularization methods, which sufficiently explain "why" and "how" _Grug_ has the potential to surpass the theoretical upper bounds set by DropMessage [22].
* We conduct sufficient experiments for different downstream tasks, and the results show that _Grug_ not only surpasses the state-of-the-art (SOTA) performance, but also clearly alleviates over-smoothing and non-robustness compared to the existing gradient regularization methods.

## 2 Related Work

In general, random dropping methods and adversarial training methods can be regarded as forms of gradient regularization [29; 30; 31]. As a representative of random dropping, Dropout [19; 32] has been proven to be effective in many fields [33], with ample theoretical analysis [34]. Besides, applying random dropping to HGNNs can achieve effective performance, such as DropNode [21] and DropEdge [20], which randomly drop nodes and edges during the training process, respectively. The strongest random dropping method is DropMessage [22], which applies random dropping to propagated messages. Adversarial training was proposed as a method of defense against attacks [35]; it generates new adversarial samples according to the gradient and adds them to the training set. On this basis, a stronger adversarial attack model, PGD [36], has become the mainstream model. In fact, adversarial training has been shown to be closely related to regularization [30; 31]. Recently, adversarial training has also been applied to HGNNs [25; 26; 27; 37]. Specifically, FLAG [28] establishes node-level adversarial training to improve graph representations and achieves satisfactory results. These existing gradient regularization methods can be used to relieve over-smoothing and over-fitting problems [22; 28; 21], but all of them only perform regularization on one of the following three information dimensions, namely node, edge and propagation message. They do not fully explore which dimension is the optimal solution [25; 27; 37]. Furthermore, we find that the inclusion of gradient regularization into HGNNs sometimes leads to additional problems, such as an unstable training process and parameter convergence difficulties caused by multiple-solution problems. All the problems of the existing methods are alleviated by the method proposed in this paper.

## 3 Notations and Preliminaries

**Notations.** For simplicity of analysis, we assume that the heterogeneous graph contains 2 relations, \(r\) and \(r^{\sim}\). **HG** = (**V**,**E**,**R**) represents the heterogeneous graph, where **V** = {\(v_{1}...v_{p}\)} denotes \(p\) heterogeneous nodes, and **E** = {\(e_{1}...e_{q}\)} denotes \(q\) heterogeneous edges. **R** = {\(r\), \(r^{\sim}\)} represents the relations in the heterogeneous graph. The node features can be denoted as a matrix \(F=\{x_{1}...x_{p}\}\in R^{p\times d}\), where \(x_{i}\) is the feature vector of node \(v_{i}\), and \(d\) is the dimension of the node features. The edges and relations describe the relations between nodes and can be denoted as an adjacency matrix \(A=\{a_{1}...a_{p}\}\subseteq R^{p\times p}\), where \(a_{i}\) denotes the \(i\)-th row of the adjacency matrix, and \(A(i,j)\) denotes the relation between node \(v_{i}\) and node \(v_{j}\).
When we apply message-passing HGNNs on **HG**, the message matrix can be represented as \(M=\{m_{1}...m_{k}\}\subseteq R^{k\times d^{\prime}}\), where \(m_{i}\) is a message propagated between nodes, \(k\) is the number of messages propagated on the heterogeneous graph and \(d^{\prime}\) is the dimension of the messages.

**Message-passing HGNNs.** Most existing HGNN models adopt a message-passing framework, where each node sends and receives messages to/from its neighbors based on their relations. During the propagation process, node representations are updated using both the original node features and the received messages from neighbors. This process can be expressed as follows:

\[h_{i}^{(l+1)}=\sigma^{(l)}(h_{i}^{(l)},AGG_{j\in N(i)}\phi^{(l)}(h_{i}^{(l)},h_{j}^{(l)},e_{j,i})) \tag{1}\]

where \(h_{i}^{(l+1)}\) denotes the representation of node \(v_{i}\) in the \((l+1)\)-th layer, and \(N(i)\) is the set of nodes adjacent to node \(v_{i}\); \(e_{j,i}\) represents the relation from node \(v_{j}\) to node \(v_{i}\); \(AGG\) denotes the aggregation operation; and \(\sigma^{(l)}\) and \(\phi^{(l)}\) are differentiable functions. Besides, we can gather all propagated messages into a message matrix \(M\subseteq R^{k\times d^{\prime}}\), which can be expressed as below:

\[M_{i}^{(l+1)}=\phi(h_{i}^{(l)},h_{j}^{(l)},e_{j,i})\]

where \(\phi\) denotes the message mapping function, and the number of rows \(k\) of the message matrix \(M\) equals the number of directed edges in the graph.

## 4 Our Approach

In this section, we introduce our newly proposed method, _Grug_, which can be applied to all message-passing HGNNs. We first introduce the details of _Grug_, and further prove that the most common gradient regularization methods in HGNNs, such as random dropping and adversarial perturbations, can be unified into our framework. Then, we present the theoretical evidence of the effectiveness of these methods. Furthermore, we analyze the superiority of _Grug_ and provide a series of theoretical proofs from the aspects of stability, universality, simplicity and diversity.

### _Grug_

#### Algorithm description.

We apply a single RGCN as the backbone model, which can be formulated as \(f(F,M,W)=FMW\), where \(F\in R^{p\times d}\) denotes the feature matrix, \(M\) indicates the message matrix and \(W\) denotes the transformation matrix. When we use cross-entropy as the loss function, the objective function can be expressed as follows:

\[L(f(F,M,W),y)=-\sum_{i=0}^{K-1}y_{i}\log\left(softmax(f(F,M,W))_{i}\right) \tag{2}\]

where \(K\) is the number of classes and \(y_{i}\) is the \(i\)-th entry of the one-hot label vector. Different from existing gradient regularization methods, _Grug_ performs gradient regularization on both the feature matrix \(F\) and the message matrix \(M\) instead of either one of them; the process of _Grug_ is illustrated in Figure 1. For each element \(F_{i,j}\) in \(F\) and \(M_{i,j}\) in \(M\), we generate trained additional elements \(\gamma^{i,j}\) and \(\delta^{i,j}\) to perturb the original distribution, initialized from the uniform distributions \(\gamma^{i,j}\sim Uniform(-\alpha,\alpha)\) and \(\delta^{i,j}\sim Uniform(-\beta,\beta)\). After that, we obtain the perturbed \(F\) and \(M\) by adding each element to its trained additional element.
Besides, during training, \(\gamma^{i,j}\) and \(\delta^{i,j}\) are updated according to the gradient of the previous epoch, which can be expressed as follows:

\[\delta_{t}=\delta_{t-1}+\beta\cdot\frac{\partial_{\delta}L(f(F,M+\delta_{t-1},W),y)}{||\partial_{\delta}L(f(F,M+\delta_{t-1},W),y)||} \tag{3}\]

#### Unifying gradient regularization methods.

In this section, we show that existing gradient regularization methods can be integrated into our _Grug_. Different from these methods, _Grug_ performs regularization rules on both the feature matrix and the message matrix.

Figure 1: Illustrations of _Grug_. Considering the messages propagated by the center node (i.e., Node 1), _Grug_ perturbs the feature matrix and the message matrix, which implements the regularization of gradients on both matrices.

First, we demonstrate that Dropout [34], DropEdge [20], DropNode [21], DropMessage [22] and FLAG [28] can all be formulated as norm regularization processes with specific dimensions. More importantly, we find that the existing methods are special cases of _Grug_ in terms of norm dimensions. In summary, _Grug_ can be expressed as a unified framework of gradient regularization methods on HGNNs.

**Theorem 1**.: _Dropout, DropEdge, DropNode, DropMessage and FLAG can be formulated as specific norm regularization processes, and Grug is the generalization of these methods._

We show the regularization term of each method below.

_Dropout._ Dropping the elements \(F_{i,j}=\eta F_{i,j}\), where \(\eta\sim Bernoulli(\mu)\), is equivalent to performing the regularization term \(\frac{1}{2}\cdot||\partial_{F_{\eta}}L||_{0}\) on the message matrix.

_DropEdge._ Dropping the elements \(A_{i,j}=\epsilon A_{i,j}\), where \(\epsilon\sim Bernoulli(\mu)\), is equivalent to performing the regularization term \(\frac{\partial_{A}M}{2}\cdot||\partial_{M_{\epsilon}}L||_{0}\) on the message matrix.

_DropNode._ Dropping the elements \(F_{i}=\zeta F_{i}\), where \(\zeta\sim Bernoulli(\mu)\), is equivalent to performing the regularization term \(\frac{1}{2}\cdot||\partial_{F_{\zeta}}L||_{0}\) on the feature matrix.

_DropMessage._ Dropping the elements \(M_{i,j}=\tau M_{i,j}\), where \(\tau\sim Bernoulli(\mu)\), is equivalent to performing the regularization term \(\frac{1}{2}\cdot||\partial_{M_{\tau}}L||_{0}\) on the message matrix.

_FLAG._ Perturbing the feature matrix \(F+\gamma\) is equivalent to performing the regularization term \(\varepsilon_{t}||\partial_{F}L||_{q_{t}}\) on the feature matrix.

_Grug._ Perturbing both the feature matrix \(F+\gamma\) and the message matrix \(M+\delta\) is equivalent to performing the regularization term \(\varepsilon_{t}\cdot||\partial_{F}L||_{q_{t}}+\xi_{t}\cdot||\partial_{M}L||_{h_{t}}\) on the feature matrix and the message matrix, where \(\gamma\) and \(\delta\) are the trained perturbations of \(F\) and \(M\), defined as follows:

\[\begin{split}\gamma_{t}&=\gamma_{t-1}+\alpha\cdot\frac{\partial_{\gamma}L(f(F+\gamma_{t-1},M,W),y)}{||\partial_{\gamma}L(f(F+\gamma_{t-1},M,W),y)||}\\ \delta_{t}&=\delta_{t-1}+\beta\cdot\frac{\partial_{\delta}L(f(F,M+\delta_{t-1},W),y)}{||\partial_{\delta}L(f(F,M+\delta_{t-1},W),y)||}\end{split} \tag{4}\]

\(||\cdot||_{p_{t}}\) and \(||\cdot||_{l_{t}}\) are the dual norms of \(||\cdot||_{q_{t}}\) and \(||\cdot||_{h_{t}}\), defined as follows:

\[\begin{split}||z||_{q}&=\sup\{z^{T}x:\ ||x||_{p}\leq 1\}\quad and\quad\frac{1}{p}+\frac{1}{q}=1\\ ||z||_{h}&=\sup\{z^{T}x:\ ||x||_{l}\leq 1\}\quad and\quad\frac{1}{h}+\frac{1}{l}=1\end{split} \tag{5}\]
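To make the training procedure concrete, the following is a minimal PyTorch sketch of one _Grug_ epoch under the convention used here (\(\gamma\) perturbs \(F\) with step \(\alpha\), \(\delta\) perturbs \(M\) with step \(\beta\)). The `model(F, M)` interface, the tensor shapes, and the persistence of the perturbations across epochs are assumptions of ours, not the authors' released code.

```python
import torch

def init_perturbations(F, M, alpha, beta):
    """Initial perturbations drawn from the uniform ranges given in the text."""
    gamma = torch.empty_like(F).uniform_(-alpha, alpha)
    delta = torch.empty_like(M).uniform_(-beta, beta)
    return gamma, delta

def grug_epoch(model, F, M, y, loss_fn, gamma, delta, alpha, beta, opt):
    """One Grug training epoch (a sketch under the stated assumptions).
    gamma perturbs the feature matrix, delta the message matrix."""
    gamma = gamma.detach().requires_grad_(True)
    delta = delta.detach().requires_grad_(True)

    loss = loss_fn(model(F + gamma, M + delta), y)
    opt.zero_grad()
    loss.backward()

    # Normalized-gradient ascent steps from Eqs. (3)-(4): each perturbation
    # moves along its own gradient with step size alpha / beta.
    with torch.no_grad():
        new_gamma = gamma + alpha * gamma.grad / (gamma.grad.norm() + 1e-12)
        new_delta = delta + beta * delta.grad / (delta.grad.norm() + 1e-12)

    opt.step()  # the model weights W descend the perturbed loss
    return loss.item(), new_gamma, new_delta
```

The perturbations returned by one epoch are fed into the next, matching the epoch-wise update rule in Eq. (4).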
We find that the regularization dimension of _Grug_ ranges from 0 to \(\infty\), while for _Dropout_, _DropEdge_, _DropNode_ and _DropMessage_ the regularization dimension is fixed at 0. This demonstrates that these four methods are special cases of _Grug_. Compared to FLAG, _Grug_ can perform norm regularization of different dimensions on the feature matrix and the message matrix respectively, instead of maintaining a single consistent dimension. A thorough proof can be found in Appendix A.1.

### Advantages of _Grug_

In this section, we present four further theoretical analyses to elaborate the benefits of _Grug_ from the aspects of stability (decreasing sample variance during the training process), universality (enhancing the generalization of the model), simplicity (reducing the complexity of the model), and diversity (improving the integrity and diversity of graph information utilization).

**1) Stability.** Existing gradient regularization methods face the problem of sample stability, which is caused by the data noise introduced by different operations, such as dropping and perturbation. Data noise brings instability into the training process. Generally, the sample variance \(V\) can be used to measure the degree of stability: a small sample variance means that the training process will be more stable.

**Theorem 2**.: _Grug can adjust the sample variance to achieve a stable training process._

Intuitively, _Grug_ achieves regularization by adding uniform perturbations to the input samples rather than dropping graph data, which means that the sample variance of the input in each epoch is irrelevant to the graph data. In other words, _Grug_ can tune the hyperparameters \(\alpha\) and \(\beta\) to limit the range of the sample variance. By reducing the sample variance, _Grug_ diminishes the differences in the feature matrix and message matrix across epochs, which stabilizes the training process. The specific range inference can be found in Appendix A.1.

**2) Universality.**

**Theorem 3**.: _Grug performs regularization on the feature matrix and the message matrix, which is the optimal general solution compared to existing regularization methods._

As summarized in Section 2, all existing gradient regularization methods only perform regularization on one of the following three information dimensions: node, edge and propagation message. In order to explore which dimension is the optimal solution, we first need to know the relations between these three information dimensions, which can be expressed as follows:

Regularization on the message dimension:
\[\partial_{M}L=\partial_{F}L\cdot\partial_{M}F\quad and\quad\partial_{M}L=\partial_{A}L\cdot\partial_{M}A\]

Regularization on the edge dimension:
\[\partial_{A}L=\partial_{M}L\cdot\partial_{A}M\quad and\quad\partial_{A}L=\partial_{F}L\cdot\partial_{M}F\cdot\partial_{A}M\]

Regularization on the node dimension:
\[\partial_{F}L=\partial_{M}L\cdot\partial_{F}M\quad and\quad\partial_{F}L=\partial_{A}L\cdot\partial_{M}A\cdot\partial_{F}M\]

The above expressions show that gradient regularization on each information dimension is transferred to the other two information dimensions. Therefore, regularization on any one information dimension is theoretically equivalent to regularization on all three information dimensions. However, we find that these transfers are not in real-time. Take regularization on the node dimension as an example. In epoch \(e_{i}\), the gradient of \(F\) can be calculated from \(L\), which we express as \(\partial_{F}L_{i}\). There is no gradient of \(F\) generated from \(M\), because the model only keeps the gradient from the loss.
However, \(\partial_{F}M\) must exist objectively, because \(M\) can be seen as a function involving \(F\) and \(A\). So we can infer that in epoch \(e_{i}\), \(\partial_{F}M\) is correlated with the previous epoch \(e_{i-1}\). This non-real-time gradient transfer phenomenon is the same for the edge dimension and the message dimension. We therefore conclude that regularization on only one information dimension cannot apply regularization in real-time to the other dimensions, so gradient regularization on all three information dimensions simultaneously is a better strategy than gradient regularization on a single information dimension. In practice, however, we find that compared with the node and message dimensions, performing gradient regularization on the edge dimension is not only memory-consuming but also reduces the training speed sharply. On the other hand, according to the loss function (2), the node dimension and message dimension, rather than the edge dimension, have the greatest impact on the model loss. Furthermore, performing regularization on the node dimension and message dimension partially covers the edge dimension as well. The results of additional experiments show that there is no significant difference between performing gradient regularization on the edge dimension or not. Therefore, _Grug_ performing regularization on the message and node dimensions is the optimal strategy. This universality is the key for _Grug_ to surpass the theoretical upper bounds set by DropMessage, and detailed proofs are presented in Appendix A.1.

**3) Simplicity.**

**Theorem 4**.: _Message-passing HGNNs using Grug retain one unique solution, which keeps the parameters easy to train._

According to Theorem 1 in Section 4.1, models using dropping operations on the information dimensions, and _Grug_, can be uniformly expressed as:

\[\begin{split} Drop:&\quad\min\ ||\partial_{X}L||_{0}\quad and\quad X=F,M,A\\ Grug:&\quad\min\ \varepsilon_{t}\cdot||\partial_{F}L||_{q_{t}}+\xi_{t}\cdot||\partial_{M}L||_{h_{t}}\end{split} \tag{6}\]

Here \(||\cdot||_{0}\) represents the \(\ell_{0}\) norm. However, optimizing the \(\ell_{0}\) norm is NP-hard (non-deterministic polynomial-time hard) [38] and non-convex [39]. Thus, it is hard to find the solution of functions related to dropping, which makes it hard to run at a satisfactory speed.
In formula (6), \(q_{t}\) and \(h_{t}\) are hyperparameters that we can control through \(\alpha\) and \(\beta\), so our _Grug_ uses the \(\ell_{1}\) norm or the \(\ell_{2}\) norm. In theory, the \(\ell_{1}\) norm [38] and the \(\ell_{2}\) norm [40] are both convex and admit one unique solution. Compared to Dropout, DropNode, DropEdge and DropMessage, _Grug_ therefore does not escalate the difficulty of training from the perspective of convexity. The detailed proof is exhibited in Appendix A.1.

**4) Diversity.**

**Theorem 5**.: _Grug is able to maximize the utilization of diverse information in heterogeneous graphs._

Utilization of diverse information involves two aspects: 1) obtaining more diverse information; 2) keeping the original information diversity. Obtaining more diverse information is measured by the total number of sample distributions generated during the training process. Suppose the total number of elements in the message matrix is \(Z\). For dropping methods, the number of new data distributions generated by DropMessage can be expressed as \(C_{Z}^{\mu Z}\), where \(\mu\) is the dropping rate from Theorem 1 and \(C\) denotes the combination operator. Our _Grug_ can continue to generate new data distributions until the end of training. Keeping the original information diversity is measured by the total number of preserved feature dimensions. In this regard, DropMessage has been proven to perform best among the dropping operations. However, _Grug_ rarely loses initial information, because the perturbations \(\gamma\) and \(\delta\) gradually decrease in value as the training progresses, so _Grug_ may lose information only in the initial stage of training. From the perspective of these two aspects, _Grug_ performs better than the existing methods. Although _Grug_ may lose some information like other methods, it generates an immense number of new sample distributions, thus maximizing the utilization of diverse information. More details of the derivation are shown in Appendix A.1 and supplementary experiments can be found in Appendix A.3.

## 5 Experiments

### Experimental Setup

In this section, we present the results of our experiments on five real-world datasets for node classification and link prediction tasks. Our experiments aim to answer the following questions: 1) How does _Grug_ compare to other gradient regularization methods on HGNNs in terms of performance? 2) Can _Grug_ improve the robustness of HGNNs? 3) Does _Grug_ perform better than other methods in addressing over-smoothing problems?

**Datasets.** We employ five commonly used heterogeneous graph datasets in our experiments: _ACM_, _DBLP_, _IMDB_, _Amazon_ and _LastFM_.

_Node Classification Datasets._ Labels in the node classification datasets are split into 20% for training, 10% for validation and 70% for testing in each dataset. Note that the node classification datasets are the same versions used in GTN [41].

* **ACM** is a paper citation network, containing 3 types of nodes (papers (P), authors (A), subjects (S)) and 4 types of edges (PS, SP, PA, AP).
* **DBLP** is a bibliography website of computer science. It contains 3 types of nodes (papers (P), authors (A), conferences (C)) and 4 types of edges (PA, AP, PC, CP).
* **IMDB** is a website about movies and related information. It contains 3 types of nodes (movies (M), actors (A), and directors (D)), and the labels are the genres of the movies.

_Link Prediction Datasets._ Edges in the link prediction datasets are split into 25% for training, 5% for validation and 60% for testing in each dataset.
Note that the link prediction datasets are subsets pre-processed by the HGB benchmark [11].

* **Amazon** is an online purchasing platform; the dataset consists of electronics products with co-viewing and co-purchasing links between them.
* **LastFM** is an online music website. It contains 3 types of nodes (artist, user, tag) and 3 types of edges (artist-tag, user-artist, user-user).

**Baseline methods.** We compare our proposed _Grug_ with existing gradient regularization methods, including Dropout (2012) [19], DropNode (2020) [21], DropMessage (2022) [22], and FLAG (2020) [28]. We adopt these methods on the backbone models and compare their performances on different datasets.

**Backbone models.** In order to compare the gradient regularization methods fairly, we choose basic HGNN models to ensure that the experimental results are not affected by differences between the models. We consider two mainstream HGNNs as our backbone models: RGCN [4] and RGAT [5]. We implement the gradient regularization methods mentioned above on these backbone models.

### Comparison Results

Table 1 and Table 2 summarize the overall results. For the node classification tasks, we employ the Micro F1 score and Macro F1 score as metrics on 3 datasets (ACM, DBLP, IMDB). For the link prediction tasks, the performance is measured by AUC-ROC on Amazon and LastFM. Considering the space limitation, some additional experiments are presented in Appendix A.3. Overall, _Grug_ performs well in all scenarios (node classification and link prediction), indicating its stability in diverse situations. For the node classification tasks, we have 72 settings, each of which is a combination of backbone model, dataset, method and metric (e.g., RGCN-ACM-Macro F1). Table 1 shows that _Grug_ achieves the optimal results in all 72 settings. As for the 24 settings under the link prediction task, _Grug_ achieves the optimal results in 23 settings and sub-optimal results in 1 setting. Besides, _Grug_ performs more stably across all datasets and tasks than the other methods. Taking Dropout as a counterexample: it shows strong performance on the DBLP dataset but a clear decrease on the ACM and IMDB datasets. In terms of tasks, the performance of Dropout on link prediction is far worse than that on node classification. Dropout on node classification shows an average accuracy decline of 2.97% compared to _Grug_; this gap widens to 7.69% on link prediction. A reasonable explanation is that the information dimensions operated on by distinct methods vary from each other, as shown in Section 4.2.
With the generalization of its operation on the node and message dimensions, _DropMessage_ obtains a smaller inductive bias. Therefore, _Grug_ is more applicable for most scenarios.

Table 1: Comparison results of different gradient regularization methods for node classification. The best results are in bold.

| Backbone | Method | ACM Micro F1 | ACM Macro F1 | DBLP Micro F1 | DBLP Macro F1 | IMDB Micro F1 | IMDB Macro F1 |
|---|---|---|---|---|---|---|---|
| RGCN | Clean [4] | 90.36±0.60 | 90.44±0.59 | 93.67±0.15 | 92.77±0.16 | 56.05±1.84 | 55.43±1.64 |
| RGCN | Dropout [19] | 90.86±0.39 | 90.91±0.40 | 94.35±0.14 | 93.53±0.14 | 57.63±1.01 | 56.04±0.79 |
| RGCN | DropNode [21] | 90.68±0.75 | 90.74±0.77 | 93.58±0.71 | 93.12±0.71 | 57.38±0.37 | 56.20±0.50 |
| RGCN | DropMessage [22] | 91.57±0.19 | 91.57±0.21 | 94.69±0.13 | 93.96±0.13 | 57.60±0.84 | 56.48±0.49 |
| RGCN | FLAG [28] | 92.43±0.37 | 92.58±0.34 | 94.22±0.20 | 93.99±0.24 | 56.74±0.33 | 55.63±0.40 |
| RGCN | _Grug_ | **93.27±0.13** | **93.33±0.13** | **95.08±0.08** | **94.17±0.11** | **59.92±0.56** | **58.87±0.71** |
| RGAT | Clean [4] | 89.32±0.13 | 89.34±0.13 | 92.98±0.61 | 92.15±0.63 | 53.12±0.80 | 27.55±0.78 |
| RGAT | Dropout [19] | 89.38±0.81 | 89.51±0.76 | 93.55±0.48 | 92.69±0.50 | 54.85±0.31 | 53.63±0.26 |
| RGAT | DropNode [21] | 89.86±0.67 | 89.98±0.60 | 93.21±0.67 | 92.42±0.72 | 53.86±1.56 | 52.80±1.60 |
| RGAT | DropMessage [22] | 90.57±0.62 | 90.57±0.62 | 93.89±0.35 | 93.01±0.39 | 56.48±0.76 | 55.29±0.78 |
| RGAT | FLAG [28] | 91.46±0.59 | 91.53±0.58 | 93.38±0.45 | 92.42±0.35 | 56.09±0.68 | 54.47±0.66 |
| RGAT | _Grug_ | **92.84±0.18** | **92.92±0.19** | **94.21±0.21** | **93.50±0.22** | **57.79±0.37** | **56.17±0.18** |

Table 2: Comparison results of different gradient regularization methods for link prediction (AUC-ROC). The best results are in bold.

| Backbone | Method | Amazon | LastFM |
|---|---|---|---|
| RGCN | Clean [4] | 70.54±0.00 | 56.66±5.92 |
| RGCN | Dropout [19] | 72.83±5.17 | 56.95±2.37 |
| RGCN | DropNode [21] | 70.81±3.85 | 56.97±3.86 |
| RGCN | DropMessage [22] | **81.80±0.57** | 58.01±4.65 |
| RGCN | FLAG [28] | 78.87±0.81 | 56.96±3.42 |
| RGCN | _Grug_ | 80.93±0.67 | **60.37±1.24** |
| RGAT | Clean [4] | 81.98±2.36 | 78.09±3.97 |
| RGAT | Dropout [19] | 81.74±6.72 | 78.94±3.43 |
| RGAT | DropNode [21] | 83.36±0.77 | 78.10±2.85 |
| RGAT | DropMessage [22] | 85.16±0.35 | 78.50±0.52 |
| RGAT | FLAG [28] | 85.59±0.25 | 78.66±1.86 |
| RGAT | _Grug_ | **86.84±0.26** | **80.39±2.24** |
### Additional Results

**Over-smoothing analysis.** In this section, we investigate the issue of over-smoothing in HGNNs, which occurs when node representations become indistinguishable as the depth of the network increases, leading to poor results. To address this problem, we compare the effectiveness of different gradient regularization methods, taking into account the information diversity discussed in Section 4.2. As mentioned in Section 4.2, since _Grug_ obtains more diverse information during the training process, it is expected to withstand over-smoothing better in theory. To verify this, we conduct experiments on two datasets, ACM and IMDB, using RGCN as the backbone model. As Figure 2 suggests, all gradient regularization methods can alleviate over-smoothing, and _Grug_ outperforms the other methods due to its more diverse information. Our proposed _Grug_ exhibits consistent superiority in performance across all layers (from layer 1 to layer 7), as shown in Figure 2.

**Robustness analysis.** We evaluate the robustness of different gradient regularization methods by measuring their ability to deal with perturbed heterogeneous graphs. In more detail, we randomly add a certain ratio of edges to the ACM and DBLP datasets and address the node classification task. As Figure 3 shows, _Grug_ has positive effects, and its results drop the least when the perturbation rate increases from 0.1 to 0.4. Besides, our proposed _Grug_ shows its stability and outperforms the other gradient regularization methods in noisy situations.

Figure 2: Over-Smoothing Analysis.

Figure 3: Model Performances Against Attacks.

## 6 Conclusion

In this paper, we propose _Grug_, a general and effective gradient regularization method for HGNNs. We first unify existing gradient regularization methods into our framework by performing regularization according to the gradients on the feature matrix and message matrix. As _Grug_ operates meticulously on the feature and message matrices, it is a generalization of existing methods and shows greater applicability in general cases. Besides, we provide theoretical analyses of the superiority (stability, universality, simplicity and diversity) and effectiveness of _Grug_. By conducting sufficient experiments for multiple tasks on five datasets, we show that _Grug_ surpasses the SOTA performance and clearly alleviates over-smoothing and non-robustness compared to the traditional gradient regularization methods.
2306.16684
Decomposing spiking neural networks with Graphical Neural Activity Threads
A satisfactory understanding of information processing in spiking neural networks requires appropriate computational abstractions of neural activity. Traditionally, the neural population state vector has been the most common abstraction applied to spiking neural networks, but this requires artificially partitioning time into bins that are not obviously relevant to the network itself. We introduce a distinct set of techniques for analyzing spiking neural networks that decomposes neural activity into multiple, disjoint, parallel threads of activity. We construct these threads by estimating the degree of causal relatedness between pairs of spikes, then use these estimates to construct a directed acyclic graph that traces how the network activity evolves through individual spikes. We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time, which we call Graphical Neural Activity Threads (GNATs). We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity, a hallmark of compositionality. The picture of spiking neural networks provided by our GNAT analysis points to new abstractions for spiking neural computation that are naturally adapted to the spatiotemporally distributed dynamics of spiking neural networks.
Bradley H. Theilman, Felix Wang, Fred Rothganger, James B. Aimone
2023-06-29T05:10:11Z
http://arxiv.org/abs/2306.16684v1
# Decomposing spiking neural networks with Graphical Neural Activity Threads ###### Abstract A satisfactory understanding of information processing in spiking neural networks requires appropriate computational abstractions of neural activity. Traditionally, the neural population state vector has been the most common abstraction applied to spiking neural networks, but this requires artificially partitioning time into bins that are not obviously relevant to the network itself. We introduce a distinct set of techniques for analyzing spiking neural networks that decomposes neural activity into multiple, disjoint, parallel threads of activity. We construct these threads by estimating the degree of causal relatedness between pairs of spikes, then use these estimates to construct a directed acyclic graph that traces how the network activity evolves through individual spikes. We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time, which we call Graphical Neural Activity Threads (GNATs). We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity, a hallmark of compositionality. The picture of spiking neural networks provided by our GNAT analysis points to new abstractions for spiking neural computation that are naturally adapted to the spatiotemporally distributed dynamics of spiking neural networks. ## 1 Introduction The robustness, flexibility, and efficiency of natural intelligence is thought to emerge from the unique computational characteristics of the brain's highly recurrent and dynamical spiking neural networks. Despite an explosion of neuroscientific data at all organizational levels, our understanding of spiking neural computation is tentative. Both neuroscientists and researchers in neural computing seek the right abstractions. A theoretical framework is required to interpret the myriad of patterns of spiking activity generated by spiking neural networks and how these patterns relate to neural computations. In particular, spiking neural networks lack a satisfactory theory of compositionality, namely, how complex computations are built from simpler parts. Defining a useful abstraction for spiking neural computation requires specifying the meaningful relations between spikes that support these computations. Abstractions emerge from equivalences, and specifying a meaningful relation also specifies an appropriate notion of equivalence - for example, the abstraction of binary bits in conventional computer architectures partitions the states of the physical system into the classes "1" and "0". Most computational abstractions of recurrent spiking neural networks depend on the concept of a population state vector. In these approaches, simultaneous population activity forms a distributed representation of computational variables as a vector in a high-dimensional space [11]. The action of the spiking network corresponds to a dynamical system evolving the state vector to the desired result of the computation. The population state vector interpretation requires assuming that every neuron agrees that simultaneity with respect to the time bins is the meaningful relation between spikes that defines neural computations. If conduction delays between neurons are exactly equal or negligible, this may be valid. 
However, biological evidence suggests that delays and other asynchronous attributes of spiking networks are critical for computation [5]. Regardless, the validity of reducing neural activity to a sequence of state vectors is an empirical question [2]. Another aspect of spiking networks is temporal variability. Spiking neural networks rarely respond to identical input with temporally identical spike trains, even though they are biophysically capable of such precision [9]. Within the state vector abstraction, this means that the computation must be expressed probabilistically, because the state vectors rarely trace identical sequences through time. Mounting evidence suggests that some neural variability is due to invariant spike patterns being flexibly warped in time [13] or to the unknown mechanisms behind representational drift [10]. Attributing neural variability to stochastic noise may overlook invariant patterns in less observer-centric descriptions. In other words, neural activity that looks variable may be exquisitely precise with respect to a different point of view [9]. Overall, the assumptions imposed by current abstractions of spiking networks are in tension with the observed properties of biological networks. This motivates a search for alternative descriptions of spiking neural networks that are more naturally adapted to spiking dynamics and can serve as a foundation for computational abstractions. In this work, we introduce an alternative approach to decomposing spiking neural network activity that avoids many of these assumptions. Our analyses reveal that neural activity intrinsically partitions itself into disjoint causal threads of activity, which we refer to as Graphical Neural Activity Threads (GNATs). We analyze the GNATs that emerge from a spiking neural network simulation receiving both random and patterned stimulation through externally-imposed spikes. We also provide a technique for identifying analogous GNATs that reoccur at different times in our simulations. The GNATs display many properties that suggest they are a useful computational abstraction for spiking neural networks, such as spatiotemporal parallelism (more than one GNAT can exist in the same space or time) and compositionality (GNATs are built from smaller threads that are flexibly reused and rearranged).

## 2 Related Work

Our definition of GNATs is related to the concept of polychronization [8]. Polychronization is the observation that the simultaneous arrival of presynaptic spikes after heterogeneous axonal conduction delays leads to the formation of polychronous groups, or groups of spikes that repeat with precise temporal relationships. However, GNATs subsume polychronous groups and are more general. Furthermore, our algorithm for extracting analogous threads is more efficient than the original brute-force algorithms for extracting polychronous groups. GNATs also relate to the concept of cell assemblies. Cell assemblies are "transiently active ensembles of neurons" [3] thought to support numerous neural computations. Traditionally, cell assemblies are defined by near-simultaneously active cells. GNATs generalize cell assemblies to transiently active collections of neurons arranged flexibly in time, bound together by the causal relations that define spiking dynamics, instead of temporal proximity with respect to an external clock.

## 3 Graphical Neural Activity Threads

The causal action of presynaptic spikes on postsynaptic neurons is the fundamental interaction that defines spiking network dynamics.
Thus, information processing in spiking networks must be supported on the causal relations between individual spikes. Synaptic interactions between neurons define a mathematical relation between spikes, and directed graphs usefully describe the structure of these relations. We seek to construct a graph out of spiking neural activity that captures these causal relations by defining a quantity that estimates the degree of causal relatedness between spikes. When an excitatory spike arrives at a postsynaptic neuron, its synaptic action brings the postsynaptic neuron closer to the spiking threshold by raising its membrane potential. The synaptic weight determines the magnitude of this increase, so the degree of causal relatedness depends directly on the weight. Postsynaptic potentials are not instantaneous, but decay in time. Likewise, the causal action of a presynaptic spike on a postsynaptic neuron lingers beyond the arrival of the presynaptic spike, and can interact with other presynaptic spikes to ultimately push the postsynaptic neuron beyond the spiking threshold. This means any quantitative estimate of the causal influence of presynaptic spikes should include this temporally decaying influence. Figure 1: Graphical neural activity threads. a) (top) Spike raster from 4000 excitatory cells in a geometrically-structured network freely evolving with spike-timing dependent plasticity for 10 minutes followed by 5 minutes of evolution with fixed synaptic weights. (bottom) The same spike train superimposed with directed edges representing strong causal relations between pairs of spikes. Edges and spikes are colored according to their membership in disjoint connected components. Vertical lines indicate 20ms time bins. b) Illustration of the causal relation computation. c) Distribution of \(-\log\Omega\) for all pairs of spikes from a 5 minute recording. d) Same as c, but for shuffled spike trains. The peak is eliminated, indicating \(\Omega\) is capturing significant synaptic interactions in the network. e) Duration distribution for GNATs from the simulation depicted in a. We define a quantity, \(\Omega\), that captures these intuitions about the causal action of spikes in a single real value associated to a pair of spikes: \[\Omega(t_{\alpha},t_{\beta})=\frac{W_{\alpha\beta}}{||W||}\theta[t_{\alpha}-t_{\beta}-\delta_{\alpha\beta}]e^{\frac{-(t_{\alpha}-t_{\beta}-\delta_{\alpha\beta})}{\tau}} \tag{1}\] Here, \(t_{\alpha}\) and \(t_{\beta}\) are the times of the postsynaptic and presynaptic spikes, respectively. \(W_{\alpha\beta}\) is the synaptic weight from neuron \(\beta\) to neuron \(\alpha\), and \(||W||\) is the norm of all synaptic weights onto neuron \(\alpha\). \(\delta_{\alpha\beta}\) is the conduction delay from neuron \(\beta\) to neuron \(\alpha\). \(\theta\) represents the Heaviside step function, ensuring that \(\Omega\) is nonzero only for spike pairs that arise from synaptically connected neurons and are separated by at least one axonal conduction delay. The exponential term captures the decay of causal influence as the temporal separation of the spikes increases, with \(\tau\) a free parameter that determines the rate of decrease. We define a directed acyclic graph we call the _activity graph_ from spike trains by computing \(\Omega\) for each spike pair from excitatory neurons. Spike pairs with large \(\Omega\) correspond to a significant causal relation between the spikes. 
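As a reference point, Eq. (1) translates directly into code; a minimal sketch (variable names are ours, and times, delays, and \(\tau\) are assumed to share the same units):

```python
import numpy as np

def omega(t_post, t_pre, weight, weight_norm, delay, tau=0.005):
    """Causal relatedness of a spike pair, Eq. (1).

    t_post, t_pre : postsynaptic and presynaptic spike times
    weight        : synaptic weight W_ab from the pre- to the postsynaptic neuron
    weight_norm   : norm ||W|| of all synaptic weights onto the postsynaptic neuron
    delay         : axonal conduction delay between the two neurons
    tau           : decay time constant of causal influence (5 ms in the text)
    """
    dt = t_post - t_pre - delay
    if dt < 0:  # Heaviside step: no causal relation before one conduction delay
        return 0.0
    return (weight / weight_norm) * np.exp(-dt / tau)
```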
We include a directed edge from a presynaptic spike to a postsynaptic spike in the activity graph if \(\Omega\) exceeds a threshold, explained below. Because of the exponential decay, we can limit our computation to spike pairs that are separated in time by less than a few time constants, significantly improving the efficiency of our construction on large spike trains. Fig. 1a, top, shows a spike train from 4000 excitatory neurons over two seconds in a simulated spiking network. Fig. 1a, bottom, shows the activity graph constructed from the spikes above. This directed acyclic graph decomposes into disjoint weakly connected components, which we illustrate by coloring each disjoint component separately. A weakly connected component in a directed graph is a set of vertices that are maximally connected when the direction of the edges is forgotten. We refer to the disjoint weakly connected components of the activity graph as Graphical Neural Activity Threads (GNATs). We found that the negative logarithm of \(\Omega\) displays a strong peak at large \(\Omega\) values (Fig. 1c, \(\tau\) = 5ms), that disappears for shuffled spike trains (Fig. 1d), indicating \(\Omega\) successfully quantifies information transmission by the causal action of presynaptic spikes on postsynaptic neurons. This property also defines a natural threshold for defining the edges in our activity graph: we choose our threshold to correspond to the transition between the peak and flat parts of the distribution of \(-\log\Omega\), here, approximately 5. If \(\tau\) changes in the definition of \(\Omega\), then the peak moves to a new position, but the shape of the distribution stays the same. Changing the threshold to match the new peak in the new distribution ensures that the edges included are independent of the specific values of \(\tau\) and the threshold. Thus, our definition of the activity graph corresponds to the intrinsic activity of the network. The GNATs that emerge from spiking network activity show how individual spikes contribute to the global network dynamics. Interestingly, we found that spikes belonging to the same time bin often belong to causally-disjoint threads. If the causal action of any two spikes eventually converges, those spikes will belong to the same GNAT, by definition. Therefore, these disjoint spikes do not jointly contribute to downstream effects. Thus, such spikes are likely computationally independent (though, disjoint spikes may interact through inhibition). We also observed many isolated spikes, likely due to the random and patterned stimulation (see section 7), that do not causally contribute to any GNAT. These spikes are likely computationally irrelevant, but this is not obvious without the perspective provided by the GNATs. ## 4 Finding Analogous Threads To ground a computational interpretation of the GNATs, we must define how to compare GNATs, and in particular, identify when similar GNATs reoccur. Indeed, identifying recurring neural sequences is a major research direction in contemporary neuroscience and is fundamental to current theories of memory (e.g. hippocampal replay [4]) and spatiotemporally structured natural behaviors (e.g. birdsong [6]). Typically, recurring neural sequences are identified through comparing absolute spike times, which are defined by an external clock. Time warping studies have shown that neural activity that looks unstructured with respect to an external clock nevertheless contains significant structure that exists on flexible timescales across trials [13]. 
Because an external clock has no intrinsic relevance to the neural circuit, it is unreasonable to assume that repeat sequences must occur with respect to absolute spike times. We construct an alternative definition of neural sequences using the intrinsic causal relations of the spiking activity captured by the GNATs. Because the relevant relation between spikes is causal rather than temporal, we define repeat neural sequences as repeated isomorphic subgraphs of the causal activity graph. Two isomorphic subgraphs identify spikes produced through the same synaptic interactions. For arbitrary graphs, the modular product [1] is a graph product \(G\times H\) defined for two graphs \(G=(V_{G},E_{G})\) and \(H=(V_{H},E_{H})\) as the graph with vertex set \[V_{G\times H}=V_{G}\times V_{H} \tag{2}\] and edge set \[E_{G\times H}=\{(u,v)\rightarrow(u^{\prime},v^{\prime})\,|\,[(u,u^{\prime}) \in E_{G}\wedge(v,v^{\prime})\in E_{H}]\vee[(u,u^{\prime})\notin E_{G}\wedge(v,v^{\prime})\notin E_{H}]\} \tag{3}\] In other words, the vertex set of \(V_{G\times H}\) is the cartesian product of \(V_{G}\) and \(V_{H}\), and the edge set is the exclusive NOR of \(E_{G}\) and \(E_{H}\). Importantly, cliques in the modular product correspond to isomorphic induced subgraphs between \(G\) and \(H\)[1]. Thus, if we could construct the modular product of our neural activity graphs, cliques would identify isomorphic causal neural sequences. Enumerating cliques in graphs is hard in general, and the modular product is never sparse, so this approach for finding isomorphic neural activity subthreads is computationally intractable. Instead, we define a weaker modular product that is efficiently computable but still extracts causally-similar neural sequences. Figure 2: Extracting analogous subthreads. a) Given two activity graphs, the second order activity graph is built out of the Cartesian product of spikes. Connected components in the second-order graph correspond to analogous subthreads. b) Example of an analogous subthread extracted from a 5-minute simulation of a 5000-neuron spiking network. Spikes with analogous causal relations reappear at different times in the simulation with slightly different timing relationships. c) Another example of an analogous subthread from the same simulation. Given two activity graphs \(A_{i}\) and \(A_{j}\), we define a second order activity graph as follows. Let \(A_{i,n}\) correspond to the set of spikes from neuron \(n\) in activity graph \(i\). The vertex set of the second order activity graph associated to activity graphs \(A_{i}\) and \(A_{j}\) is given by \[V_{A_{i}\times A_{j}}=\bigcup_{n}A_{i,n}\times A_{j,n} \tag{4}\] and edge set given by \[E_{A_{i}\times A_{j}}=\left\{(u_{a},u_{b})\rightarrow(v_{a},v_{b})\,|\,(u_{a},v_{a})\in E_{A_{i}}\wedge(u_{b},v_{b})\in E_{A_{j}}\right\} \tag{5}\] Restated, the second order vertices are pairs of spikes from individual cells, and second order edges correspond to pairs of edges that both appear in the first order activity graph. Thus, a second order edge corresponds to the reappearance of a causal edge in the first order activity graph. With this definition, repeat causal neural sequences correspond to weakly connected components in the second order activity graph (Fig. 2a). These weakly connected components do not always correspond to exactly isomorphic subgraphs, so to avoid confusion, we refer to these second-order connected components as _analogous subthreads_. 
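A brute-force reference implementation of Eqs. (4) and (5) might look as follows (using networkx for convenience; the nested edge loop is quadratic in the number of causal edges, which is exactly the cost the spatial-indexing construction described next avoids):

```python
import itertools
import networkx as nx

def second_order_activity_graph(A_i, A_j, spikes_by_neuron_i, spikes_by_neuron_j):
    """Second-order activity graph of Eqs. (4)-(5); brute-force sketch.

    A_i, A_j : first-order activity graphs (nx.DiGraph, nodes are spikes)
    spikes_by_neuron_* : dict mapping a neuron id to the list of its spike nodes
    Returns the analogous subthreads as weakly connected components.
    """
    G2 = nx.DiGraph()
    # Vertices (Eq. 4): pairs of spikes emitted by the same neuron
    for n, spikes_i in spikes_by_neuron_i.items():
        for a, b in itertools.product(spikes_i, spikes_by_neuron_j.get(n, [])):
            G2.add_node((a, b))
    # Edges (Eq. 5): pairs of causal edges present in both first-order graphs
    for u_a, v_a in A_i.edges:
        for u_b, v_b in A_j.edges:
            if G2.has_node((u_a, u_b)) and G2.has_node((v_a, v_b)):
                G2.add_edge((u_a, u_b), (v_a, v_b))
    return [G2.subgraph(c).copy() for c in nx.weakly_connected_components(G2)]
```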
Because of the locality in space and time of causal edges, we can efficiently construct the second order activity graph by partitioning the Cartesian product of each neuron's spikes using a quadtree data structure, and using the efficient spatial indexing in quadtrees to quickly find putatively connected pairs of spikes in the second order activity graph. We applied our extraction algorithm to the spike trains generated by our example network (Sec. 7), and found significant numbers of analogous subthread pairs (n=857919 in a 5-minute simulation). Most of these pairs consisted of less than 15 spikes, but we extracted \(\sim 2000\) analogous thread pairs containing 15 or more spikes. Depending on the simulation, analogous thread pairs contained a maximum of 90 to several hundred spikes. Figs. 2b and 2c show example analogous thread pairs found by our algorithm. The analogous threads in this example contain similar causal relations between spikes, even though the absolute timing relationships between these patterns are not preserved. Figure 3: Relationships between GNATs induced by analogous subthreads. a) The same analogous subthreads as Fig. 2c, shown embedded in their maximal GNATs. Each analogous subthread links two disjoint threads. b) Multigraph showing the relation between threads induced by the largest 2000 analogous subthreads. Each vertex corresponds to a maximal GNAT (i.e. as depicted by distinct colors in Fig. 1a), and edges correspond to an analogous subthread. This graph of GNAT relations shows strong clustering, naturally partitioning the GNATs into classes. The graph is embedded with a spring layout with edge weights given by the number of spikes shared between subthreads. Parallel edges in the multigraph are not shown for clarity. ## 5 Relations between GNATs Analogous subthreads are embedded in larger, maximally connected GNATs by definition. Fig. 3a shows the pair of analogous subthreads from Fig. 2c embedded in their larger contexts. In this way, individual GNATs can be thought of as composed of subthreads that can reappear in distinct GNATs. To understand the compositional structure of GNATs induced by analogous subthreads, we constructed a multigraph with disjoint, maximally-connected GNATs (Fig. 1a) as vertices and edges between pairs of GNATs corresponding to the analogous subthreads (Fig. 2b,c) that appear in both threads of the pair. Fig. 3b shows an example of this GNAT composition graph including the 2000 largest analogous subthreads found in a 5-minute simulation. We found that individual GNATs strongly cluster into distinct classes based on their shared subthreads. We extracted these classes of GNATs and plotted the temporal extent of each member of each class on top of the original spike train. Fig. 4a shows the individual GNAT classes and their assigned colors. Figs. 4b and 4c show the temporal extent of each GNAT colored according to its class membership over a five minute simulation. Fig. 4d shows the temporal extent of GNATs belonging to each class over four trials of repeated stimulation according to a fixed input spike pattern. Like-colored intervals indicate the temporal extent of GNATs embedded in the spiking activity belonging to the same class, meaning they share analogous subthreads. The network's response to the spike pattern on each trial shows regularity and flexibility in the appearance of GNATs of each class. While this response is variable, it is composed of activity threads built from analogous causal sequences. 
This result highlights the benefits of the GNAT approach in capturing the challenging balance between flexibility and rigidity in spiking networks. Figure 4: GNATs reveal compositionality of spiking responses. a) Individual GNAT classes extracted from the graph in Fig. 3b. b,c) Intervals during the 5-minute simulation corresponding to GNAT classes from a. The sequence of GNAT classes shows regularity and flexibility simultaneously. d) Intervals corresponding to GNAT classes overlaid on spiking responses to a fixed input pattern. The spiking response is highly variable trial to trial, but GNATs embedded within each response are composed of analogous causal sequences not apparent from an inspection of solely the temporal relations between spikes. ## 6 Conclusions We have shown that spiking neural activity naturally decomposes into disjoint, discrete threads. These threads are composed of causal sequences that can be reused and flexibly recombined, and we have provided an algorithm that extracts analogous subthreads from larger spike trains. As such, GNATs are promising candidates for elementary computations in spiking networks. They exhibit properties such as spatial and temporal parallelism, key intuitions of spiking neural computation inherited from neuroscience that have so far resisted a rigorous definition. Our definition of GNATs avoids many of the artificial assumptions fundamental to traditional abstractions of spiking neural computation. In many approaches, time is partitioned into bins according to the clock of an external observer. Simultaneous neural activity in each bin defines a population state vector that represents the computational variables, and the spiking computation is given by the sequence of state vectors (Fig. 5a). In this scheme, the state-to-state transition is computed in parallel, but the effective computation is still a serial sequence of state vectors. GNATs are fully asynchronous - they require no time bins, and more than one GNAT can exist over any time interval. Furthermore, state vector schemes assume that spikes belonging to the same bin belong to the same computation. Figure 5: Computational abstractions for spiking neural networks from GNATs. a) Typical spiking analyses require partitioning time into bins. Simultaneous spikes in each bin form a distributed representation of computational variables in an effectively serial computation. b) Similar spike time patterns could emerge from distinct underlying causal processes. These distinctions are not captured through the state vectors but are captured by GNATs incorporating spike times and network connectivity in a single structure. c) Efferent and afferent spikes in spiking networks interact with ongoing activity threads to determine the overall computation. GNATs allow for tracing the causal history of each spike from output back to input, providing a detailed account of spiking computations. Our GNAT analysis reveals that spikes belonging to the same time bin may belong to causally-disjoint sequences of activity (Fig. 1a, bottom). If two spikes belonged to the same computation, then their downstream causal effects should eventually converge, meaning that they would belong to the same GNAT under our definition. Our GNAT analysis combines the temporal structure of spike trains with the connectivity structure of the neural network in a single object - the activity graph. Most spiking analyses work without connectivity information, but the connectivity is what gives meaning to spiking activity. Fig. 
5b shows two temporally-similar spike trains. The two similar spike trains could emerge from distinct networks. In each case, the underlying synaptic interactions determining the temporal dynamics are distinct, illustrated by the distinct partitions of the top and bottom spike trains. These distinctions would be transparent to a population state vector analysis, but are intrinsic to the definition of the GNATs. The picture of spiking computation that emerges from the GNAT point of view is depicted in Fig. 5c. A recurrent spiking network sends and receives spikes to and from its external environment. These afferent/efferent spikes interact with ongoing activity threads to specify the computation. Our GNAT analysis allows for tracing the causal history of every spike, illustrating in detail how spiking networks transform afference into efference. Despite the appealing properties of GNATs we observed, we have not connected GNATs to a specific neural computation. To do so will require a thorough understanding of GNATs and their interactions. Due to the combinatorially large number of possible patterns of activity produced by spiking networks, this is a challenging problem. However, our initial results indicate that there is significant structure to the pattern of GNATs that emerge from a spiking network (Fig. 3b), so there is likely to be a rich theory underlying GNAT dynamics. Furthermore, the connectivity of a network places strong constraints on possible threads, so a satisfactory theory of GNATs will further our understanding of the relation between spiking network structure and dynamics, an important research direction in neural computing. \(\Omega\) is strongly related to STDP facilitation kernels, which suggests interpreting edges in the causal activity graph as potential synaptic plasticity events. This directly links network activity and network connectivity: the connectivity determines potential GNATs, and the actualized GNATs determine the evolution of the connectivity. Future work will explore this connection in a learning context. ## 7 Simulation details We simulated spiking networks using the STACS spiking network simulator [12]. Networks had 4000 excitatory neurons (Izhikevich RS) and 1000 inhibitory neurons (Izhikevich FS) [7]. Neurons were randomly distributed on a \(100\mu m\times 100\mu m\) periodic rectangle (a torus), and connectivity was randomly initialized according to a probability that varied with the physical distance between neurons. \[p(r)=P_{\text{MAX}}\left(1-\frac{1}{1+\exp(-\sigma(r-\mu))}\right) \tag{6}\] E to E,I connections used \(P_{\text{MAX}}=0.4\), \(\mu=10\), \(\sigma=1\) and I to E connections used \(P_{\text{MAX}}=0.5\), \(\mu=10\), \(\sigma=1\). Excitatory conduction delays were chosen uniformly randomly in the range \([1ms,20ms]\) and inhibitory delays were fixed at \(1ms\). The networks evolved for 10 simulated minutes and weights were allowed to change according to an STDP rule [8]. Then, the weights were fixed and the network evolved for 5 minutes under the same stimulation conditions. GNATs were computed from this 5 minute simulation. Networks were driven with Poisson spikes applied to each neuron independently at a rate of 0.4 Hz. On top of this random stimulation, a randomly chosen but fixed spike pattern was applied to 100 randomly chosen neurons throughout the plastic and fixed phases of the simulation. 
The spike pattern consisted of random spikes at a rate of 2 Hz distributed over a 5 second interval, followed by a 5 second period of silence. Thus, networks experienced 60 repetitions of the pattern during the plastic phase and 30 repetitions during the fixed phase. ## Acknowledgments and Disclosure of Funding This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) as part of the Collaborative Research in Computational Neuroscience Program. This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan). This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND2023-05685O
2301.12444
Exploring Attention Map Reuse for Efficient Transformer Neural Networks
Transformer-based deep neural networks have achieved great success in various sequence applications due to their powerful ability to model long-range dependency. The key module of Transformer is self-attention (SA) which extracts features from the entire sequence regardless of the distance between positions. Although SA helps Transformer perform particularly well on long-range tasks, SA requires quadratic computation and memory complexity in the input sequence length. Recently, attention map reuse, which groups multiple SA layers to share one attention map, has been proposed and achieved significant speedup for speech recognition models. In this paper, we provide a comprehensive study on attention map reuse focusing on its ability to accelerate inference. We compare the method with other SA compression techniques and conduct a breakdown analysis of its advantages for a long sequence. We demonstrate the effectiveness of attention map reuse by measuring the latency on both CPU and GPU platforms.
Kyuhong Shim, Jungwook Choi, Wonyong Sung
2023-01-29T13:38:45Z
http://arxiv.org/abs/2301.12444v1
# Exploring Attention Map Reuse for Efficient Transformer Neural Networks ###### Abstract Transformer-based deep neural networks have achieved great success in various sequence applications due to their powerful ability to model long-range dependency. The key module of Transformer is self-attention (SA) which extracts features from the entire sequence regardless of the distance between positions. Although SA helps Transformer perform particularly well on long-range tasks, SA requires quadratic computation and memory complexity in the input sequence length. Recently, attention map reuse, which groups multiple SA layers to share one attention map, has been proposed and achieved significant speedup for speech recognition models. In this paper, we provide a comprehensive study on attention map reuse focusing on its ability to accelerate inference. We compare the method with other SA compression techniques and conduct a breakdown analysis of its advantages for a long sequence. We demonstrate the effectiveness of attention map reuse by measuring the latency on both CPU and GPU platforms. Keywords: efficient transformer, attention map reuse, self attention, speech recognition ## 1 Introduction The ability to learn long-range dependency is essential for various sequence processing tasks such as language modeling, machine translation, text summarizing, question answering, and speech recognition. Deep neural networks (DNNs) have achieved great success in these complex sequence tasks over traditional handcrafted and rule-based techniques. DNN architectures can be characterized by how the feature extraction mechanism incorporates past or future information. For example, recurrent neural networks (RNNs) such as LSTM [10] encode the entire previous sequence into a single feature vector, which is beneficial for a compact implementation. However, it causes a loss of long-range information since the feature representation is restricted to a single vector. In contrast, Transformer [24] models directly access the entire sequence, therefore they are much more advantageous for long-range dependency modeling. Transformer models have demonstrated excellent performance over RNNs and become the universal choice for most sequence processing applications. However, Transformer models suffer from quadratic computation and memory complexity to calculate the relationship between every pair of locations. More precisely, the self-attention (SA) module, one of two submodules in Transformer, utilizes an attention mechanism to process a sequence of length \(T\) at once. The attention mechanism computes the correlation between length \(T\) features by the matrix multiplication of the feature matrix (\(T\times d\)) and its transposed form (\(d\times T\)), where \(d\) is the feature vector dimension. This \(O(T^{2})\) quadratic hardware cost is especially problematic when deploying Transformer in practice, particularly when the input sequence length is very long. For example, in speech recognition and language modeling, the length of the input sequence is very long and long-range dependency is crucial for accurate prediction. Language models usually take about 256 to 512 previous words as input to predict the next word, and speech recognition models often process many more frames (750 frames for 30 seconds) to transcribe the given utterance. 
Although these two tasks are core building blocks of many applications, RNN models are still practically favorable because of the heavy computational cost of the Transformer models for resource-limited devices such as mobile and embedded systems. Considering that a Transformer model is composed of multiple Transformer layers, and each layer consists of multiple (attention) heads that exploit different all-to-all relationships, the complexity increases proportionally to the number of SA heads (\(H\)) and SA layers (\(L\)). Various architectural modifications have been proposed to reduce the heavy computation of SA, where the studies can be mainly categorized into two groups [23]. The first group (Section 2.2) focuses on the output of the attention mechanism, called the attention map \(A\in\mathbb{R}^{T\times T}\), whereas the second group (Section 2.3) focuses on reducing the number of \(L\) and \(H\). The first group includes 1) computing only a few elements (sparsely) based on patterns or importance [2, 5], and 2) approximating \(A\) by low-rank factorization, clustering, or kernelization [27, 19, 6]. However, the aforementioned methods cannot utilize the full capability of modern parallel processing hardware such as graphics processing units (GPUs), digital signal processors (DSPs), and neural processing units (NPUs) [18, 13], due to their unstructured computation savings. For example, selectively computing a few important elements may produce a random-like access pattern that depends on the input. Clustering-based methods also require additional K-means or locality-sensitive hashing to group similar elements for a more accurate approximation. On the other hand, the second group reduces the effective number of attention map computations by pruning out SA heads or SA layers. These approaches are much more hardware-friendly because the removal of a large computation block (e.g., attention head, attention layer) is very structured and predictable. Recently, **attention map reuse** has been proposed for various applications, such as language modeling [30], machine translation [28], and speech recognition [22]. The key idea of the method is to reuse the attention map of the \(\ell\)-th layer, \(A^{\ell}\), for multiple consecutive layers, \((\ell+1)\)-th to \((\ell+M)\)-th layer, therefore reducing the effective number of SA computations from \(L\) to \(L/M\). This architectural change is highly structured and easy to implement on modern hardware platforms. In particular, for speech recognition, we showed that attention map reuse can be adopted without much degradation of recognition performance [22]. The paper discovered that the reason behind the success of this method is that SA blocks in successive layers perform a similar role for speech recognition and can be merged. However, the analysis of attention map reuse was mainly focused on these behaviors; not much discussion was provided on how the speedup is achieved in terms of the actual implementation. In this paper, we provide a deeper understanding of attention map reuse in the case of speech recognition. In Section 2, we briefly introduce the Transformer and SA architecture and compare previous SA compression methods with attention map reuse. In Section 3, we analyze the effect of each component of a Transformer model for speech recognition. In Section 4, we evaluate the latency savings of attention map reuse on various configurations and platforms. 
The results confirm that attention map reuse is a promising inference speedup technique for Transformer-based long-range sequence processing. ## 2 Background and Related Work ### Transformer and Self Attention We briefly introduce the components of a Transformer layer. A Transformer layer is composed of two submodules: 1) multi-head self-attention (MHSA) and 2) feed-forward (FF). Figure 1 illustrates the overall architecture, and Figure 2 visualizes the internal structure of the MHSA submodule. Figure 1: Illustration of the Transformer-based model which is a stack of \(L\) Transformer layers. For speech recognition, input is human speech and output is a transcribed sentence. Note that every frame is processed together without recurrence. Let the input be a sequence of \(T\) tokens3. For the input \(X=\{x_{1},x_{2},...x_{T}\}\) where \(X\in\mathbb{R}^{T\times d}\), SA for the \(h\)-th head (total \(H\) heads) starts with three linear projections as: Footnote 3: We exploit the term 'token' as a common concept for both natural language processing and speech recognition. Each token represents a (sub-)word feature and a speech frame feature, respectively. \[Q_{h},K_{h},V_{h}=XW_{Q_{h},K_{h},V_{h}}+b_{Q_{h},K_{h},V_{h}}\quad(Q_{h},K_{h},V_{h}\in\mathbb{R}^{T\times d_{h}}) \tag{1}\] where \(Q,K,V\) indicate the query, key, and value, and \(W\in\mathbb{R}^{d\times d_{h}},b\in\mathbb{R}^{1\times d_{h}}\) are weight and bias parameters. \(d_{h}=d/H\) is the feature dimension for each attention head. The attention map \(A_{h}\) for the \(h\)-th head is then computed as a scaled dot-product of query and key \[A_{h}=\text{Softmax}(\frac{Q_{h}K_{h}^{T}}{\sqrt{d_{h}}}).\quad(A_{h}\in\mathbb{R}^{T\times T}) \tag{2}\] After the softmax operation, each row of \(A_{h}\) becomes a probability distribution of length \(T\). Intuitively, the \((i,j)\)-th element of the attention map represents how much the \(j\)-th token contributes to the \(i\)-th token. Then, the outputs of each attention head are concatenated and followed by another linear projection: \[\text{SA}_{h}(X)=A_{h}V_{h} \tag{3}\] \[\text{MHSA}(X)=\text{Concat}(\text{SA}_{1},...\text{SA}_{H})W_{O}+b_{O}. \tag{4}\] By exploiting multiple attention heads, Transformer can extract diverse relationships between tokens in a single layer. For example, one head may focus on syntactic connections while another head focuses on specific words. Figure 2: Illustration of the MHSA submodule. The computation flow is identical for each attention head. The FF submodule is a stack of two linear projections with an intermediate non-linear activation function: \[\mathrm{FF}(X)=\big{(}\phi(XW_{1}+b_{1})\big{)}W_{2}+b_{2} \tag{5}\] where \(W_{1}\in\mathbb{R}^{d\times 4d}\), \(W_{2}\in\mathbb{R}^{4d\times d}\), \(b_{1}\in\mathbb{R}^{4d}\) and \(b_{2}\in\mathbb{R}^{d}\) are parameters, and function \(\phi\) can be ReLU, Swish, GELU, etc. Finally, the output of the \(\ell\)-th Transformer layer is formulated as below \[Z^{\ell} =\mathrm{LN}(\mathrm{MHSA}(X^{\ell})+X^{\ell}) \tag{6}\] \[X^{\ell+1} =\mathrm{LN}(\mathrm{FF}(Z^{\ell})+Z^{\ell}). \tag{7}\] where \(Z\) is the intermediate term and LN indicates the layer normalization [1]. There exists a residual connection that adds each submodule's input to its output. As \(T\) increases, the cost of MHSA increases quadratically following \(O(T^{2})\) while the cost of FF increases linearly. Therefore, reducing the MHSA computation is very important for efficient realization of Transformer models. 
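To make the quadratic term explicit, the following PyTorch sketch computes Eqs. (1)-(4) for a single sequence (biases omitted; this is an illustrative reimplementation, not the authors' code):

```python
import torch

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, H):
    """MHSA of Eqs. (1)-(4) without biases.

    X          : (T, d) input sequence
    Wq, Wk, Wv : (d, d) projection weights, split across H heads
    Wo         : (d, d) output projection
    """
    T, d = X.shape
    dh = d // H
    # Per-head projections, reshaped to (H, T, dh)
    Q = (X @ Wq).reshape(T, H, dh).transpose(0, 1)
    K = (X @ Wk).reshape(T, H, dh).transpose(0, 1)
    V = (X @ Wv).reshape(T, H, dh).transpose(0, 1)
    # Attention maps A are (H, T, T): the O(T^2) term dominating long inputs
    A = torch.softmax(Q @ K.transpose(1, 2) / dh ** 0.5, dim=-1)
    out = (A @ V).transpose(0, 1).reshape(T, d)  # concatenate the heads
    return out @ Wo
```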
### Attention Map Sparsification Numerous studies have proposed techniques to selectively compute elements of the attention map [23]. Patterned attention computation approaches, which select elements in a fixed manner, have been introduced [5, 3, 2]. For example, Sparse Transformer [5] exploits a strided pattern that only attends to \(\ell\) local positions and positions of stride \(\ell\), resulting in \(O(T\sqrt{T})\) complexity. Depending on the pattern, these methods can be supported on modern accelerators with custom kernel implementations. Adaptive element selection approaches, which dynamically decide which elements to compute, have also been studied [26, 12, 19]. For example, Reformer [12] exploits locality-sensitive hashing for clustering so that the attention map can be computed only within the grouped elements. However, these clustering-based methods require additional computation steps and the access positions dynamically change depending on the input. The aforementioned studies focus on how to reduce the cost of the attention dot product, while not changing the overall structure of the Transformer model. ### Removing Attention-related Blocks To build an efficient SA mechanism, many studies have focused on the structured removal of a large chunk of computation blocks. The candidates for removal (pruning) can be attention heads or attention layers. For attention head pruning, previous studies have reported that pruning out certain attention heads does not affect the final performance [15, 25, 34]. Specifically, redundant or less important attention heads in SA can be pruned without degrading the performance for speech recognition [32] and auto-regressive language modeling [21]. Similarly, layer-level pruning has also been studied [8, 20, 11] for natural language processing tasks. LayerDrop [8] randomly omits the residual connection during training to make the model more robust to layer pruning. Our goal is to provide a comprehensive understanding of the impact of the specific structured removal approach, attention map reuse. ## 3 Transformer for Speech Recognition ### Conformer Architecture Transformer-based models have been actively employed for state-of-the-art speech recognition, replacing the previous RNN-based or CNN-based models [16, 33] thanks to their ability to extract informative features from the input utterance. In particular, previous works discovered that SA automatically learns to extract useful phonological features [29, 22] during training. The input of a Transformer model is a sequence of audio features (frames) extracted by short-time Fourier transform (STFT), and the output is a sequence of transcribed words. Because the input and output domains are different, the model internally performs a transformation that turns audio features into text features while passing through a stack of Transformer layers. Following the previous work [22], we employ Conformer [9], a variant of Transformer widely used for speech recognition. Conformer consists of 4 submodules as illustrated in Figure 3. The main architectural difference between Conformer and Transformer is that Conformer includes two more submodules: an additional FF submodule at the front and an intermediate convolutional (Conv) submodule. Considering that speech is a continuous signal and nearby frames are highly dependent on each other, the convolutional module is beneficial for enhancing the local relationship between frames that might not be emphasized in SA. 
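A schematic sketch of this layer structure follows (submodule internals are elided; the half-step residual scaling of the two FF submodules follows the Conformer design [9], and all names here are illustrative):

```python
import torch.nn as nn

class ConformerLayerSketch(nn.Module):
    """Conformer layer ordering of Fig. 3: FF -> MHSA -> Conv -> FF."""

    def __init__(self, ff1, mhsa, conv, ff2, d):
        super().__init__()
        self.ff1, self.mhsa, self.conv, self.ff2 = ff1, mhsa, conv, ff2
        self.norm = nn.LayerNorm(d)

    def forward(self, x):
        x = x + 0.5 * self.ff1(x)   # first feed-forward submodule (half-step)
        x = x + self.mhsa(x)        # multi-head self-attention submodule
        x = x + self.conv(x)        # convolutional submodule for local context
        x = x + 0.5 * self.ff2(x)   # second feed-forward submodule (half-step)
        return self.norm(x)
```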
By utilizing both SA and Conv, Conformer achieved a state-of-the-art recognition performance with far fewer parameters than the Transformer-based model without the Conv submodule [9]. Figure 3: Illustration of a Conformer layer consisting of 4 submodules. ### Breakdown Analysis To understand the bottleneck of Transformer-based models, we analyze how many resources each submodule consumes. We first show the parameter size of each component in Figure 4. We consider the Conformer-M model, which consists of \(L\)=16 layers with a hidden dimension of \(d\)=256 and \(H\)=4 attention heads. We can observe that about 66% of parameters come from the FF submodule and SA takes only 21% of parameters. The reason for this imbalance is that each FF includes \(8d^{2}\) parameters (therefore, \(16d^{2}\) parameters for two FFs) while SA has \(5d^{2}\) parameters. In other words, we need to reduce the FF parameters if the target system is not equipped with enough memory. However, the slowest submodule is not FF but SA. Figure 5 presents the inference speed breakdown for different input sequence lengths (we will discuss ReuseSA in the next section). Note that the \(x\)-axis of the figure indicates the number of feature frames extracted with a 40ms stride. The lengths 256, 512, and 1024 can be reinterpreted as about 10-, 20-, and 40-second input audio lengths, respectively. The results are evaluated on a single RTX-Titan GPU, but the tendency should be similar for CPU platforms. As input length increases, SA requires a quadratic computation cost while FF and Conv only need linearly increasing costs. Therefore, SA takes 45% of total inference time for a 10-second input but 67% for a 30-second input. This is highly problematic because many speech-related tasks, such as conference transcription, listening comprehension, and speech-to-speech translation, often process utterances much longer than 30 seconds. Figure 4: The ratio of the parameter size occupied by each Conformer submodule. The values are based on the Conformer-M model. Figure 5: Inference time (µs) of each Conformer submodule. The order of vertical bars is FF, Conv, SA, and ReuseSA. Attention map reuse replaces the SA module with the ReuseSA module and significantly reduces the inference time. The \(x\)-axis is the number of frames in the input sequence. ## 4 Evaluation of Attention Map Reuse ### Attention Map Reuse From the observation that the behavior of SA is very similar between neighboring layers, attention map reuse only computes a single attention map through \(M\) consecutive SA layers. Figure 6 illustrates the attention reuse procedure. For example, if the attention map from the \(1^{\text{st}}\) layer, \(A^{1}\), is shared through layers \(2\sim M\), the SA output of those layers can be easily computed as below: \[\text{SA}_{h}^{i}(X^{i})=A_{h}^{1}V_{h}^{i},\quad i\in\{2,3,...,M\} \tag{8}\] where \(i\) is the layer index and \(V_{h}^{i}\) is the value computed from each layer. In other words, attention map reuse groups a set of layers that includes one original SA layer at the front and the following \(M-1\) reuse SA layers. The remaining parts of the model, such as FF and Conv submodules, are unchanged. By omitting the attention map computation, the effective number of SA calculations can be decreased by \(M\) times. Instead of optimizing each SA mechanism, attention map reuse exploits the characteristic of SA and proposes a new axis for Transformer compression. 
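In pseudocode terms, one reuse group of \(M\) layers could be organized as below (a sketch only; the layer methods `project_value`, `project_out`, and `feed_forward` are hypothetical placeholders for the corresponding submodules):

```python
def reuse_group_forward(X, layers, compute_attention_map):
    """Run one attention-map-reuse group of M layers (Eq. 8).

    layers : the M layers of one group; only the first computes its map
    compute_attention_map : callable returning A with shape (H, T, T)
    """
    A = None
    for i, layer in enumerate(layers):
        if i == 0:
            A = compute_attention_map(layer, X)  # computed once per group
        V = layer.project_value(X)               # value is recomputed per layer
        X = X + layer.project_out(A @ V)         # SA output with the reused map
        X = layer.feed_forward(X)                # remaining submodules unchanged
    return X
```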
Figure 6: Illustration of each attention head for attention map reuse, in the case of the \(4\times 4\) reuse configuration. The computed attention map \(A^{\ell}\) from the \(\ell\)-th layer is reused for the next three layers. For simplicity, other submodules are omitted. After applying attention map reuse, the query and key are not needed for the following SA layers and their associated parameters can be removed. To compensate for the reduced parameter size, previous work suggested increasing the hidden dimension of the value [22]. We follow the same strategy, which increases the size of \(W_{V_{h}}\) from \(\mathbb{R}^{d\times d_{h}}\) to \(\mathbb{R}^{d\times 2d_{h}}\), and the size of \(W_{O}\) from \(\mathbb{R}^{d\times d}\) to \(\mathbb{R}^{2d\times d}\) (see Figure 6). The number of attention heads is preserved. Note that attention map reuse is not a fine-tuning approach that starts from the fully converged model; we train the modified model from scratch. ### Reuse Analysis We analyze the effect of attention map reuse using different configurations. Tables 1 and 2 show the inference time of different reuse configurations. The configuration \(A\times B\) indicates that \(A\) layers are grouped to share one attention map and a total of \(B\) groups of layers exist. The baseline is a 16-layer Conformer-M model [9], which can be represented as the configuration of \(1\times 16\). The results are measured on an RTX Titan GPU and an Intel Xeon Gold 6130 CPU. We evaluate three different attention map reuse configurations, \(2\times 8\), \(4\times 4\), and \(8\times 2\), where the number of attention map computations is 8, 4, and 2, respectively. Although we can combine different numbers of layers to form heterogeneous groups, we find that the number of unique attention map computations determines the total latency of the model. Note that there is a trade-off between the performance and the inference speed if the number of unique attention map computations is too small (see Section 4.3). \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline Config. & 128 & 256 & 384 & 512 & 640 & 768 & 896 & 1024 \\ \hline \(1\times 16\) & 3.32 & 4.16 & 7.66 & 11.77 & 17.11 & 23.10 & 30.27 & 37.82 \\ \(2\times 8\) & 2.40 & 3.38 & 5.89 & 8.61 & 12.11 & 15.87 & 20.27 & 24.51 \\ \(4\times 4\) & 1.99 & 2.91 & 4.92 & 6.93 & 9.53 & 12.18 & 15.28 & 17.98 \\ \(8\times 2\) & 1.25 & 2.56 & 4.11 & 5.75 & 7.72 & 10.03 & 12.66 & 14.63 \\ \hline \hline \end{tabular} \end{table} Table 1: GPU Inference time (ms) of a speech recognition model with different attention map reuse configurations. The numbers from 128 to 1024 indicate the frames in the input utterance. Configuration \(1\times 16\) represents the baseline model. Each frame corresponds to a 40ms stride. \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline Config. & 128 & 256 & 384 & 512 & 640 & 768 & 896 & 1024 \\ \hline \(1\times 16\) & 62.97 & 82.93 & 114.44 & 164.48 & 226.88 & 364.05 & 398.36 & 1033.59 \\ \(2\times 8\) & 54.08 & 70.84 & 87.05 & 124.01 & 152.35 & 255.58 & 315.91 & 630.30 \\ \(4\times 4\) & 50.24 & 63.72 & 77.21 & 100.87 & 135.63 & 182.08 & 231.14 & 398.57 \\ \(8\times 2\) & 42.66 & 55.20 & 67.39 & 96.28 & 116.06 & 136.54 & 191.17 & 281.11 \\ \hline \hline \end{tabular} \end{table} Table 2: CPU Inference time (ms) of a speech recognition model with different attention map reuse configurations. We observe a clear improvement in the inference speed for both GPU and CPU as more layers are grouped. 
For the \(4\times 4\) configuration, 10-second (length 256), 20-second (length 512), and 40-second (length 1024) utterances save about 38%, 41%, and 52% of the GPU inference time, respectively. The same configuration saves about 33%, 39%, and 61% of the CPU inference time. The CPU inference time gain is not as good as that of the GPU when the length is short but provides a higher benefit when the length is long. Figure 5 also demonstrates the effect of reuse in the case of \(4\times 4\). Comparing the original SA and the reuse SA, the reuse SA takes about 35% to 45% of the latency of the original SA. ### Discussion #### 4.3.1 Attention Map Reuse in Natural Language Processing Attention map reuse was applied for neural machine translation [28] and BERT-based language modeling [30]. For machine translation, the average speedup was about 1.3 times without considerable performance degradation [28]. The inference speedup is less than in speech recognition because machine translation usually considers a relatively shorter sequence length \(T\) (shorter than 50) and a larger hidden dimension \(d\). If \(d\) is much larger than \(T\), the \(O(T^{2})\) SA computation does not dominate the total inference cost, so the advantage from attention map reuse is limited. For the language model inference, the speedup was about 1.3 times with a marginal performance gain [30]. However, the work mainly focused on the \(2\times 6\) configuration because the more aggressive reuse configuration did not achieve satisfactory performance for GLUE downstream tasks. We conclude that the efficiency of attention map reuse can be maximized when 1) the expected input sequence length is longer than the hidden dimension, and 2) the target task is not very sensitive to a small attention map variation. Speech recognition fits these conditions well because it handles very long sequences and the frame features change slowly along the time axis. #### 4.3.2 Effect on Performance In Table 3, we show the word error rate (WER) of attention map reuse on the LibriSpeech [17] speech recognition dataset, which includes four evaluation data subsets4. We borrow the result from our original paper [22] on speech recognition to briefly show that attention map reuse can be employed without affecting the performance. \begin{table} \begin{tabular}{c|c|c c c c} \hline Config. & \#Param (M) & _dev-clean_ & _dev-other_ & _test-clean_ & _test-other_ \\ \hline \(1\times 16\) & 25.45 & 3.1 & 8.3 & **3.2** & 8.4 \\ \(2\times 8\) & 24.92 & **3.0** & **8.2** & 3.3 & **8.2** \\ \(4\times 4\) & 24.66 & **3.0** & **8.2** & 3.3 & **8.2** \\ \(8\times 2\) & 24.52 & 3.3 & 8.8 & 3.6 & 8.7 \\ \hline \end{tabular} \end{table} Table 3: Word error rate (%) for different attention map reuse configurations. In short, recognition performance is almost preserved for \(2\times 8\) and \(4\times 4\) configurations but not for the \(8\times 2\) case. The authors suggested that the capacity may become insufficient to internalize the necessary information for every layer in a group when too many layers share the same attention map. Note that the number of parameters is almost the same for all configurations because we doubled the dimension of the value (see Section 4.1). #### 4.3.3 Attention Computation Reduction in Speech Recognition Many works have proposed techniques to reduce the computational cost of SA, especially for speech recognition. Local windowing is a common approach that only exploits a limited range of frames for attention map computation. 
For example, each frame may only consider neighboring frames (e.g., only accessing the past 64 and future 64 frames) as candidates for the attention mechanism; this approach decreases the complexity of SA from \(O(T^{2})\) to \(O(TR)\), where \(R\) is the number of accessible frames. However, restricting the range often lowers the recognition performance compared to full sequence-based models [31, 7]. On the other hand, several studies designed more efficient Transformer models for speech recognition [4, 14]. These approaches, including a faster query-key dot product [14] and time-strided SA [4], are orthogonal to attention map reuse and can be used together. ## 5 Conclusion In this paper, we analyzed a recently proposed efficient SA compression method, named attention map reuse, for Transformer-based speech recognition. We first perform a detailed analysis of the inference bottleneck of the Conformer model used for speech processing, evaluated on a wide range of input sequence lengths. Our analysis provides a thorough understanding of the burden of SA when using Transformer in practice. Then, we demonstrate the computational savings from attention map reuse on GPU and CPU platforms. We claim that attention map reuse is a very promising method for utilizing Transformer-based models on modern hardware systems.
2305.01871
Convolutional neural network-based single-shot speckle tracking for x-ray phase-contrast imaging
X-ray phase-contrast imaging offers enhanced sensitivity for weakly-attenuating materials, such as breast and brain tissue, but has yet to be widely implemented clinically due to high coherence requirements and expensive x-ray optics. Speckle-based phase contrast imaging has been proposed as an affordable and simple alternative; however, obtaining high-quality phase-contrast images requires accurate tracking of sample-induced speckle pattern modulations. This study introduced a convolutional neural network to accurately retrieve sub-pixel displacement fields from pairs of reference (i.e., without sample) and sample images for speckle tracking. Speckle patterns were generated utilizing an in-house wave-optical simulation tool. These images were then randomly deformed and attenuated to generate training and testing datasets. The performance of the model was evaluated and compared against conventional speckle tracking algorithms: zero-normalized cross-correlation and unified modulated pattern analysis. We demonstrate improved accuracy (1.7 times better than conventional speckle tracking), bias (2.6 times), and spatial resolution (2.3 times), as well as noise robustness, window size independence, and computational efficiency. In addition, the model was validated with a simulated geometric phantom. Thus, in this study, we propose a novel convolutional-neural-network-based speckle-tracking method with enhanced performance and robustness that offers improved alternative tracking while further expanding the potential applications of speckle-based phase contrast imaging.
Serena Qinyun Z. Shi, Nadav Shapira, Peter B. Noël, Sebastian Meyer
2023-05-03T03:09:06Z
http://arxiv.org/abs/2305.01871v1
# Convolutional neural network-based single-shot speckle tracking for x-ray phase-contrast imaging ###### Abstract X-ray phase-contrast imaging offers enhanced sensitivity for weakly-attenuating materials, such as breast and brain tissue, but has yet to be widely implemented clinically due to high coherence requirements and expensive x-ray optics. Speckle-based phase contrast imaging has been proposed as an affordable and simple alternative; however, obtaining high-quality phase-contrast images requires accurate tracking of sample-induced speckle pattern modulations. This study introduced a convolutional neural network to accurately retrieve sub-pixel displacement fields from pairs of reference (i.e., without sample) and sample images for speckle tracking. Speckle patterns were generated utilizing an in-house wave-optical simulation tool. These images were then randomly deformed and attenuated to generate training and testing datasets. The performance of the model was evaluated and compared against conventional speckle tracking algorithms: zero-normalized cross-correlation and unified modulated pattern analysis. We demonstrate improved accuracy (1.7 times better than conventional speckle tracking), bias (2.6 times), and spatial resolution (2.3 times), as well as noise robustness, window size independence, and computational efficiency. In addition, the model was validated with a simulated geometric phantom. Thus, in this study, we propose a novel convolutional-neural-network-based speckle-tracking method with enhanced performance and robustness that offers improved alternative tracking while further expanding the potential applications of speckle-based phase contrast imaging. Keywords: machine learning, x-ray. ## I Introduction X-ray phase-contrast imaging (PCI) has proven to be a powerful technique for non-destructive material testing and biomedical imaging [1, 2]. While conventional x-ray imaging relies on absorption in high-density materials for signal generation, PCI measures changes in the wavefront (phase shift) when x-rays pass through an object. For typical x-ray energies and materials of low atomic numbers - such as human tissue - the phase shift generated is several orders of magnitude larger than the absorption. Therefore, for weakly-attenuating materials, PCI provides enhanced sensitivity (i.e., visualization of soft-tissue contrast) that is inaccessible in conventional x-ray imaging. The clinical potential of PCI has been demonstrated for a wide range of pathologies and anatomical sites, such as the musculoskeletal system [3], central nervous system [4], breast [5], and vasculature [6]. Although various solutions for sensing x-ray phase information have been developed in the last decades [7], their widespread clinical application is still limited to prototypes [8, 9, 10]. Typical PCI systems utilize 1D or 2D gratings (grating interferometry [11, 12]) and grids to produce a periodic reference interference pattern in the detector plane. However, the translation of these PCI systems from research laboratories to clinical centers faces major obstacles because of high coherence requirements, complex optical systems for translation of phase shifts into measurable intensity variations, and phase-wrapping effects from periodic reference patterns [13]. Speckle-based x-ray phase contrast imaging (XPCI) [13, 14, 15, 16] is a recently proposed method for phase-contrast and dark-field imaging that utilizes x-ray near-field speckles generated from a random diffuser. 
The principle of XPCI is schematically shown in Fig. 1. Coherent x-rays impinge on the diffuser, randomly scatter, and mutually interfere with the incident beam to create a random intensity pattern, named the reference image. Sample-induced phase shifts cause refraction that translates into a transverse displacement of the original speckle pattern to generate the sample image. The sample image can then be compared to the reference to calculate the corresponding phase contrast signal of the object [13]. XPCI overcomes several of the limitations of typical PCI systems as it offers a simple setup with excellent dose efficiency, only has moderate coherence requirements, does not require precise system alignment, and negates the propagation distance restrictions imposed by fractional Talbot distances [15, 17]. This is crucial for preclinical PCI systems, potentially used for small animal imaging, due to less stringent requirements for small detector pixels and long propagation distances compared to laboratory setups. The key to obtaining high-quality phase contrast images from XPCI systems is accurate tracking of the sample-induced speckle pattern modulations. Out of all speckle tracking modes [13], single-shot speckle tracking (XST), i.e., using only one reference and sample image pair, is desirable for a preclinical translation since it allows a fast and dose-efficient acquisition with a stationary diffuser. Several algorithms have been successfully developed for XST. Zero-normalized cross-correlation (ZNCC) [14] and unified modulated pattern analysis (UMPA) [16, 18, 19, 20] are direct tracking algorithms based on windowed image correlation. Although both algorithms produced impressive results, the trade-off between spatial resolution and angular sensitivity requires careful selection of the window size [19]. One possible solution to overcome this limitation is the optical flow method (OF) [21]. By tracking speckles implicitly, i.e., without the use of a correlation window, the measurement of displacement fields can be considered an optical flow problem through geometrical-flow conservation. A study by Rouge-Labriet _et al._[22] established that of the three speckle tracking techniques, the OF method provided the best qualitative image quality and the lowest naturalness image quality evaluator score with a reduced number of sample exposures for low dose PCI with both theoretical and biomedical sample models. However, this method depends on the assumption that the sample is transparent to x-rays and utilizes a high-pass filter, which can result in image artifacts and affect the quantitative accuracy. Convolutional neural networks (CNNs) have been successfully implemented for various problems in computer vision [23], focusing on classification [24], segmentation [25], and registration [26]. More recently, CNNs have been extended to the general optical flow problem, defined as the pattern of apparent motion of objects between two frames, using deep learning architectures [27, 28]. FlowNet [29] and its variants [27] use the multiscale loss function for optimization and are U-shaped with contracting and expanding paths. FlowNet2 [28], a fusion network generated by stacking different FlowNet variants, achieved superior performance compared to traditional optical flow algorithms. This architecture has been successfully utilized for displacement estimation in ultrasound elastography [27] and various applications in civil engineering [30]. 
However, compared to these applications, the sample-induced displacement for x-ray speckle tracking is typically much smaller, at subpixel levels. The StrainNet architecture, designed by Boukhtache _et al._ [23], can retrieve dense displacement and strain fields from optical images of an object exposed to mechanical compression. StrainNet has demonstrated retrieval performance and computing times comparable to traditional algorithms. With its ability to perform subpixel displacement retrievals, StrainNet could be a promising solution for XST. In this paper, we present the CNN-based Analysis for Displacement Estimation (CADE), an extension of the StrainNet CNN algorithm to track x-ray speckles in XPCI. Intrinsic performance characteristics for CADE were quantitatively investigated using standard criteria for digital image correlation and compared to established x-ray speckle tracking algorithms. In addition, the performance of CADE for speckle-based PCI was evaluated using numerical wave-optics simulations. ## II Methods ### _Wave-optics simulation_ Numerical wave-optics simulations were performed using a previously-developed in-house Python simulation framework [31]. The simulation process relied on an iterative use of the angular spectrum method to propagate the wave-field from the source through the speckle-based imaging setup. The disturbance of the wave-field due to the presence of an object (i.e., attenuation and phase-shift) was then calculated in projection approximation. All simulations were conducted with the following configuration. A monochromatic 30 keV point source with a 10 \(\upmu\)m focal spot was simulated. The diffuser was located 1 m away from the source and was modeled as 10 layers of sandpaper sheets, each consisting of a rough aluminum oxide (\(Al_{2}O_{3}\)) surface with a 200 \(\upmu\)m backing of diethyl pyrocarbonate (\(C_{6}H_{10}O_{5}\)) [31]. The detector was located 3 m away from the source and had an effective pixel size of 12 \(\upmu\)m, utilizing a point spread function of 1/2.355 pixels. As the diffuser is simulated with different surface structures, the resulting speckle sizes, represented by the full width half maximum (FWHM) of the speckle pattern autocorrelation function, ranged from 22 \(\upmu\)m to 110 \(\upmu\)m, or approximately 2 to 10 pixels at the detector level. This range offered a minimum speckle visibility of 20% and a good representation of expected speckle sizes.

Figure 1: Principle of speckle-based phase-contrast imaging. **(A)** A random intensity pattern (red solid line) is shown in comparison to the displaced intensity pattern in the sample image (gray dashed line). **(B)** and **(C)** show a reference and sample image, respectively, and demonstrate a sub-pixel displacement of the speckle marked by the red arrow.

### _Data augmentation_ Data augmentation for supervised training and network architecture details are shown in Fig. 2. As described in Section II-A, wave-optics simulations were utilized to obtain 364 independent 256 x 256 pixel reference speckle images. Random piece-wise smooth deformations were applied to each reference image using one of six deformation patch sizes (4, 8, 16, 32, 64, or 128 pixels) with displacements ranging from -1 to +1 pixel. The deformation patch size indicates the distance of linear interpolation of displacements, i.e., a patch size of 8 x 8 pixels corresponded to independent 8 x 8 patches of smooth displacements in only one direction (either all positive or all negative).
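A minimal sketch of this deformation step is given below, assuming a coarse grid of random values upsampled by linear interpolation; the per-patch single-direction constraint described above is omitted for brevity, and all function and variable names are illustrative rather than taken from the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def random_piecewise_displacement(shape, patch, rng):
    """Piecewise-smooth displacement field in [-1, +1] px: random values
    on a coarse (shape/patch) grid, linearly interpolated (order=1) up
    to full resolution."""
    coarse = rng.uniform(-1.0, 1.0, (shape[0] // patch, shape[1] // patch))
    return zoom(coarse, patch, order=1)

def warp(reference, ux, uy):
    """Resample the reference image at the displaced coordinates to
    produce a deformed 'sample' image."""
    yy, xx = np.mgrid[0:reference.shape[0], 0:reference.shape[1]]
    coords = np.array([yy + uy, xx + ux])
    return map_coordinates(reference, coords, order=1, mode="reflect")

rng = np.random.default_rng(0)
ref = rng.random((256, 256))                       # placeholder speckle image
ux = random_piecewise_displacement(ref.shape, 8, rng)
uy = random_piecewise_displacement(ref.shape, 8, rng)
sample = warp(ref, ux, uy)                         # deformed sample image
```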
The random deformations were applied to each reference image to generate the corresponding sample image. Sixty and ten independent deformations were used for each reference image to generate the network training and testing datasets, respectively. The generated displacement values follow a normal distribution centered around 0 and ranging from -1 to +1 pixels. We randomly selected 0.5% of all image pairs to utilize each of the following deformation maps: (1) identity maps (i.e., all displacements equal zero); (2) constant displacement (i.e., the same displacement value across the entire map) in the x direction and an identity map in the y direction; (3) vice versa of (2); and, finally, (4) constant displacements in both x and y directions. This resulted in 98% of the data having random deformations, while the remaining 2% underwent identity or constant displacement maps. The constant displacements within \(\pm\)0.15 pixels were included to improve CADE's performance at extremely small displacements. Additional processing of image pairs included noise and attenuation. First, individual Poisson noise maps were generated and applied to the reference and sample images. All sample images were then randomly attenuated to mimic 50 - 100% transmission in the same manner as the deformation patches. This resulted in a training and testing dataset of 21841 and 3640 image sets, respectively. Each set comprises a reference image, sample image, and ground truth displacement field. ### _CNN architecture and training_ #### II-C1 Network architecture The StrainNet-f architecture [23] adapted for XPCI (Fig. 2) is an end-to-end full-resolution network consisting of two main components. The first component extracted feature maps with successive convolutional layers. The 10 convolutional layers include 7 x 7 filters for the first, 5 x 5 filters for the second and third, and 3 x 3 filters for the remaining seven. The latter portion predicts displacement fields via five convolutional layers with 3 x 3 filters and eight transposed convolutional layers. The architecture simplified FlowNetS with four down-samplings and four up-samplings. The same loss function and levels as in FlowNetS were used.

Figure 2: Schematic view of data augmentation and network architecture. Percentages show the proportion of data that underwent the respective processing. Examples of reference and displacement images are shown on the left (gray arrows). The feature extraction level and displacement field prediction level were both performed four times. ReLU stands for rectified linear unit. Down-sampling and up-sampling were performed at a stride of 2.

#### II-C2 CNN training The hyperparameters of the network were initially set to the original StrainNet-f configuration [23] and further fine-tuned via grid search to maintain equal or improved model convergence and faster training times. The final values of each hyperparameter are reported in Table 1. Model convergence was evaluated with the training and testing endpoint error (EPE), which is the Euclidean distance between the predicted and ground truth displacement vectors normalized over all pixels: \[L_{epe}=\frac{1}{N}\sum_{x,y}\left\|\mathbf{u}_{GT}(x,y)-\mathbf{u}(x,y)\right\|_{2}, \tag{1}\] where \(N\) denotes the total number of pixels in the image, \(\mathbf{u}_{GT}(x,y)\) is the ground truth displacement of each pixel, and \(\mathbf{u}(x,y)\) is the estimated displacement of pixel \(x,y\) [23].
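For reference, Equation (1) can be written directly in a few lines of NumPy; the array names and shapes below are illustrative.

```python
import numpy as np

def endpoint_error(u_pred, u_gt):
    """Mean Euclidean distance between predicted and ground-truth
    displacement vectors; u_* have shape (2, H, W) for (ux, uy)."""
    return np.mean(np.sqrt(np.sum((u_gt - u_pred) ** 2, axis=0)))

u_gt = np.zeros((2, 256, 256))
u_pred = np.full((2, 256, 256), 0.1)
print(endpoint_error(u_pred, u_gt))   # ~0.141 = sqrt(0.1^2 + 0.1^2)
```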
The training was performed with four cores of an NVIDIA Tesla T4 16GB GPU at a runtime of 58 hours. An EPE of 0.050 for training and 0.113 for testing was achieved after 350 epochs. ### _State-of-the-art speckle tracking algorithms_ To reconstruct the (differential) phase-contrast image, the displacement vector field \(\mathbf{u}(x,y)\) between the sample (\(\mathbf{I_{s}}\)) and reference (\(\mathbf{I_{r}}\)) image must be determined by locally tracking the speckle pattern modulation. Two conventional speckle tracking algorithms were examined: zero-normalized cross-correlation (ZNCC) [32] and unified modulated pattern analysis (UMPA) [18, 19, 33]. #### II-D1 ZNCC Small patches of size (\(2M+1\)) \(\times\) (\(2M+1\)) from \(\mathbf{I_{r}}\) were compared against a template region in \(\mathbf{I_{s}}\). The relative transverse displacement for the central pixel (\(x_{0},y_{0}\)) of the template was determined as the shift with the highest correlation coefficient: \[\mathbf{u}(x_{0},y_{0})=\operatorname*{arg\,max}_{u_{x},u_{y}}\left\{\frac{\sum_{i,j}\left[\mathbf{I^{\prime}_{s}}(x_{i},y_{j})\,\mathbf{I^{\prime}_{r}}(x_{i}+u_{x},y_{j}+u_{y})\right]}{\sqrt{\sum_{i,j}\mathbf{I^{\prime}_{s}}(x_{i},y_{j})^{2}\sum_{i,j}\mathbf{I^{\prime}_{r}}(x_{i}+u_{x},y_{j}+u_{y})^{2}}}\right\}, \tag{2}\] where \(\mathbf{I^{\prime}_{r}}\) and \(\mathbf{I^{\prime}_{s}}\) are the normalized reference and sample images obtained by subtracting the mean value of the patch. The summation was performed over all pixels in the corresponding patch. Sub-pixel precision was obtained by Gaussian fitting to the peak in the correlation map. #### II-D2 UMPA A physical model is used to describe the influence of the sample on the speckle pattern in terms of transmission \(T\) and transverse speckle displacements (\(u_{x},u_{y}\)). All signals are extracted with a windowed least-squares minimization between the model and the measured sample image \(\mathbf{I_{s}}\): \[\mathbf{u}(x_{0},y_{0})=\operatorname*{arg\,min}_{u_{x},u_{y}}\sum_{i,j}w\left(x_{i},y_{j}\right)\times\left\{I_{s}\left(x_{i},y_{j}\right)-T\left(x_{i},y_{j}\right)I_{r}\left(x_{i}+u_{x},y_{j}+u_{y}\right)\right\}^{2}, \tag{3}\] where \(w\) is a windowing function of (\(2M+1\)) \(\times\) (\(2M+1\)) pixels centered on (\(x_{0},y_{0}\)). Sub-pixel precision was achieved through a paraboloid fit of the neighborhood of the minimum. ### _Performance validation and comparison_ Ten independent reference images were generated with the speckle parameters described in Section II-B and used for the validation and comparison evaluations. Different types of displacement maps were applied for each evaluation. Poisson noise and a transmission of 90% were applied to all image pairs unless otherwise stated. All algorithms were run on a MacBook Pro with an Apple M1 chip. #### II-E1 Speckle tracking accuracy Before adding noise and attenuation, each reference image was transformed with a constant displacement map ranging from 0 to 1 pixel at steps of 0.1 pixels.
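For concreteness, the windowed search of Equation (2) can be sketched at integer-pixel precision as follows; the Gaussian fit used for sub-pixel refinement is omitted, and the function name, search range, and test images are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def zncc_displacement(ref, sam, x0, y0, M=10, search=3):
    """Integer-pixel ZNCC tracking for the template centred on (x0, y0):
    a (2M+1)x(2M+1) sample patch is compared against shifted reference
    patches, and the shift with the highest correlation is returned."""
    tpl = sam[y0 - M:y0 + M + 1, x0 - M:x0 + M + 1]
    tpl = tpl - tpl.mean()
    best, best_u = -np.inf, (0, 0)
    for uy in range(-search, search + 1):
        for ux in range(-search, search + 1):
            win = ref[y0 + uy - M:y0 + uy + M + 1,
                      x0 + ux - M:x0 + ux + M + 1]
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            c = (tpl * win).sum() / denom if denom > 0 else -np.inf
            if c > best:
                best, best_u = c, (ux, uy)
    return best_u

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
sam = np.roll(ref, -1, axis=1)                 # sample = reference shifted in x
print(zncc_displacement(ref, sam, x0=32, y0=32))   # -> (1, 0)
```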
Bias and root mean squared error (RMSE) of the retrieved displacement fields were calculated as [34, 35] \[Bias=\frac{1}{N}\sum_{x,y=1}^{N}\left(\mathbf{u}(x,y)-\mathbf{u}_{GT}(x,y)\right), \tag{4}\] \[RMSE=\sqrt{\frac{\sum_{x,y=1}^{N}\left(\mathbf{u}(x,y)-\mathbf{u}_{GT}(x,y)\right)^{2}}{N}}. \tag{5}\] \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|l|}{**Training Hyperparameters**} \\ \hline **Bias Decay** & 0 & **Epoch Size** & 0 & **Weight Decay** & 0.0004 \\ \hline **Solver Algorithm** & Adam & **Batch Size** & 16 & **Algorithm** & StrainNet\_f \\ \hline **Div Flow** & 2 & **Learning Rate** & 0.001 & **Multiscale Weights** & [0.005, 0.01, 0.02, 0.08, 0.32] \\ \hline **Epochs** & 350 & **Momentum** & 0.9 & **Data Loading Workers** & 8 \\ \hline **Starting Epoch** & 0 & **Beta** & 0.999 & **Milestones** & [40, 80, 120, 160, 200, 240] \\ \hline \end{tabular} \end{table} Table 1: **Network hyperparameters used for training. The highlighted parameters deviated from the initial configuration. Beta corresponds to the beta parameter for the Adam solver algorithm. Div Flow represents the value by which the flow will be divided every 40 epochs to decrease the runtime.** #### II-E2 Spatial resolution The spatial resolution of each algorithm was evaluated using a star displacement map, which consisted of a unidirectional sinusoidal displacement with linearly increasing frequency toward the left. A key characteristic of the pattern is the constant amplitude of 0.5 pixel across the horizontal symmetry axis in the center of the image. The limiting spatial resolution for each algorithm was defined by the frequency at a bias of 10% [23]. #### II-E3 Noise, window size dependency, and computational time Reference images were deformed by a gradient map ranging from 0 to +1 pixel displacement. The effect of noise was then evaluated by applying seven different noise levels ranging from a loss of 0-6% in signal-to-noise ratio (SNR) in the same manner as in Section II-B. RMSE and spatial resolution were calculated for UMPA and ZNCC with window sizes between 10 and 50 pixels and compared to CADE to examine window size dependency. Finally, computational times were evaluated using 10 image pairs of 256 x 256, 512 x 512, and 768 x 768 pixels. #### II-E4 Method validation The wave-optics simulation was used to validate our speckle tracking method on imaging data obtained from an XPCI acquisition with the setup described in Section II-A. The simulated polymethyl methacrylate object consisted of a 1500 \(\upmu\)m wide rectangular base with a thickness profile modulated in the x direction by a sine wave. Hence, the thickness \(t(x,y)\) of the sample is represented by \[t(x,y)=b+A\cdot\sin(wx), \tag{6}\] where \(b=1000\) \(\upmu\)m is the base thickness of the sample and \(A=800\) \(\upmu\)m and \(w=1.33\times 10^{-3}\) \(\upmu\)m\({}^{-1}\) are the amplitude and frequency of the sinusoidal thickness modulation, respectively. The gradient of the phase shift \(\Phi\) introduced by the sample is proportional to the refraction angle \(\alpha=(\alpha_{x},\alpha_{y})\), where \(x\) and \(y\) are the transverse coordinates orthogonal to the optical axis. This can in turn be geometrically related to the speckle displacement vector \(\mathbf{u}\left(x,y\right)=\left(u_{x}\left(x,y\right),u_{y}\left(x,y\right)\right)\) (see Fig. 1) in the small-angle approximation:
\[\left(\frac{\partial\Phi}{\partial x},\frac{\partial\Phi}{\partial y}\right)=\frac{2\uppi}{\lambda}\big(\alpha_{x},\alpha_{y}\big)=\frac{2\uppi}{\lambda}\big(u_{x},u_{y}\big)\frac{p}{d}, \tag{7}\] where \(\lambda\) is the x-ray wavelength, \(p\) is the detector pixel pitch, and \(d\) is the sample-detector distance [36]. The phase shift can also be calculated from the sample properties as \[\Phi(x,y)=-\frac{2\pi\delta}{\lambda}t(x,y), \tag{8}\] using the refractive index decrement \(\delta\) of the sample's material. Hence, the ground truth refraction angle \(\alpha_{x}(x,y)\) for this sample is given by \[\alpha_{x}(x,y)=\frac{\lambda}{2\pi}\frac{\partial\Phi(x,y)}{\partial x}=-\delta\frac{\partial t(x,y)}{\partial x}=-\delta Aw\cdot\cos(wx). \tag{9}\] ## III Results Computational performance was evaluated as a function of image size. CADE resulted in shorter computational times, and its advantage increased substantially with larger image sizes. At an image size of around \(6\times 10^{5}\) pixels, CADE had an average runtime of 18 s, whereas the runtimes for ZNCC and UMPA were three (53 s) and ten times (181 s) longer, respectively. Displacement maps and the reconstructed refraction angle for the sine wave sample are shown in Fig. 6. Qualitatively, CADE-generated displacement maps appeared less noisy and in better agreement with the ground truth than those of the conventional algorithms. In Fig. 6B, CADE achieved improved accuracy compared to ZNCC and UMPA, particularly at the peaks located at \(\pm 2\) \(\upmu\)rad. The displacement map RMSE was 4.10, 4.37, and \(4.30\times 10^{-2}\) pixels for CADE, UMPA, and ZNCC, respectively.

Figure 4: Evaluation of the speckle tracking spatial resolution with star pattern displacement maps. **(A)** High frequency area of the star pattern and corresponding displacement maps obtained from the speckle tracking algorithms. Scale bar represents 0.1 mm. **(B)** Spatial resolution of CADE and conventional algorithms. True (thin dotted line) and moving-averaged (thick solid line) displacement bias of each method is shown versus the spatial wavelength of the star pattern. Spatial resolution is defined by 10% bias (black dotted line).

Figure 5: Comparison of displacement RMSE **(A)** and spatial resolution **(B)** of conventional algorithms for increasing correlation window size. The CADE results do not depend on window size. Mean (marker) and standard deviation (error bar) are obtained from evaluating ten image pairs for each window size. CADE achieved improved RMSE for all window sizes and improved spatial resolution for window sizes greater than 15 pixels.

## IV Discussion XPCI presents a cost-effective method with moderate coherence requirements for enhanced sensitivity compared to conventional x-ray systems. However, accurately tracking sample-induced speckle pattern modulations is crucial for obtaining high-quality phase contrast images. Although several methods have been proposed for this task, they often necessitate a trade-off between tracking accuracy and spatial resolution or rely on several assumptions. To overcome these limitations, we present CADE, a novel windowless CNN-based speckle tracking algorithm, and compare and validate its performance against conventional algorithms. The key findings from this study are: (1) successful application of CADE for speckle tracking, (2) improved tracking performance compared to conventional algorithms, and (3) greatly reduced computational time.
Most importantly, CADE achieved superior performance particularly at high refraction angles when validated on a simulated object. Compared to CADE, current state-of-the-art algorithms like UMPA and ZNCC are window-based algorithms. Unlike these extrinsic approaches, or iterative pixel-wise algorithms, intrinsic speckle tracking algorithms rely on solving a partial differential equation formulated at the whole-image level rather than explicitly tracking individual speckles. Recent examples include the geometric-flow approach [21] and multimodal intrinsic speckle-tracking [37], which combines the geometric-flow formalism with a Fokker-Planck-type generalization. A recent publication by De Marco _et al._ [33] presents an enhanced implementation of UMPA, characterized by greatly improved computational efficiency, the capability of multithreading, and a reduction of estimation bias. As we implemented an older version of UMPA, the performance of this updated UMPA algorithm is unknown, but we believe that the general trend is similar to what we have presented in this paper. Finally, a machine learning method for speckle tracking with model validation on experimental data has recently been proposed [38]. While both algorithms utilized simulated random displacement maps for training and offered improved runtime and image quality, there were several distinct differences: (a) speckle patterns were generated with a coded binary mask versus our random sandpaper model; (b) they used a basic plane-wave model while we utilized a divergent-beam geometry for image propagation; (c) noise was calculated with either a random binary noise image or as Gaussian noise whereas we used Poisson noise; and, (d) their model was based on the SPINNet architecture while ours is adapted from the StrainNet-f architecture. We believe that our approach is a more realistic solution for speckle tracking as it utilizes more representative training and testing data from the sandpaper model and wave-optics simulation. On the other hand, further evaluations are needed to assess and compare the performances. Although CADE demonstrates improved speckle tracking performance and overcomes several of the issues of conventional algorithms, it still suffers from limitations. For example, CADE exhibits a slight decrease in accuracy for increasing displacements, which may be explained by the training data distribution (centered around zero and ranging from -1 to +1 pixel) due to the patch-based deformation method. However, as the intended application of this study focuses on the retrieval of subpixel displacements, CADE provides superior accuracy and spatial resolution compared to cross-correlation methods. The main limitation of our study is the lack of experimental image data for both training and validation purposes. Speckle patterns can be very well characterized by their statistical properties, and thus simulation and numerical data can be reliably used to model a realistic diffuser setup. Although we were able to successfully validate CADE on a simulated geometric sample, training and testing with real image data with more complex samples would be ideal, as this would produce the most realistic model, but is impractical due to the large amount of data required for model training (i.e., currently 25481 image sets).
In conclusion, this study successfully implemented and validated CADE, a windowless CNN-based speckle tracking method, which demonstrated superior performance, greatly reduced processing times, and robustness to noise. Furthermore, due to its ability to process much higher volumes of data with equal or better tracking accuracy, CADE brings the development and application of single-shot, low-dose XPCI for small-animal imaging one step closer.
2310.15318
HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks
Graphs have emerged as a natural choice to represent and analyze the intricate patterns and rich information of the Web, enabling applications such as online page classification and social recommendation. The prevailing "pre-train, fine-tune" paradigm has been widely adopted in graph machine learning tasks, particularly in scenarios with limited labeled nodes. However, this approach often exhibits a misalignment between the training objectives of pretext tasks and those of downstream tasks. This gap can result in the "negative transfer" problem, wherein the knowledge gained from pre-training adversely affects performance in the downstream tasks. The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a "pre-train, prompt" paradigm to graphs as an alternative. However, existing graph prompting techniques are tailored to homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a general post-training prompting framework to improve the predictive performance of pre-trained heterogeneous graph neural networks (HGNNs). The key is the design of a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt, with the aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-view neighborhood aggregation mechanism, capturing the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification.
Yihong Ma, Ning Yan, Jiayu Li, Masood Mortazavi, Nitesh V. Chawla
2023-10-23T19:35:57Z
http://arxiv.org/abs/2310.15318v3
# HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks ###### Abstract. Graphs have emerged as a natural choice to represent and analyze the intricate patterns and rich information of the Web, enabling applications such as online page classification and social recommendation. The prevailing _"pre-train, fine-tune"_ paradigm has been widely adopted in graph machine learning tasks, particularly in scenarios with limited labeled nodes. However, this approach often exhibits a misalignment between the training objectives of pretext tasks and those of downstream tasks. This gap can result in the "negative transfer" problem, wherein the knowledge gained from pre-training adversely affects performance in the downstream tasks. The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a "_pre-train, prompt_" paradigm to graphs as an alternative. However, existing graph prompting techniques are tailored to homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a general post-training prompting framework to improve the predictive performance of pre-trained heterogeneous graph neural networks (HGNNs). The key is the design of a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt, with the aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-view neighborhood aggregation mechanism, capturing the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification. + Footnote †: *Work done as an intern at Futurewei Technologies Inc. ## 1. Introduction The Web, an ever-expanding digital universe, has transformed into an unparalleled data warehouse. Within this intricate web of data, encompassing diverse entities and patterns, graphs have risen as an intuitive representation to encapsulate and examine the Web's multifaceted content, such as academic articles (Gordner et al., 2017), social media interactions (Gordner et al., 2017), chemical molecules (Gordner et al., 2017), and online grocery items (Gordner et al., 2017). In light of this, graph neural networks (GNNs) have emerged as the state of the art for graph representation learning, which enables a wide range of web-centric applications such as online page classification (Kolmogorov, 2017), social recommendation (Kolmogorov, 2017), pandemic trends forecasting (Kolmogorov, 2017), and dynamic link prediction (Kolmogorov, 2017; Kolmogorov, 2017). A primary challenge in traditional supervised graph machine learning is its heavy reliance on labeled data. Given the magnitude and complexity of the Web, obtaining annotations can be costly and often results in data of low quality. To address this limitation, the _"pre-train, fine-tune"_ paradigm has been widely adopted, where GNNs are initially pre-trained with some self-supervised pretext tasks and are then fine-tuned with labeled data for specific downstream tasks. Yet, this paradigm faces the following challenges: * **(C1)** Fine-tuning methods often overlook the inherent gap between the training objectives of the pretext and the downstream task.
For example, while graph pre-training may utilize binary edge classification to draw topologically proximal node embeddings closer, the core of a downstream node classification task would be to ensure nodes with the same class cluster closely. Such misalignment makes the transferred node embeddings suboptimal for downstream tasks, _i.e._, negative transfer (Kolmogorov, 2017; Kolmogorov, 2017). The challenge arises: _how to reformulate the downstream node classification task to better align with the contrastive pretext task?_ * **(C2)** In semi-supervised node classification, there often exists a scarcity of labeled nodes. This limitation can cause fine-tuned networks to severely overfit these sparse (Kolmogorov, 2017) or potentially imbalanced (Kolmogorov, 2017) nodes, compromising their ability to generalize to new and unlabeled nodes. The challenge arises: _how to capture and generalize the intricate characteristics of each class in the embedding space to mitigate this overfitting?_ * **(C3)** Given the typically large scale of pre-trained GNNs, the attempt to recalibrate all their parameters during the fine-tuning phase can considerably slow down the rate of training convergence. The challenge arises: _how to introduce only a small number of trainable parameters in the fine-tuning stage while keeping the parameters of the pre-trained network unchanged?_ One potential solution that could partially address these challenges is to adapt the _"pre-train, prompt"_ paradigm from natural language processing (NLP) to the graph domain. In NLP, prompt-based learning has effectively generalized pre-trained language models across diverse tasks. For example, a sentiment classification task like _"The WebConf will take place in the scenic city of Singapore in 2024"_ can be reframed by appending a specific textual prompt "_I feel so_ [MASK]" to the end. It is highly likely that a language model pre-trained on next-word prediction will predict "[MASK]" as "_excited_" instead of "_frustrated_", without necessitating extensive fine-tuning. With this methodology, certain downstream tasks can be seamlessly aligned with the pre-training objectives. While some prior works (Devlin et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2021; Wang et al., 2021) have delved into crafting various prompting templates for graphs, their emphasis remains strictly on homogeneous graphs. This narrow focus underscores the last challenge inherent to the heterogeneous graph structures typical of the Web: * (**C4**) Homogeneous graph prompting techniques typically rely on the pre-trained node embeddings of the target node or the aggregation of its immediate neighbors' embeddings for downstream node classification, which ignores the intricate neighborhood structure inherent to heterogeneous graphs. The challenge arises: _how to leverage the complex heterogeneous neighborhood structure of a node to yield more reliable classification decisions?_ To comprehensively address all four aforementioned challenges, we propose HetGPT, a general post-training prompting framework tailored for heterogeneous graphs. Represented by the acronym Heterogeneous Graph Prompt Tuning, HetGPT serves as an auxiliary system for HGNNs that have undergone contrastive pre-training. At the core of HetGPT is a novel _graph prompting function_ that reformulates the downstream node classification task to align closely with the pretext contrastive task.
We begin with the _virtual class prompt_, which generalizes the intricate characteristics of each class in the embedding space. Then we introduce the _heterogeneous feature prompt_, which acts as a task-specific augmentation to the input graph. This prompt is injected into the feature space, and the prompted node features are then passed through the pre-trained HGNN, with all parameters in a frozen state. Furthermore, a _multi-view neighborhood aggregation_ mechanism that encapsulates the complexities of the heterogeneous neighborhood structure is applied to the target node, generating a node token for classification. Finally, pairwise similarity comparisons are performed between the node token and the class tokens derived from the virtual class prompt via the contrastive learning objectives established during pre-training, which effectively simulates the process of deriving a classification decision. In summary, our main contributions include: * To the best of our knowledge, this is the first attempt to adapt the "_pre-train, prompt_" paradigm to heterogeneous graphs. * We propose HetGPT, a general post-training prompting framework tailored for heterogeneous graphs. By coherently integrating a virtual class prompt, a heterogeneous feature prompt, and a multi-view neighborhood aggregation mechanism, it elegantly bridges the objective gap between pre-training and downstream tasks on heterogeneous graphs. * Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification. ## 2. Related Work **Heterogeneous graph neural networks.** Recently, there has been a surge in the development of heterogeneous graph neural networks (HGNNs) designed to learn node representations on heterogeneous graphs (Wang et al., 2019; Wang et al., 2020; Wang et al., 2021). For example, HAN (Wang et al., 2020) introduces hierarchical attention to learn the node-level and semantic-level structures. MAGNN (Chen et al., 2020) incorporates intermediate nodes along metapaths to encapsulate the rich semantic information inherent in heterogeneous graphs. HetGNN (Wang et al., 2020) employs random walks to sample node neighbors and utilizes LSTMs to fuse heterogeneous features. HGT (Han et al., 2019) adopts a transformer-based architecture tailored for web-scale heterogeneous graphs. However, a shared challenge across these models is their dependency on high-quality labeled data for training. In real-world scenarios, obtaining such labeled data can be resource-intensive and sometimes impractical. This has triggered numerous studies to explore pre-training techniques for heterogeneous graphs as an alternative to traditional supervised learning. **Heterogeneous graph pre-training.** Pre-training techniques have gained significant attention in heterogeneous graph machine learning, especially under scenarios with limited labeled nodes (Wang et al., 2019; Wang et al., 2021). Heterogeneous graphs, with their complex types of nodes and edges, require specialized pre-training strategies. These can be broadly categorized into generative and contrastive methods. Generative learning in heterogeneous graphs primarily focuses on reconstructing masked segments of the input graph, either in terms of the underlying graph structures or specific node attributes (Chen et al., 2020; Wang et al., 2021; Wang et al., 2021).
On the other hand, contrastive learning on heterogeneous graphs aims to refine node representations by magnifying the mutual information of positive pairs while diminishing that of negative pairs. Specifically, representations generated from the same data instance form a positive pair, while those from different instances constitute a negative pair. Some methods emphasize contrasting node-level representations (Wang et al., 2019; Wang et al., 2021; Wang et al., 2021), while another direction contrasts node-level representations with graph-level representations (Wang et al., 2019; Wang et al., 2021; Wang et al., 2021). In general, the efficacy of contrastive methods surpasses that of generative ones (Wang et al., 2021), making them the default pre-training strategies adopted in this paper. **Prompt-based learning on graphs.** The recent trend in Natural Language Processing (NLP) has seen a shift from traditional fine-tuning of pre-trained language models (LMs) to a new paradigm: "_pre-train, prompt_" (Wang et al., 2021). Instead of fine-tuning LMs through task-specific objective functions, this paradigm reformulates downstream tasks to resemble pre-training tasks by incorporating textual prompts into input texts. This not only bridges the gap between pre-training and downstream tasks but also instigates further research integrating prompting with pre-trained graph neural networks (Wang et al., 2021). For example, GPPT (Wang et al., 2021) and GraphPrompt (Wang et al., 2021) introduce prompt templates to align the pretext task of link prediction with downstream classification. GPF (Chen et al., 2020) and VNT-GPPE (Wang et al., 2021) employ learnable perturbations to the input graph, modulating pre-trained node representations for downstream tasks. However, all these techniques cater exclusively to homogeneous graphs, overlooking the distinct complexities inherent to the heterogeneity in real-world systems. ## 3. Preliminaries **Definition 1: Heterogeneous graph.** A heterogeneous graph is defined as \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of edges. It is associated with a node type mapping function \(\phi:\mathcal{V}\rightarrow\mathcal{A}\) and an edge type mapping function \(\psi:\mathcal{E}\rightarrow\mathcal{R}\). \(\mathcal{A}\) and \(\mathcal{R}\) denote the node type set and edge type set, respectively. For heterogeneous graphs, we require \(|\mathcal{A}|+|\mathcal{R}|>2\). Let \(\mathcal{X}=\{\mathcal{X}_{A}\mid A\in\mathcal{A}\}\) be the set of all node feature matrices for different node types. Specifically, \(\mathcal{X}_{A}\in\mathbb{R}^{|\mathcal{V}_{A}|\times d_{A}}\) is the feature matrix where each row corresponds to a feature vector \(\mathbf{x}_{i}^{A}\) of node \(i\) of type \(A\). All nodes of type \(A\) share the same feature dimension \(d_{A}\), and nodes of different types can have different feature dimensions. Figure 1(a) illustrates an example heterogeneous graph with three types of nodes: author (A), paper (P), and subject (S), as well as two types of edges: "write" and "belong to". **Definition 2: Network schema.** The network schema is defined as \(\mathcal{S}=(\mathcal{A},\mathcal{R})\), which can be seen as a meta-template for a heterogeneous graph \(\mathcal{G}\). Specifically, the network schema is a graph defined over the set of node types \(\mathcal{A}\), with edges representing relations from the set of edge types \(\mathcal{R}\).
Figure 1(b) presents the network schema for a heterogeneous graph. As per the network schema, we learn that a paper is written by an author and that a paper belongs to a subject. **Definition 3: Metapath.** A metapath \(P\) is a path defined by a pattern of node and edge types, denoted as \(A_{1}\xrightarrow{R_{1}}A_{2}\xrightarrow{R_{2}}\cdots\xrightarrow{R_{L}}A_{L+1}\) (abbreviated as \(A_{1}A_{2}\cdots A_{L+1}\)), where \(A_{i}\in\mathcal{A}\) and \(R_{i}\in\mathcal{R}\). Figure 1(c) shows two metapaths for a heterogeneous graph: "PAP" represents that two papers are written by the same author, while "PSP" indicates that two papers share the same subject. **Definition 4: Semi-supervised node classification.** Given a heterogeneous graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) with node features \(\mathcal{X}\), we aim to predict the labels of the target node set \(\mathcal{V}_{T}\) of type \(T\in\mathcal{A}\). Each target node \(v\in\mathcal{V}_{T}\) corresponds to a class label \(y_{v}\in\mathcal{Y}\). Under the semi-supervised learning setting, while the node labels in the labeled set \(\mathcal{V}_{L}\subset\mathcal{V}_{T}\) are provided, our objective is to predict the labels for nodes in the unlabeled set \(\mathcal{V}_{U}=\mathcal{V}_{T}\setminus\mathcal{V}_{L}\). **Definition 5: Pre-train, fine-tune.** We introduce the "_pre-train, fine-tune_" paradigm for heterogeneous graphs. During the pre-training stage, an encoder \(f_{\theta}\) parameterized by \(\theta\) maps each node \(v\in\mathcal{V}\) to a low-dimensional representation \(\mathbf{h}_{v}\in\mathbb{R}^{d}\). Typically, \(f_{\theta}\) is an HGNN that takes a heterogeneous graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) and its node features \(\mathcal{X}\) as inputs. For each target node \(v\in\mathcal{V}_{T}\), we construct its positive \(\mathcal{P}_{v}\) and negative sample sets \(\mathcal{N}_{v}\) for contrastive learning. The contrastive head \(g_{\psi}\), parameterized by \(\psi\), discriminates the representations between positive and negative pairs. The pre-training objective can be formulated as: \[\theta^{*},\psi^{*}=\operatorname*{arg\,min}_{\theta,\psi}\mathcal{L}_{con}\left(g_{\psi},f_{\theta},\mathcal{V}_{T},\mathcal{P},\mathcal{N}\right), \tag{1}\] where \(\mathcal{L}_{con}\) denotes the contrastive loss. Both \(\mathcal{P}=\{\mathcal{P}_{v}\mid v\in\mathcal{V}_{T}\}\) and \(\mathcal{N}=\{\mathcal{N}_{v}\mid v\in\mathcal{V}_{T}\}\) can be nodes or graphs. They may be direct augmentations or distinct views of the corresponding data instances, contingent on the contrastive learning techniques employed. In the fine-tuning stage, a prediction head \(h_{\eta}\), parameterized by \(\eta\), is employed to optimize the learned representations for the downstream node classification task. Given a set of labeled target nodes \(\mathcal{V}_{L}\) and their corresponding label set \(\mathcal{Y}\), the fine-tuning objective can be formulated as: \[\theta^{**},\eta^{*}=\operatorname*{arg\,min}_{\theta^{*},\eta}\mathcal{L}_{sup}\left(h_{\eta},f_{\theta^{*}},\mathcal{V}_{L},\mathcal{Y}\right), \tag{2}\] where \(\mathcal{L}_{sup}\) is the supervised loss. Notably, the parameters \(\theta\) are initialized with those obtained from the pre-training stage, \(\theta^{*}\).
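To make Definition 3 concrete, metapath-based neighbor sets can be obtained by chaining the type-wise adjacency matrices. The sketch below uses a tiny, made-up ACM-style graph with illustrative variable names (it is not code from the paper) to show how the PAP and PSP adjacencies of Figure 1(c) arise:

```python
import numpy as np

# Illustrative bipartite adjacencies: rows index papers,
# columns index authors / subjects.
A_pa = np.array([[1, 0], [1, 1], [0, 1]])   # paper-author edges
A_ps = np.array([[1, 0], [1, 0], [0, 1]])   # paper-subject edges

# Metapath adjacencies: papers connected via a shared author (PAP)
# or a shared subject (PSP); a nonzero (i, j) means papers i and j
# co-occur along the metapath.
A_pap = (A_pa @ A_pa.T) > 0
A_psp = (A_ps @ A_ps.T) > 0
np.fill_diagonal(A_pap, False)   # drop trivial self-connections
np.fill_diagonal(A_psp, False)
print(np.argwhere(A_pap))        # PAP-based neighbor pairs
```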
## 4. Method In this section, we introduce HetGPT, a novel graph prompting technique specifically designed for heterogeneous graphs, to address the four challenges outlined in Section 1. In particular, HetGPT consists of the following key components: (1) _prompting function design_; (2) _virtual class prompt_; (3) _heterogeneous feature prompt_; (4) _multi-view neighborhood aggregation_; (5) _prompt-based learning and inference_. The overall framework of HetGPT is shown in Figure 2. ### Prompting Function Design (C1) Traditional fine-tuning approaches typically append an additional prediction head and a supervised loss for downstream tasks, as depicted in Equation 2. In contrast, HetGPT pivots towards leveraging and tuning prompts specifically designed for node classification. In prompt-based learning for NLP, a prompting function employs a pre-defined template to modify the textual input, ensuring its alignment with the input format used during pre-training. Meanwhile, within graph-based pre-training, contrastive learning has overshadowed generative learning, especially in heterogeneous graphs (Han et al., 2017; Wang et al., 2018; Wang et al., 2019), as it offers broader applicability and harnesses overlapping task subspaces, which are optimal for knowledge transfer. Therefore, these findings motivate us to reformulate the downstream node classification task to align with contrastive approaches. Subsequently, a good design of the graph prompting function becomes pivotal in matching these contrastive pre-training strategies. Central to graph contrastive learning is the endeavor to maximize mutual information between node-node or node-graph pairs. In light of this, we propose a graph prompting function, denoted as \(l(\cdot)\). This function transforms an input node \(v\) into a pairwise template that encompasses a node token \(\mathbf{z}_{v}\) and a class token \(\mathbf{q}_{c}\): \[l(v)=\left[\mathbf{z}_{v},\mathbf{q}_{c}\right]. \tag{3}\] Within the framework, \(\mathbf{q}_{c}\) represents a trainable embedding for class \(c\) in the downstream node classification task, as explained in Section 4.2. Concurrently, \(\mathbf{z}_{v}\) denotes the latent representation of node \(v\), derived from the pre-trained HGNN, which will be further discussed in Section 4.3 and Section 4.4. ### Virtual Class Prompt (C2) Instead of relying solely on direct class labels, we propose the concept of a virtual class prompt, a paradigm shift from traditional node classification. Serving as a dynamic proxy for each class, the prompt bridges the gap between the abstract representation of nodes and the concrete class labels they are affiliated with. By leveraging the virtual class prompt, we aim to reformulate downstream node classification as a series of mutual information calculation tasks, thereby refining the granularity and adaptability of the classification predictions.

Figure 1. An example of a heterogeneous graph.

This section delves into the design and intricacies of the virtual class prompt, illustrating how it can be seamlessly integrated into the broader contrastive pre-training framework. #### 4.2.1. Class tokens We introduce class tokens, the building blocks of the virtual class prompt, which serve as representative symbols for each specific class. Distinct from discrete class labels, these tokens can capture intricate class-specific semantics, providing a richer context for node classification. We formally define the set of class tokens, denoted as \(\mathcal{Q}\), as follows: \[\mathcal{Q}=\{\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{C}\}, \tag{4}\] where \(C\) is the total number of classes in \(\mathcal{Y}\).
Each token \(\mathbf{q}_{c}\in\mathbb{R}^{d}\) is a trainable vector and shares the same embedding dimension \(d\) with the node representations from the pre-trained network \(f_{\theta^{*}}\). #### 4.2.2. Prompt initialization Effective initialization of class tokens facilitates a smooth knowledge transfer from pre-trained heterogeneous graphs to the downstream node classification. We initialize each class token, \(\mathbf{q}_{c}\), by computing the mean of the embeddings of labeled nodes that belong to the respective class. Formally, \[\mathbf{q}_{c}=\frac{1}{N_{c}}\sum_{\begin{subarray}{c}v\in\mathcal{V}_{L}\\ y_{v}=c\end{subarray}}\mathbf{h}_{v},\quad\forall c\in\{1,2,\ldots,C\}, \tag{5}\] where \(N_{c}\) denotes the number of nodes with class \(c\) in the labeled set \(\mathcal{V}_{L}\), and \(\mathbf{h}_{v}\) represents the pre-trained embedding of node \(v\). This initialization aligns each class token with the prevalent patterns of its respective class, enabling efficient prompt tuning afterward.
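The initialization in Equation (5) amounts to a class-wise mean over pre-trained embeddings. A minimal PyTorch sketch follows, assuming embeddings `h` for the labeled nodes and integer labels `y` (both names are illustrative, and every class is assumed to have at least one labeled node):

```python
import torch

def init_class_tokens(h, y, num_classes):
    """Initialize each class token q_c as the mean pre-trained embedding
    of the labeled nodes belonging to class c (Eq. 5)."""
    q = torch.stack([h[y == c].mean(dim=0) for c in range(num_classes)])
    return torch.nn.Parameter(q)   # (C, d), trainable during prompt tuning

h = torch.randn(100, 64)            # placeholder pre-trained embeddings
y = torch.randint(0, 3, (100,))     # placeholder labels for 3 classes
class_tokens = init_class_tokens(h, y, 3)
```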
### Heterogeneous Feature Prompt (C3) Inspired by recent progress with visual prompts in the vision domain (Bengio et al., 2018; Chen et al., 2019), we propose a heterogeneous feature prompt. This approach incorporates a small number of trainable parameters directly into the feature space of the heterogeneous graph \(\mathcal{G}\). Throughout the training phase of the downstream task, the parameters of the pre-trained network \(f_{\theta^{*}}\) remain unchanged. The key insight behind this feature prompt lies in its ability to act as a task-specific augmentation of the original graph. It implicitly tailors the pre-trained node representations for an effective and efficient transfer of the learned knowledge from pre-training to the downstream task. Prompting techniques fundamentally revolve around the idea of augmenting the input data to better align with the pretext objectives. This makes the design of a graph-level transformation an important factor for the efficacy of prompting. To illustrate, consider a homogeneous graph \(\mathcal{G}\) with its adjacency matrix \(\mathbf{A}\) and node feature matrix \(\mathbf{X}\). We introduce \(t_{\xi}\), a graph-level transformation function parameterized by \(\xi\), such as changing node features, adding or removing edges, _etc_. Prior research (Chen et al., 2018; Wang et al., 2019) has proved that for any transformation function \(t_{\xi}\), there always exists a corresponding feature prompt \(\mathbf{p}^{*}\) that satisfies the following property: \[f_{\theta^{*}}(\mathbf{A},\mathbf{X}+\mathbf{p}^{*})\equiv f_{\theta^{*}}(t_{\xi}(\mathbf{A},\mathbf{X}))+O_{p\theta}, \tag{6}\] where \(O_{p\theta}\) represents the deviation between the node representations from the graph that is augmented by \(t_{\xi}\) and the graph that is prompted by \(\mathbf{p}^{*}\). This discrepancy is primarily contingent on the quality of the learned prompt \(\mathbf{p}^{*}\), as the parameters \(\theta^{*}\) of the pre-trained model are fixed. This perspective further implies the feasibility and significance of crafting an effective feature prompt within the graph's input space, which emulates the impact of learning a specialized augmentation function tailored for downstream tasks.

Figure 2. Overview of the HetGPT architecture: Initially, an HGNN is pre-trained alongside a contrastive head using a contrastive learning objective, after which their parameters are frozen. Following this, a _heterogeneous feature prompt_ (Sec. 4.3) is injected into the input graph's feature space. These prompted node features are then processed by the pre-trained HGNN, producing the prompted node embeddings. Next, a _multi-view neighborhood aggregation_ mechanism (Sec. 4.4) captures both local and global heterogeneous neighborhood information of the target node, generating a node token. Finally, pairwise similarity comparisons are performed between this node token and the class tokens derived from the _virtual class prompt_ (Sec. 4.2) via the same contrastive learning objective from pre-training. As an illustrative example of employing HetGPT for node classification: consider a target node \(P_{2}\) associated with class \(1\); its positive samples during prompt tuning are constructed using the class token of class \(1\), while negative samples are drawn from the class tokens of classes \(2\) and \(3\) (_i.e._, all remaining classes).

However, in heterogeneous graphs, nodes exhibit diverse attributes based on their types, and each type has unique dimensionalities and underlying semantic meanings. Take a citation network for instance: while paper nodes have features represented by word embeddings derived from their abstracts, author nodes utilize one-hot encodings as features. Given this heterogeneity, the approach used in homogeneous graph prompting methods may not be effective or yield optimal results when applied to heterogeneous graphs, as it uniformly augments node features for all node types via a single, all-encompassing feature prompt. #### 4.3.1. Type-specific feature tokens To address the above challenge, we introduce type-specific feature tokens, which are a set of designated tokens that align with the diverse input features inherent to each node type. Given the diversity in scales and structures across various graphs, equating the number of feature tokens to the node count is often sub-optimal. This inefficiency is especially obvious in large-scale graphs, as this design demands extensive storage due to its \(O(|\mathcal{V}|)\) learnable parameters. In light of this, for each node type, we employ a feature prompt consisting of a limited set of \(K\) independent basis vectors, _i.e._, \(f_{k}^{A}\in\mathbb{R}^{d_{A}}\), with \(d_{A}\) as the feature dimension associated with node type \(A\in\mathcal{A}\): \[\mathcal{F}=\{\mathcal{F}_{A}\mid A\in\mathcal{A}\},\qquad\mathcal{F}_{A}=\left\{f_{1}^{A},f_{2}^{A},\ldots,f_{K}^{A}\right\}, \tag{7}\] where \(K\) is a hyperparameter and its value can be adjusted based on the specific dataset in use. #### 4.3.2. Prompted node features For each node \(i\) of type \(A\in\mathcal{A}\), its node feature vector \(\mathbf{x}_{i}^{A}\) is augmented by a linear combination of the feature tokens \(f_{k}^{A}\) through an attention mechanism, where the attention weights are denoted by \(w_{i,k}^{A}\). Consequently, the prompted node feature vector evolves as: \[\tilde{\mathbf{x}}_{i}^{A}=\mathbf{x}_{i}^{A}+\sum_{k=1}^{K}w_{i,k}^{A}\cdot f_{k}^{A}, \tag{8}\] \[w_{i,k}^{A}=\frac{\exp\left(\sigma\left((f_{k}^{A})^{\top}\cdot\mathbf{x}_{i}^{A}\right)\right)}{\sum_{j=1}^{K}\exp\left(\sigma\left((f_{j}^{A})^{\top}\cdot\mathbf{x}_{i}^{A}\right)\right)}, \tag{9}\] where \(\sigma(\cdot)\) represents a non-linear activation function. Subsequently, we utilize these prompted node features, represented as \(\tilde{\mathcal{X}}\), together with the heterogeneous graph \(\mathcal{G}\). They are then passed through the pre-trained HGNN \(f_{\theta^{*}}\) during the prompt tuning phase to obtain a prompted node embedding matrix \(\tilde{\mathbf{H}}\): \[\tilde{\mathbf{H}}=f_{\theta^{*}}(\mathcal{G},\tilde{\mathcal{X}})\in\mathbb{R}^{|\mathcal{V}|\times d}. \tag{10}\]
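A hedged PyTorch sketch of the type-specific feature prompt in Equations (8)-(9) for a single node type is given below; the module and tensor names are illustrative, while LeakyReLU and Kaiming initialization follow the implementation details reported in Section 5.1.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TypeFeaturePrompt(nn.Module):
    """Feature prompt for one node type: K trainable basis vectors,
    combined per node with attention weights (Eqs. 8-9)."""
    def __init__(self, feat_dim, num_tokens=5):
        super().__init__()
        self.tokens = nn.Parameter(torch.empty(num_tokens, feat_dim))
        nn.init.kaiming_uniform_(self.tokens)      # Kaiming init, per Sec. 5.1.3

    def forward(self, x):                           # x: (N_A, d_A)
        scores = F.leaky_relu(x @ self.tokens.t())  # (N_A, K) attention logits
        w = torch.softmax(scores, dim=1)            # Eq. 9
        return x + w @ self.tokens                  # Eq. 8: prompted features

prompt = TypeFeaturePrompt(feat_dim=128, num_tokens=5)
x_paper = torch.randn(4019, 128)                    # placeholder paper features
x_tilde = prompt(x_paper)                           # fed to the frozen HGNN
```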
### Multi-View Neighborhood Aggregation (C4) In prompt-based learning for homogeneous graphs, the node token \(\mathbf{z}_{v}\) in Equation 3 for a given node \(v\in\mathcal{V}\) is directly equated to \(\mathbf{h}_{v}\), which is the embedding generated by the pre-trained network \(f_{\theta^{*}}\) (Wang et al., 2017). Alternatively, it can also be derived from an aggregation of the embeddings of its immediate neighboring nodes (Wang et al., 2017). However, in heterogeneous graphs, such aggregations are complicated due to the inherent heterogeneity of neighboring structures. For example, given a target node with the type "paper", connections can be established either with other "paper" nodes through different metapaths (_e.g._, PAP, PSP) or with nodes of varied types (_i.e._, author or subject) based on the network schema. Furthermore, it is also vital to leverage the prompted pre-trained node embeddings \(\tilde{\mathbf{H}}\) (as detailed in Section 4.3) in the aggregation. Taking all these into consideration, we introduce a multi-view neighborhood aggregation mechanism. This strategy incorporates both type-based and metapath-based neighbors, ensuring a comprehensive representation that captures both local (_i.e._, network schema) and global (_i.e._, metapath) patterns. #### 4.4.1. Type-based aggregation Based on the network schema outlined in Definition 2, a target node \(i\in\mathcal{V}_{T}\) can directly connect to \(M\) different node types \(\{A_{1},A_{2},\ldots,A_{M}\}\). Given the variability in contributions from different nodes of the same type to node \(i\) and the diverse influence from various types of neighbors, we utilize a two-level attention mechanism (Wang et al., 2017) to aggregate the local information of node \(i\). For the first level, the information \(\mathbf{h}_{i}^{A_{m}}\) is fused from the neighbor set \(\mathcal{N}_{i}^{A_{m}}\) of node \(i\) using node attention: \[\mathbf{h}_{i}^{A_{m}}=\sigma\left(\sum_{j\in\mathcal{N}_{i}^{A_{m}}\cup\{i\}}\alpha_{i,j}^{A_{m}}\cdot\tilde{\mathbf{h}}_{j}\right), \tag{11}\] \[\alpha_{i,j}^{A_{m}}=\frac{\exp\left(\sigma\left(\mathbf{a}_{A_{m}}^{\top}\cdot[\tilde{\mathbf{h}}_{i}\|\tilde{\mathbf{h}}_{j}]\right)\right)}{\sum_{k\in\mathcal{N}_{i}^{A_{m}}\cup\{i\}}\exp\left(\sigma\left(\mathbf{a}_{A_{m}}^{\top}\cdot[\tilde{\mathbf{h}}_{i}\|\tilde{\mathbf{h}}_{k}]\right)\right)}, \tag{12}\] where \(\sigma(\cdot)\) is a non-linear activation function, \(\|\) denotes concatenation, and \(\mathbf{a}_{A_{m}}\in\mathbb{R}^{2d\times 1}\) is the node attention vector shared across all nodes of type \(A_{m}\).
For the second level, the type-based embedding of node \(i\), denoted as \(\mathbf{z}_{i}^{\text{TP}}\), is derived by synthesizing all type representations \(\{\mathbf{h}_{i}^{A_{1}},\mathbf{h}_{i}^{A_{2}},\ldots,\mathbf{h}_{i}^{A_{M}}\}\) through semantic attention: \[\mathbf{z}_{i}^{\text{TP}}=\sum_{m=1}^{M}\beta_{A_{m}}\cdot\mathbf{h}_{i}^{A_{m}},\quad\beta_{A_{m}}=\frac{\exp(w_{A_{m}})}{\sum_{k=1}^{M}\exp(w_{A_{k}})}, \tag{13}\] \[w_{A_{m}}=\frac{1}{|\mathcal{V}_{T}|}\sum_{i\in\mathcal{V}_{T}}\mathbf{a}_{\text{TP}}^{\top}\cdot\tanh(\mathbf{W}_{\text{TP}}\cdot\mathbf{h}_{i}^{A_{m}}+\mathbf{b}_{\text{TP}}), \tag{14}\] where \(\mathbf{a}_{\text{TP}}\in\mathbb{R}^{d\times 1}\) is the type-based semantic attention vector shared across all node types, \(\mathbf{W}_{\text{TP}}\in\mathbb{R}^{d\times d}\) is the weight matrix, and \(\mathbf{b}_{\text{TP}}\in\mathbb{R}^{d\times 1}\) is the bias vector. #### 4.4.2. Metapath-based aggregation In contrast to type-based aggregation, metapath-based aggregation provides a perspective to capture the global information of a target node \(i\in\mathcal{V}_{T}\). This is attributed to the nature of metapaths, which encompass connections that are at least two hops away. Given a set of defined metapaths \(\{P_{1},P_{2},\ldots,P_{N}\}\), the information from neighbors of node \(i\) connected through metapath \(P_{n}\) is aggregated via node attention: \[\mathbf{h}_{i}^{P_{n}}=\sigma\left(\sum_{j\in\mathcal{N}_{i}^{P_{n}}\cup\{i\}}\alpha_{i,j}^{P_{n}}\cdot\tilde{\mathbf{h}}_{j}\right), \tag{15}\] \[\alpha_{i,j}^{P_{n}}=\frac{\exp\left(\sigma\left(\mathbf{a}_{P_{n}}^{\top}\cdot[\tilde{\mathbf{h}}_{i}\|\tilde{\mathbf{h}}_{j}]\right)\right)}{\sum_{k\in\mathcal{N}_{i}^{P_{n}}\cup\{i\}}\exp\left(\sigma\left(\mathbf{a}_{P_{n}}^{\top}\cdot[\tilde{\mathbf{h}}_{i}\|\tilde{\mathbf{h}}_{k}]\right)\right)}, \tag{16}\] where \(\mathbf{a}_{P_{n}}\in\mathbb{R}^{2d\times 1}\) is the node attention vector shared across all nodes connected through metapath \(P_{n}\). To compile the global structural information from various metapaths, we fuse the node embeddings \(\{\mathbf{h}_{i}^{P_{1}},\mathbf{h}_{i}^{P_{2}},\ldots,\mathbf{h}_{i}^{P_{N}}\}\) derived from each metapath into a single embedding using semantic attention: \[\mathbf{z}_{i}^{\text{MP}}=\sum_{n=1}^{N}\beta_{P_{n}}\cdot\mathbf{h}_{i}^{P_{n}},\quad\beta_{P_{n}}=\frac{\exp(w_{P_{n}})}{\sum_{k=1}^{N}\exp(w_{P_{k}})}, \tag{17}\] \[w_{P_{n}}=\frac{1}{|\mathcal{V}_{T}|}\sum_{i\in\mathcal{V}_{T}}\mathbf{a}_{\text{MP}}^{\top}\cdot\tanh(\mathbf{W}_{\text{MP}}\cdot\mathbf{h}_{i}^{P_{n}}+\mathbf{b}_{\text{MP}}), \tag{18}\] where \(\mathbf{a}_{\text{MP}}\in\mathbb{R}^{d\times 1}\) is the metapath-based semantic attention vector shared across all metapaths, \(\mathbf{W}_{\text{MP}}\in\mathbb{R}^{d\times d}\) is the weight matrix, and \(\mathbf{b}_{\text{MP}}\in\mathbb{R}^{d\times 1}\) is the bias vector. Integrating the information from both aggregation views, we obtain the final node token \(\mathbf{z}_{i}\) by concatenating the type-based and the metapath-based embeddings: \[\mathbf{z}_{i}=\sigma\left(\mathbf{W}[\mathbf{z}_{i}^{\text{MP}}\|\mathbf{z}_{i}^{\text{TP}}]+\mathbf{b}\right), \tag{19}\] where \(\sigma(\cdot)\) is a non-linear activation function, \(\mathbf{W}\in\mathbb{R}^{d\times 2d}\) is the weight matrix, and \(\mathbf{b}\in\mathbb{R}^{d\times 1}\) is the bias vector.
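Both aggregation views share the same semantic-attention pattern (Equations (13)-(14) and (17)-(18)): each view's summary embeddings are scored with a shared vector, averaged over the target nodes, and softmax-normalized. A hedged PyTorch sketch follows, with illustrative names; the per-view node attention of Equations (11)-(12) and (15)-(16) is assumed to have been applied already.

```python
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    """Fuse per-type or per-metapath node embeddings with attention
    weights shared across all target nodes (Eqs. 13-14 / 17-18)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1, bias=False))

    def forward(self, views):                # views: (V, N, d), V = # views
        w = self.proj(views).mean(dim=1)     # (V, 1): score per view, averaged over nodes
        beta = torch.softmax(w, dim=0)       # attention over views
        return (beta.unsqueeze(-1) * views).sum(dim=0)   # (N, d) fused embedding

views = torch.randn(3, 4019, 64)             # e.g., PAP / PSP / type-based views
z = SemanticAttention(64)(views)
```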
### Prompt-Based Learning and Inference Building upon our prompt design detailed in the preceding sections, we present a comprehensive overview of the prompt-based learning and inference process for semi-supervised node classification. This methodology encompasses three primary stages: (1) _prompt addition_, (2) _prompt tuning_, and (3) _prompt-assisted prediction_. #### 4.5.1. Prompt addition Based on the graph prompting function \(l(\cdot)\) outlined in Equation (3), we parameterize it using the trainable virtual class prompt \(\mathcal{Q}\) and the heterogeneous feature prompt \(\mathcal{F}\). To ensure compatibility during the contrastive loss calculation, which we detail later, we use a single-layer multilayer perceptron (MLP) to project both \(\mathbf{z}_{v}\) and \(\mathbf{q}_{c}\) onto the same embedding space. Formally: \[\mathbf{z}_{v}^{\prime}=\text{MLP}(\mathbf{z}_{v}),\qquad\mathbf{q}_{c}^{\prime}=\text{MLP}(\mathbf{q}_{c}),\qquad l_{\mathcal{Q},\mathcal{F}}(v)=[\mathbf{z}_{v}^{\prime},\mathbf{q}_{c}^{\prime}]. \tag{20}\] #### 4.5.2. Prompt tuning Our prompt design allows us to reuse the contrastive head from Equation 1 for downstream node classification without introducing a new prediction head. Thus, the original positive samples \(\mathcal{P}_{v}\) and negative samples \(\mathcal{N}_{v}\) of a labeled node \(v\in\mathcal{V}_{L}\) used during pre-training are replaced with the virtual class prompt corresponding to its given class label \(y_{v}\): \[\mathcal{P}_{v}=\left\{\mathbf{q}_{y_{v}}\right\},\qquad\mathcal{N}_{v}=\mathcal{Q}\setminus\left\{\mathbf{q}_{y_{v}}\right\}. \tag{21}\] Consistent with the contrastive pre-training phase, we employ the InfoNCE (Liu et al., 2019) loss to replace the supervised classification loss \(\mathcal{L}_{sup}\): \[\mathcal{L}_{con}=-\sum_{v\in\mathcal{V}_{L}}\log\left(\frac{\exp(\text{sim}(\mathbf{z}_{v}^{\prime},\mathbf{q}_{y_{v}}^{\prime})/\tau)}{\sum_{c=1}^{C}\exp(\text{sim}(\mathbf{z}_{v}^{\prime},\mathbf{q}_{c}^{\prime})/\tau)}\right). \tag{22}\] Here, \(\text{sim}(\cdot)\) denotes a similarity function between two vectors, and \(\tau\) denotes a temperature hyperparameter. To obtain the optimal prompts, we utilize the following prompt tuning objective: \[\mathcal{Q}^{*},\mathcal{F}^{*}=\operatorname*{arg\,min}_{\mathcal{Q},\mathcal{F}}\mathcal{L}_{con}\left(g_{\psi^{*}},f_{\theta^{*}},l_{\mathcal{Q},\mathcal{F}},\mathcal{V}_{L}\right)+\lambda\mathcal{L}_{orth}, \tag{23}\] where \(\lambda\) is a regularization hyperparameter. The orthogonal regularization (Bang et al., 2019) loss \(\mathcal{L}_{orth}\) is defined to ensure the class tokens in the virtual class prompt remain orthogonal during prompt tuning, fostering diversified representations of different classes: \[\mathcal{L}_{orth}=\left\|\mathbf{Q}\mathbf{Q}^{\top}-\mathbf{I}\right\|_{F}^{2}, \tag{24}\] where \(\mathbf{Q}=[\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{C}]^{\top}\in\mathbb{R}^{C\times d}\) is the matrix form of the virtual class prompt \(\mathcal{Q}\), and \(\mathbf{I}\in\mathbb{R}^{C\times C}\) is an identity matrix.
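Putting Equations (22)-(24) together, the prompt-tuning loss can be sketched as below; cosine similarity is used as an illustrative stand-in for \(\text{sim}(\cdot)\) (the choice follows the pre-training objective), the orthogonality penalty is applied to the class-token matrix as in Equation (24), and all tensor names are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def prompt_tuning_loss(z, q, y, tau=0.5, lam=0.01):
    """z: (B, d) projected node tokens; q: (C, d) class tokens;
    y: (B,) labels.  InfoNCE over class tokens (Eq. 22) plus
    orthogonal regularization on the class prompt (Eq. 24)."""
    sim = F.cosine_similarity(z.unsqueeze(1), q.unsqueeze(0), dim=-1) / tau
    loss_con = F.cross_entropy(sim, y)      # -log softmax at the true class token
    qq = q @ q.t()
    loss_orth = ((qq - torch.eye(q.size(0))) ** 2).sum()   # squared Frobenius norm
    return loss_con + lam * loss_orth

z = torch.randn(16, 64, requires_grad=True)   # placeholder node tokens
q = torch.randn(3, 64, requires_grad=True)    # placeholder class tokens
y = torch.randint(0, 3, (16,))
loss = prompt_tuning_loss(z, q, y)
loss.backward()                               # gradients flow only to the prompts
```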
#### 4.5.3. Prompt-assisted prediction

During the inference phase, for an unlabeled target node \(v\in\mathcal{V}_{U}\), the predicted probability of node \(v\) belonging to class \(c\) is given by: \[P(y_{v}=c)=\frac{\exp(\text{sim}(\mathbf{z}_{v}^{\prime},\mathbf{q}_{c}^{\prime}))}{\sum_{k=1}^{C}\exp(\text{sim}(\mathbf{z}_{v}^{\prime},\mathbf{q}_{k}^{\prime}))}. \tag{25}\] This equation computes the similarity between the projected node token \(\mathbf{z}_{v}^{\prime}\) and each projected class token \(\mathbf{q}_{c}^{\prime}\), using the softmax function to obtain class probabilities. The class with the maximum likelihood for node \(v\) is designated as the predicted class \(\hat{y}_{v}\): \[\hat{y}_{v}=\operatorname*{arg\,max}_{c}P(y_{v}=c). \tag{26}\]

## 5. Experiments

In this section, we conduct a thorough evaluation of our proposed HetGPT to address the following research questions:
* **(RQ1)** Can HetGPT improve the performance of pre-trained heterogeneous graph neural networks on the semi-supervised node classification task?
* **(RQ2)** How does HetGPT perform under different settings, _i.e._, ablated models and hyperparameters?
* **(RQ3)** How does the prompt tuning efficiency of HetGPT compare to its fine-tuning counterpart?
* **(RQ4)** How interpretable is the learned prompt in HetGPT?

### Experiment Settings

#### 5.1.1. Datasets

We evaluate our methods using three benchmark datasets: ACM (Yang et al., 2019), DBLP (Chen et al., 2019), and IMDB (Chen et al., 2019). Detailed statistics and descriptions of these datasets can be found in Table 1. For the semi-supervised node classification task, we randomly select 1, 5, 20, 40, or 60 labeled nodes per class as our training set. Additionally, we set aside 1,000 nodes for validation and another 1,000 nodes for testing. Our evaluation metrics include Macro-F1 and Micro-F1.

| Dataset | # Nodes | # Edges | Metapaths | # Classes |
| --- | --- | --- | --- | --- |
| ACM | *Paper*: 4,019; Author: 7,167; Subject: 60 | P-A: 13,407; P-S: 4,019 | PAP, PSP | 3 |
| DBLP | *Author*: 4,057; Paper: 14,328; Term: 7,723; Conference: 20 | P-A: 19,645; P-T: 85,810; P-C: 14,328 | APA, APCPA, APTPA | 4 |
| IMDB | *Movie*: 4,278; Director: 2,081; Actor: 5,257 | M-D: 4,278; M-A: 12,828 | MAM, MDM | 3 |

Table 1. Detailed statistics of the benchmark datasets. Italicized node types are the target nodes for classification.

#### 5.1.2. Baseline models

We compare our approach against methods belonging to three different categories:
* **Supervised HGNNs:** HAN (Liu et al., 2019), HGT (Chen et al., 2019), MAGNN (Chen et al., 2019);
* **HGNNs with "pre-train, fine-tune":**
  * **Generative:** HGMAE [(30)];
  * **Contrastive (our focus):** DMGI [(24)], HeCo [(37)], HDMI [(15)];
* **GNNs with "pre-train, prompt":** GPPT [(27)].

#### 5.1.3. Implementation details

For the homogeneous method GPPT, we evaluate using all the metapaths and present the results with the best performance. Regarding the parameters of other baselines, we adhere to the configuration specified in their original papers. In our HetGPT model, the heterogeneous feature prompt is initialized using Kaiming initialization [(9)]. During the prompt tuning phase, we employ the Adam optimizer [(16)] and search within a learning rate ranging from 1e-4 to 5e-3. We also tune the patience for early stopping from 20 to 100. The regularization hyperparameter \(\lambda\) is set to 0.01.
We experiment with the number of feature tokens \(K\), searching values from {1, 5, 10, 15, 20}. Lastly, for our non-linear activation function \(\sigma(\cdot)\), we use LeakyReLU. ### Performance on Node Classification (RQ1) Experiment results for semi-supervised node classification on three benchmark datasets are detailed in Table 2. Compared to the pre-trained DMGI, HeCo, and HDMI models, our post-training prompting framework, HetGPT, exhibits superior performance in 88 out of the 90 comparison pairs. Specifically, we observe a relative improvement of 3.00% in Macro-F1 and 2.62% in Micro-F1. The standard deviation of HetGPT aligns closely with that of the original models, indicating that the improvement achieved is both substantial and robust. It's crucial to note that the three HGNNs with _"pre-train, fine-tune"_ - DMGI, HeCo, and HDMI, are already among the state-of-the-art methods for semi-supervised node classification. By integrating them with HetGPT, we push the envelope even further, setting a new performance pinnacle. Furthermore, HetGPT's edge becomes even more significant in scenarios where labeled nodes are extremely scarce, achieving an improvement of 6.60% in Macro-F1 and 6.88% in Micro-F1 under the 1-shot setting. Such marked improvements in few-shot performance strongly suggest HetGPT's efficacy in mitigating the overfitting issue. The strategic design of our prompting function, especially the virtual class prompt, effectively captures the intricate characteristics of each class, which can potentially obviate the reliance on costly annotated data. Additionally, GPPT lags considerably on all datasets, which further underscores the value of HetGPT's effort in tackling the unique challenges inherent to heterogeneous graphs. ### Performance under Different Settings (RQ2) #### 5.3.1. 
Ablation study

To further demonstrate the effectiveness of each module in HetGPT, we conduct an ablation study to evaluate our full framework against the following three variants:
* **w/o VCP**: the variant of HetGPT without the virtual class prompt from Section 4.2;
* **w/o HFP**: the variant of HetGPT without the heterogeneous feature prompt from Section 4.3;
* **w/o MNA**: the variant of HetGPT without the multi-view neighborhood aggregation from Section 4.4.

[Table 2. Semi-supervised node classification performance (Macro-F1 and Micro-F1, mean ± std) of HAN, HGT, MAGNN, HGMAE, GPPT, and of DMGI, HeCo, and HDMI with and without HetGPT, on ACM, DBLP, and IMDB under 1/5/20/40/60 labeled nodes per class.]
Experiment results on ACM and DBLP, shown in Figure 3, highlight the substantial contributions of each module to the overall effectiveness of HetGPT. Notably, the virtual class prompt emerges as the most pivotal component, indicated by the significant performance drop when it is absent. This degradation mainly stems from the overfitting issue linked to the negative transfer problem, especially when labeled nodes are sparse. The virtual class prompt directly addresses this issue by generalizing the intricate characteristics of each class within the embedding space.

#### 5.3.2. Hyper-parameter sensitivity

We evaluate the sensitivity of HetGPT to its primary hyperparameter: the number of basis feature tokens \(K\) in Equation (7). As depicted in Figure 4, even a small value of \(K\) (_i.e._, 5 for ACM, 20 for DBLP, and 5 for IMDB) can lead to satisfactory node classification performance. This suggests that prompt tuning effectively optimizes performance without the need to introduce an extensive number of new parameters.

### Prompt Tuning Efficiency Analysis (RQ3)

Our HetGPT, encompassing the virtual class prompt and the heterogeneous feature prompt, adds only a few new trainable parameters (_i.e._, comparable to a shallow MLP). Concurrently, the parameters of the pre-trained HGNNs and the contrastive head remain unchanged during the entire prompt tuning phase. Figure 5 illustrates that HetGPT converges notably faster than its traditional "_pre-train, fine-tune_" counterpart, which both recalibrates the parameters of the pre-trained HGNNs and introduces a new prediction head. This further demonstrates the efficiency benefits of our proposed framework, allowing for effective training with minimal tuning iterations.

### Interpretability Analysis (RQ4)

To gain a clear understanding of how the design of the virtual class prompt facilitates effective node classification without relying on the traditional classification paradigm, we employ a t-SNE plot to visualize the node representations and the learned virtual class prompt on ACM and DBLP, as shown in Figure 6. Within this visualization, nodes are depicted as colored circles, while the class tokens from the learned virtual class prompt are denoted by colored stars. Each color represents a unique class label. Notably, the embeddings of these class tokens are positioned in close vicinity to clusters of node embeddings sharing the same class label. This close spatial proximity between a node and its respective class token validates the efficacy of the similarity measures inherited from the contrastive pretext task for the downstream node classification task. This observation further reinforces the rationale behind our node classification approach using the virtual class prompt, _i.e._, a node is labeled as the class whose embedding it is most closely aligned with.

## 6. Conclusion

In this paper, we propose HetGPT, a general post-training prompting framework to improve the node classification performance of pre-trained heterogeneous graph neural networks. Recognizing the prevalent issue of misalignment between the objectives of pretext and downstream tasks, we craft a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt. Furthermore, our framework incorporates a multi-view neighborhood aggregation mechanism to capture the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate the effectiveness of HetGPT.
For future work, we are interested in exploring the potential of prompting methods in tackling the class-imbalance problem on graphs or broadening the applicability of our framework to diverse graph tasks, such as link prediction and graph classification. Figure 4. Performance of HetGPT with the different number of basis feature vectors on ACM, DBLP, and IMDB. Figure 5. Comparison of training losses over epochs between HetGPT and its fine-tuning counterpart on DBLP and IMDB. Figure 3. Ablation study of HetGPT on ACM and IMDB. Figure 6. Visualization of the learned node tokens and class tokens in virtual class prompt on ACM and DBLP.
2308.11334
DeepBurning-MixQ: An Open Source Mixed-Precision Neural Network Accelerator Design Framework for FPGAs
Mixed-precision neural networks (MPNNs) that enable the use of just enough data width for a deep learning task promise significant advantages of both inference accuracy and computing overhead. FPGAs with fine-grained reconfiguration capability can adapt the processing with distinct data width and models, and hence, can theoretically unleash the potential of MPNNs. Nevertheless, commodity DPUs on FPGAs mostly emphasize generality and have limited support for MPNNs especially the ones with lower data width. In addition, primitive DSPs in FPGAs usually have much larger data width than that is required by MPNNs and haven't been sufficiently co-explored with MPNNs yet. To this end, we propose an open source MPNN accelerator design framework specifically tailored for FPGAs. In this framework, we have a systematic DSP-packing algorithm to pack multiple lower data width MACs in a single primitive DSP and enable efficient implementation of MPNNs. Meanwhile, we take DSP packing efficiency into consideration with MPNN quantization within a unified neural network architecture search (NAS) framework such that it can be aware of the DSP overhead during quantization and optimize the MPNN performance and accuracy concurrently. Finally, we have the optimized MPNN fine-tuned to a fully pipelined neural network accelerator template based on HLS and make best use of available resources for higher performance. Our experiments reveal the resulting accelerators produced by the proposed framework can achieve overwhelming advantages in terms of performance, resource utilization, and inference accuracy for MPNNs when compared with both handcrafted counterparts and prior hardware-aware neural network accelerators on FPGAs.
Erjing Luo, Haitong Huang, Cheng Liu, Guoyu Li, Bing Yang, Ying Wang, Huawei Li, Xiaowei Li
2023-08-22T10:17:53Z
http://arxiv.org/abs/2308.11334v1
DeepBurning-MixQ: An Open Source Mixed-Precision Neural Network Accelerator Design Framework for FPGAs ###### Abstract Mixed-precision neural networks (MPNNs) that enable the use of just enough data width for a deep learning task promise significant advantages of both inference accuracy and computing overhead. FPGAs with fine-grained reconfiguration capability can adapt the processing with distinct data width and models, and hence, can theoretically unleash the potential of MPNNs. Nevertheless, commodity DPUs on FPGAs mostly emphasize generality and have limited support for MPNNs especially the ones with lower data width. In addition, primitive DSPs in FPGAs usually have much larger data width than that required by MPNNs and haven't been sufficiently co-explored with MPNNs yet. To this end, we propose an open source MPNN accelerator design framework specifically tailored for FPGAs. In this framework, we have a systematic DSP-packing algorithm to pack multiple lower data width MACs in a single primitive DSP and enable efficient implementation of MPNNs. Meanwhile, we take DSP packing efficiency into consideration with MPNN quantization within a unified neural network architecture search (NAS) framework such that it can be aware of the DSP overhead during quantization and optimize the MPNN performance and accuracy concurrently. Finally, we have the optimized MPNN fine-tuned to a fully pipelined neural network accelerator template based on HLS and make the best use of available resources for higher performance. Our experiments reveal that the resulting accelerators produced by the proposed framework can achieve overwhelming advantages in terms of performance, resource utilization, and inference accuracy for MPNNs when compared with both handcrafted counterparts and prior hardware-aware neural network accelerators on FPGAs.

_Index terms_: DSP packing, mixed precision neural network, neural network architecture search, quantization and implementation co-optimization.

## I Introduction

Quantization is a straightforward yet effective approach [1][2] to compress neural network models that are usually both computing- and memory-intensive. Since a neural network's sensitivity to quantization varies across the layers, mixed-precision quantization that allows more fine-grained quantization achieves significant advantages in both computing efficiency and memory access efficiency compared to classical uniform quantization [3][4], which ultimately contributes to the neural network processing throughput and energy efficiency. Nevertheless, most of the commodity neural network computing engines including CPUs, GPUs, and NPUs have limited support for arbitrary mixed-precision neural network (MPNN) processing, especially low data width processing, due to the lack of native mixed precision computing elements. In contrast, FPGAs with fine-grained reconfiguration capability can provide model specific implementations [5][6] and suit the various computing requirements of MPNNs with native hardware, and hence, can unleash the potential of MPNNs. Despite the fine-grained reconfiguration capability, FPGAs mainly rely on digital signal processing unit (DSP) cores with limited reconfiguration for efficient arithmetic implementations. For example, Xilinx integrates the DSP48E2, which can support a 27 \(\times\) 18 two's complement multiplication, on its UltraScale FPGAs [7], while Intel Arria 10 devices have DSP cores that can be configured to two 18 \(\times\) 19 multipliers or one 27 \(\times\) 27 multiplier [8].
Although Lookup-Tables (LUTs) can also be utilized to implement arithmetic operations with arbitrary data width, DSPs generally outperform the LUT-based implementations in terms of both latency and energy efficiency [9], especially for higher data width operations. Since MPNNs typically involve many low data width operations that are much smaller than the data width of primitive DSPs, straightforward implementation of MPNNs can result in considerable waste of the DSP resources. To address the problem, a variety of works have attempted to pack multiple low data width operations into a single DSP block [10][11][12] and make full use of the computing capability of the DSPs. For instance, Xilinx's INT8 [12] and INT4 [11] demonstrate that two 8-bit multiplications and four 4-bit multiplications, respectively, can be fit into a single DSP48E2. The authors in [10] also explored the use of high data width processing engines for low data width processing. Since MPNN quantization affects not only the accuracy but also the DSP packing efficiency and, eventually, the performance, performing the quantization and DSP packing separately will lead to suboptimal results. In fact, there have been intensive efforts devoted to optimizing neural network model accuracy and performance at the same time. For instance, [13] employs an RL agent with direct hardware metrics feedback to co-optimize accuracy, latency, and energy consumption. [14] incorporates quantization and other neural architecture parameters into a design space, and uses a gradient-descent method for optimizing both algorithms and hardware implementation. However, there is still a lack of co-optimization between DSP packing and MPNN quantization. In this work, we propose a systematic DSP packing algorithm that can squeeze multiple low data width arithmetic operations into a single primitive DSP of FPGAs. Since the DSP packing efficiency varies substantially across convolution kernels with different parameters, the overall performance of an MPNN can be inconsistent with the data width of the model. Then, we leverage a differentiable NAS to take both the DSP packing and MPNN quantization into consideration and co-optimize the model accuracy and performance at the same time. Finally, with the optimized DSP packing and quantization determined by the NAS, we have an MPNN accelerator generated automatically based on a fully pipelined neural network accelerator template. The major contributions of this work can be summarized as follows: * We propose a mixed DSP packing algorithm that takes advantage of both the _Kernel Packing_ and _Filter Packing_ strategies for arbitrary low data width convolution on FPGAs. In addition, we also enhance the DSP packing with the _Overpacking_ and _Operation Separation_ techniques. The resulting DSP packing algorithm outperforms all the existing strategies significantly. * On top of the proposed DSP packing algorithm, we propose an MPNN accelerator design framework that leverages a differentiable NAS to take both the DSP packing and quantization into consideration and optimizes the model accuracy and performance at the same time. With the optimized DSP packing strategy and quantization setups, this framework can further generate high-performance MPNN accelerators based on a fully pipelined HLS template. The framework is open sourced on GitHub (see Footnote 1).
Footnote 1: [https://github.com/fffasttime/AnyPackingNet/](https://github.com/fffasttime/AnyPackingNet/)

* According to our experiments on a set of different MPNNs, the MPNN accelerators generated with the proposed framework outperform state-of-the-art counterparts significantly in terms of performance, resource utilization, and prediction accuracy.

## II Related Work

Mixed-precision neural networks (MPNNs) that enable just enough data width for each different neural network layer can greatly reduce the requirements of computing, memory bandwidth, and storage. Therefore, MPNNs promise great computing efficiency when compared to neural network models with unified data width. Since the number of low bit-width operations is inconsistent with the realistic computing efficiency due to the lack of primitive MPNN support on existing computing engines including CPUs, GPUs, and NPUs, many prior works seek to co-optimize quantization and computing efficiency [15][16] to ensure efficient MPNN implementation on specific computing engines. Specifically, [13][17][18][19][20][21] leverage the network architecture search (NAS) technique, which automates neural network design, for the co-optimization. For instance, [20] adds a hardware performance model to a differentiable NAS such that the performance of neural network candidates on GPUs can be evaluated together with the model accuracy. Similar approaches have also been successfully applied to lightweight neural network design for mobile phones [22]. However, the above hardware-aware neural network design and optimization approaches essentially adapt the neural network models to target computing engines that have fixed computing architectures and little native low data width operation support, and hence, fail to fully unleash the potential of MPNNs. In contrast, FPGAs with fine-grained reconfiguration capability are suitable for model specific customization and can be a good fit for MPNNs. To explore the reconfiguration capability of FPGAs for efficient neural network specific implementation, the authors in [3][13][14] proposed different co-optimization approaches that take both neural network accuracy and hardware implementation efficiency into consideration in a unified framework. [13] applies a time-consuming reinforcement learning NAS. Although such a NAS approach could have the accelerator design parameters included in the same NAS search space along with the network architecture, and have different design metrics considered at the same time during the NAS evaluation stage, it enlarges the search space dramatically and usually induces many time-consuming evaluations of metrics such as accuracy, hardware overhead, and implementation quality, which makes the entire optimization prohibitively expensive and difficult to converge. [14] formulates the co-optimization as a differentiable NAS problem in terms of both accuracy and implementation efficiency. It avoids the conventional iterative NAS search procedures and reduces the optimization to a standard training procedure. Essentially, it makes the hardware-aware optimization much easier to converge. However, it is difficult to explicitly represent and model all hardware design parameters within a differentiable NAS framework, as many hardware parameters are discrete, making it difficult to construct a differentiable proxy loss and obtain globally optimal solutions.
Different from the above NAS approaches, [3] treats the co-optimization as an _integer programming_ problem by introducing an accuracy predictor and a performance predictor, which do not rely on any complex black box simulation or evaluation procedures. In particular, it demonstrates the great potential of using a simplified co-optimization framework for neural network acceleration on FPGAs, but the accuracy prediction can be relatively limited to some specific scenarios. In addition, the fine-grained reconfiguration capability of FPGAs has not been explored sufficiently. FPGAs mainly rely on primitive DSP blocks with fixed data width for high performance arithmetic operations, while straightforward implementation of low data width operations such as 2-bit and 3-bit operations on these primitive DSP blocks can lead to considerable waste because the data width of these DSP blocks is much larger. LUTs in FPGAs can also be utilized to construct operations with arbitrary data width, but the performance is usually much lower, especially for operations with larger data width, due to the more complex routing. To fully explore the fine-grained reconfiguration capability of FPGAs, researchers from both academia and industry have proposed a variety of approaches to pack multiple low-precision operations into a single DSP (especially its multiplier) to fully utilize the primitive DSPs [9][10][11][12][23], and then extract the outputs of the low-precision operations concurrently from the disjoint bit segments of the DSP outputs. For instance, the Xilinx INT4 optimization [11] leverages the \(27\times 18\) two's complement multiplier to simultaneously calculate four 4-bit products. [24][25] propose to construct an efficient low data width matrix-matrix multiplication overlay based on primitive DSP blocks. In particular, they take the interconnection between primitives into consideration for the sake of higher operating frequency. Then, they have neural network models implemented on the overlay to ensure high-performance neural network processing on FPGAs. Essentially, this line of work optimizes the hardware implementation first without being aware of the models, and adapts the model to the FPGA implementation afterwards. HiKonv [10] specifically explores the DSP packing algorithm to maximize low data width operations on a single DSP block, but there is a lack of co-optimization of the hardware implementation and neural network models. In summary, there is still a lack of a co-optimization framework that takes neural network model accuracy and fine-grained FPGA implementation efficiency into consideration at the same time.

## III MPNN Accelerator Design Framework

In this work, we propose a mixed-precision neural network accelerator design framework as shown in Fig. 1. Essentially, it is a hardware and software co-optimization framework for MPNNs and seeks to optimize both the processing performance and accuracy. In general, it adjusts the quantization of MPNNs to fulfill the accuracy requirement and optimizes the resulting accelerator implementation, which mainly relies on the mapping efficiency of the various low data width operations over primitive DSPs within FPGAs. The framework starts with a floating point neural network model or a fixed point model with unified quantization. With the model architecture, it utilizes differentiable NAS to determine the optimized MPNN quantization. Specifically, it defines the quantization search space in which the data width of the model in each layer ranges from 2-bit to 8-bit.
Based on the search space, it further extends the input neural network model by adding all the possible candidate quantization branches to all the links between layers in the original input neural network and constructs a super-net for the NAS. The branches in the super-net are weighted, and the weights can be adapted to optimize the model accuracy through back propagation. Other than the accuracy, the NAS is also made aware of the hardware implementation efficiency, especially the DSP requirements, which are usually the resource bottleneck. Since the hardware implementation relies on the model quantization in each layer as well as the mapping efficiency of low data width operations over primitive DSPs in FPGAs, a _DSP Packing Optimizer_ is utilized to pack the various low data width operations of MPNNs within primitive DSPs. Then, we have the optimal DSP packing configurations produced by the _DSP Packing Optimizer_ stored in a lookup table such that they can be referred to immediately during NAS and utilized to co-optimize the MPNN accuracy and DSP overhead. Basically, we utilize a combined accuracy and DSP overhead loss to train the super-net, where the accuracy loss is obtained through standard forward processing and the DSP overhead is evaluated based on the lookup table of the DSP packing. At the end of the super-net training, the branches with the highest weights will be selected as the optimized quantization configurations and DSP packing configurations accordingly. After the DSP-aware quantization, the framework proceeds to the accelerator customization stage. Essentially, it orchestrates the design parameters of our pipelined neural network accelerator templates based on the optimized quantization and DSP packing configurations of the neural network model for the sake of higher performance under the specified FPGA resource constraints. This is a resource allocation problem that allocates the hardware resources to different pipeline stages such that the implementation of each pipeline stage is optimized and the performance of the different pipeline stages is balanced at the same time. To address the constrained resource allocation problem, we have a _dynamic programming_ algorithm to tune the design parameters of the pipelined accelerator. The tuning procedure requires a large number of evaluations of different design options, and the evaluation metrics can be hardware overhead like DSPs and timing quality, which are prohibitively expensive to obtain using standard tools from FPGA vendors. In this work, we utilize a _Bayesian Ridge Regression_ predictor to estimate the resource utilization and the timing of each pipeline stage implementation. Although it needs additional sampling data for pre-training of the predictors, the resulting prediction models can be orders of magnitude faster and ensure rapid accelerator customization of a specific MPNN.

Fig. 1: Overview of the proposed mixed-precision neural network accelerator design framework.

## IV DSP Packing Optimizer

In order to efficiently map mixed-precision arithmetic operations onto primitive DSPs in FPGAs, we propose our _DSP Packing Optimizer_, which further generalizes the state-of-the-art DSP packing algorithms [10][11][12] as two optional strategies and also incorporates two additional techniques for further enhancement.
For a convolution operator, the optimizer traverses all possible packing configurations to find the optimal one for each bit-width combination, and stores it in lookup tables to direct quantization search and hardware customization.

### _DSP Packing Strategies_

#### IV-A1 Kernel Packing

For the first strategy, as illustrated in Fig. 2(a), weights and activations respectively from adjacent kernels and pixels are squeezed into one DSP following Eq. 1. Specifically, through left-shift and addition, \(N_{d}\) \(d_{b}\)-bit weights (activations) and \(N_{e}\) \(e_{b}\)-bit activations (weights) are mapped onto two input ports, D and E, where we assume port E's bit-width is larger than or equal to port D's (i.e. \(P_{b}^{E}\geq P_{b}^{D}\)); then \(N_{d}\times N_{e}\) independent multiplications are concurrently performed within one DSP. The \(g_{b}\) in Eq. 1 denotes the guard bits, which are deliberately preserved to prevent bit segment overlap between neighboring multiplications or to support overall result accumulation for saving decoding logic. \[(\sum_{i=0}^{N_{d}-1}d[i]2^{ip_{b}})\cdot(\sum_{j=0}^{N_{e}-1}e[j]2^{jN_{d}p_{b}}) \tag{1}\] subject to \[\begin{cases}p_{b}=d_{b}+e_{b}+g_{b}\\ g_{b}\geq 0\\ d_{b}+(N_{d}-1)p_{b}\leq P_{b}^{D}\\ e_{b}+(N_{e}-1)N_{d}p_{b}\leq P_{b}^{E}\end{cases}\]

#### IV-A2 Filter Packing

Instead of packing weights from adjacent kernels, _Filter Packing_ factorizes multidimensional convolution into 1-D counterparts, and packs the filter on an input port. Suppose there are a \(K_{p}\)-element weight filter \(f\) and an \(N_{p}\)-element activation sequence \(s\). Based on the mathematical equivalence between 1-D convolution and polynomial multiplication, the convolution (\(c=f*s\)) can be reformulated in polynomial representation, as shown in Eq. 2, which can then be implemented with one large bit-width multiplier. However, in practice, due to the limitation of port widths (i.e. \(P_{b}^{A}\) and \(P_{b}^{W}\)), one multiplier cannot contain a large convolution entirely. Therefore, for processing a long \(K\)-element filter and a long \(N\)-element sequence, this strategy can be generalized by dividing the original polynomial into \(\lceil\frac{K}{K_{p}}\rceil\times\lceil\frac{N}{N_{p}}\rceil\) sub-tasks. As illustrated in Fig. 2(b), through iteratively calculating these sub-tasks and accordingly accumulating intermediate coefficients, the correct convolution can be obtained. In this case, the guard bits must be no less than \(\lceil\log_{2}\min\{K_{p},N_{p}\}\rceil\), since \(\min\{K_{p},N_{p}\}\) accumulations are inherently introduced by its polynomial nature. \[\begin{split} c(2^{p_{b}})=& f(2^{p_{b}})s(2^{p_{b}})=(\sum_{i=0}^{K_{p}-1}f[i]2^{ip_{b}})\cdot(\sum_{j=0}^{N_{p}-1}s[j]2^{jp_{b}})\\ =&\sum_{i=0}^{K_{p}+N_{p}-1}c[i]2^{ip_{b}}\\ \end{split} \tag{2}\] subject to \[\begin{cases}p_{b}=a_{b}+w_{b}+g_{b}\\ g_{b}\geq\lceil\log_{2}\min\{K_{p},N_{p}\}\rceil\\ a_{b}+(N_{p}-1)p_{b}\leq P_{b}^{A}\\ w_{b}+(K_{p}-1)p_{b}\leq P_{b}^{W}\end{cases}\]

Fig. 2: DSP packing strategies: (a) Kernel Packing; (b) Filter Packing. Suppose both weights and activations are unsigned.

#### IV-A3 Mixed Packing

Compared with _Kernel Packing_, which leaves larger space (i.e. \(N_{d}p_{b}\)) between the operands on port E to guarantee multiplications' independence, _Filter Packing_ can pack operands more densely. However, for small kernel sizes, this advantage might not be unleashed as the filter cannot fully occupy the DSP's port width (e.g. point-wise convolution).
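As a sanity check of the Kernel Packing arithmetic, the following toy Python example (our own, using unsigned 4-bit operands with \(N_d=N_e=2\) and \(g_b=0\), which satisfies the port constraints of Eq. 1 for a \(27\times 18\) multiplier) shows four products being recovered from one wide multiplication:

```python
def pack(vals, stride_bits):
    """Pack unsigned integers into one wide word, one per stride."""
    word = 0
    for i, v in enumerate(vals):
        word |= v << (i * stride_bits)
    return word

def kernel_packing_demo():
    d_bits = e_bits = 4
    p = d_bits + e_bits              # segment stride p_b with g_b = 0
    weights = [3, 9]                 # N_d = 2 values on port D (12 bits <= 18)
    acts = [7, 12]                   # N_e = 2 values on port E (20 bits <= 27)
    n_d = len(weights)
    D = pack(weights, p)             # weight i lands at offset i * p_b
    E = pack(acts, n_d * p)          # activation j at offset j * N_d * p_b
    wide = D * E                     # a single wide multiplication
    mask = (1 << p) - 1
    for j, a in enumerate(acts):
        for i, w in enumerate(weights):
            # Product w*a < 2**8 fits entirely in its own p_b-bit segment.
            seg = (wide >> ((i + j * n_d) * p)) & mask
            assert seg == w * a
    print("all", n_d * len(acts), "products recovered from one multiply")

kernel_packing_demo()
```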
For fairly comparing different strategies and configurations, we use two metrics to evaluate our DSP packing strategies. Primarily, we expect to maximally accommodate multiplications into these DSP primitives for the highest multiplication throughput. However, we do not naively define it as the total number of concurrent multiplications in one DSP, because this neglects the up-rounding redundancy introduced by the sub-task division in _Filter Packing_; the division of a short filter, in particular, can lead to severe waste. Hence, we define the multiplication throughput \(T_{mul}\) as the average number of effective multiplications performed in one DSP, as shown in Eq. 3. Next, based on the consideration that guard bits can be utilized for supporting accumulations before decoding, we also expect our optimizer to increase them. As _Filter Packing_ inherently introduces accumulations, we define the extra guard bits \(E_{g}\) with Eq. 4, and regard them as the subsidiary optimization objective. \[T_{mul}=\begin{cases}N_{d}N_{e},&\text{\emph{Kernel Packing}}\\ \frac{KN}{\lceil\frac{K}{K_{p}}\rceil\lceil\frac{N}{N_{p}}\rceil},&\text{\emph{Filter Packing}}\end{cases} \tag{3}\] \[E_{g}=\begin{cases}g_{b},&\text{\emph{Kernel Packing}}\\ g_{b}-\lceil\log_{2}\min\{K_{p},N_{p}\}\rceil,&\text{\emph{Filter Packing}}\end{cases} \tag{4}\]

### _Packing Enhancement_

As explained in Eq. 1 and Eq. 2, multiplication throughput is restricted by both guard bits and port width constraints. In order to further enhance it, our optimizer also introduces two enhancement techniques.

#### IV-B1 1-bit Overpacking

The guard bits in packing algorithms are deliberately preserved to avoid result overlap. Nonetheless, they inevitably consume the precious bit-widths. Inspired by [9], we introduce _1-bit overpacking_ (i.e. allowing 1-bit overlap) to mitigate this constraint, and further propose a method to fully compensate the error. When 1-bit overlap is permitted, the LSB of the high-position segment \(B_{LS}^{H}\) and the MSB of the low-position segment \(B_{MS}^{L}\) overlap with each other. In order to decode correctly, we recalculate \(B_{LS}^{H}\) with additional LUT resources, and leverage it for calibration. As shown in Fig. 3, the LSB of a product can be obtained by applying an AND gate to the LSBs of its multiplicands. If the high-position segment is the sum of several products, we apply an XOR logic to all products' LSBs to calculate \(B_{LS}^{H}\). For correcting \(B_{MS}^{L}\), we directly add the recalculated \(B_{LS}^{H}\) to it to counteract the contamination. For the high-position segment, the error is possibly introduced by the sign extension of the low-position segment, which is equivalent to subtracting one from the result. To detect and compensate the unknown extension, we add the XOR of the overlapped bit and \(B_{LS}^{H}\) for correction.

#### IV-B2 Operand Separation

In packing algorithms, we expect to find suitable \(K_{p}\) and \(N_{p}\) (or \(N_{d}\) and \(N_{e}\)) combinations that can fully occupy the precious port widths. However, as these values are discrete, it is unavoidable to leave some bit-width unused. This phenomenon is especially conspicuous for large bit-width combinations, since the available packing choices are more limited. Therefore, we propose using _Operand Separation_ to alleviate it. The basic idea is to split a high bit-width operand (weight or activation) into two low bit-width counterparts. Take _Filter Packing_ as an example.
Following Eq. 5, a \(w_{b}\)-bit filter \(f[k]\) can be split into a \((w_{b}-\lceil w_{b}/2\rceil)\)-bit \(f_{H}[k]\) and a \(\lceil w_{b}/2\rceil\)-bit \(f_{L}[k]\). Then, the original polynomial multiplication can be reformulated as two low bit-width counterparts, and be separately implemented with two multipliers. Clearly, the separation halves the number of concurrent multiplications in one DSP. However, the resulting smaller bit-widths might be able to occupy the input ports more densely, and thus might serve to further enhance the multiplication throughput. \[\begin{split} c(2^{p_{b}})&=\sum_{i=0}^{K_{p}-1}\sum_{j=0}^{N_{p}-1}f[i]s[j]2^{(i+j)p_{b}}\\ &=\sum_{i=0}^{K_{p}-1}\sum_{j=0}^{N_{p}-1}(f_{H}[i]2^{\lceil\frac{w_{b}}{2}\rceil}+f_{L}[i])s[j]2^{(i+j)p_{b}}\\ &=2^{\lceil\frac{w_{b}}{2}\rceil}f_{H}(2^{p_{b}})s(2^{p_{b}})+f_{L}(2^{p_{b}})s(2^{p_{b}})\end{split} \tag{5}\]

Fig. 3: 1-bit overpacking correction.

## V DSP-aware Quantization

Inspired by prior works [14][19], we mainly leverage the differentiable NAS technique for hardware-efficient quantization of MPNNs. As the DSP is usually the resource bottleneck of NN accelerators on FPGAs [5][9][14], it is utilized as the major hardware metric in the hardware-aware NAS. Given an input model \(\mathcal{A}\), a super-net with candidate quantization branches as shown in Fig. 1 will be generated. Each branch is assigned an architecture parameter \(\pi_{i}\) to represent its selection probability. The accuracy and hardware evaluation metrics are then formulated as differentiable functions with respect to these parameters. For inference accuracy, each layer's weight and activation candidate branches are weighted by these selection parameters in propagation such that the mixed accuracy loss \(Loss_{acc}(\pi^{w},\pi^{a})\) is calculated. For hardware efficiency, to fully exploit the critical DSP resources on FPGAs, we regard the packed multiplications as one DSP operation, and propose to use the total DSP operations of all layers for evaluation, as defined in Eq. 6, where \(Op_{mul}^{l}\), \(Q_{w}^{l}\), and \(Q_{a}^{l}\) denote the number of multiplication operations and the candidate quantization bit-widths in the \(l^{th}\) layer respectively. To orchestrate it with the super-net, each layer's multiplication throughput lookup table \(T^{l}_{mul}\) is weighted with the corresponding selection probability as shown in Eq. 7. The complexity loss \(Loss_{comp}(\pi^{w},\pi^{a})\) is therewith defined as the probability expectation of the total DSP operations, and is added to the accuracy loss with an adjustable hyper-parameter \(\eta\) for tuning the relative significance, as shown in Eq. 9. In back-propagation, the two loss functions are minimized on the target dataset until convergence or a certain number of epochs is reached, through which the bit-width selections are consequently optimized. In the end, the sub-path with the highest selection probability is chosen as the finalized quantization configuration. \[Op_{dsp}=\sum_{l=1}^{L}\frac{Op^{l}_{mul}}{T^{l}_{mul}(\hat{w}^{l}_{b},\hat{a}^{l}_{b})} \tag{6}\] \[\overline{T^{l}_{mul}}(\pi^{w},\pi^{a})=\sum_{w_{b}\in Q_{w}}\sum_{a_{b}\in Q_{a}}\pi^{w}_{i}\pi^{a}_{j}T^{l}_{mul}(w_{b},a_{b}) \tag{7}\] \[Loss_{comp}(\pi^{w},\pi^{a})=\sum_{l=1}^{L}\frac{Op^{l}_{mul}}{\overline{T^{l}_{mul}}(\pi^{w},\pi^{a})} \tag{8}\] \[Loss(\pi^{w},\pi^{a})=Loss_{acc}(\pi^{w},\pi^{a})+\eta Loss_{comp}(\pi^{w},\pi^{a}) \tag{9}\]
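The complexity term of Eqs. (6)-(9) reduces to a few tensor operations. The sketch below is our own schematic rendering (the tensor layout and variable names are assumed), not the framework's actual code:

```python
import torch

def complexity_loss(pi_w, pi_a, op_mul, t_mul):
    """Schematic DSP-operation proxy loss (Eqs. 6-8).

    pi_w, pi_a: per-layer branch logits, each [L, B] over candidate bit-widths
    op_mul:     [L] number of multiplications per layer
    t_mul:      [L, B, B] packed-multiplication throughput lookup table,
                indexed by (layer, weight-bit choice, activation-bit choice)
    """
    w = torch.softmax(pi_w, dim=-1)        # selection probabilities pi^w
    a = torch.softmax(pi_a, dim=-1)        # selection probabilities pi^a
    # Expected throughput per layer: sum_ij w_i * a_j * T_mul[l, i, j] (Eq. 7)
    t_bar = torch.einsum('li,lj,lij->l', w, a, t_mul)
    return (op_mul / t_bar).sum()          # expected DSP operations (Eq. 8)

# Combined objective (Eq. 9): loss = loss_acc + eta * complexity_loss(...)
```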
## VI Accelerator Customization

We use HLS to design hardware templates based on our DSP packing algorithms. After quantization search, the framework will automatically configure the templates and map each layer as a pipeline stage. Each stage will be assigned \(Pf^{l}\) DSPs to construct a parallel computing array similar to [5]. However, as LUTs can also be efficient computing resources when the bit-width is small [3][14], our design also provides an alternative to construct computing arrays with equivalent LUT arithmetic, but it must satisfy the timing constraint. Given the maximum DSPs \(R^{max}_{dsp}\) and LUTs \(R^{max}_{lut}\), we formulate the accelerator customization problem as Eq. 10. Here, as pipeline stages are connected through FIFOs, we assume the overall Worst Negative Slack (WNS) is determined by the worst stage. To solve the above problem, we randomly sample and synthesize a set of possible hardware configurations with our templates, and pre-train a _Bayesian Ridge Regression_ model to estimate each stage's DSPs \(\hat{R}^{l}_{dsp}\), LUTs \(\hat{R}^{l}_{lut}\), and WNS \(\hat{t}^{l}_{wns}\). Next, we propose utilizing _dynamic programming_ to find the optimal resource allocation, as shown in Algorithm 1. It employs a three-dimensional table to memorize the optimal pipeline configuration of a sub-problem, and leverages a recurrence relation to solve a larger sub-problem. Specifically, we use \(Lat[l][R^{cur}_{dsp}][R^{cur}_{lut}]\) to represent the minimal latency when \(R^{cur}_{dsp}\) DSPs and \(R^{cur}_{lut}\) LUTs are available for the first \(l\)-stage pipeline of the entire design. If \(\hat{R}^{l}_{dsp}\) out of the \(R^{cur}_{dsp}\) DSPs and \(\hat{R}^{l}_{lut}\) out of the \(R^{cur}_{lut}\) LUTs are allocated for the \(l^{th}\) stage, the recurrence relation can be formulated as Eq. 11. Based on this, we can search for the optimal solution in a bottom-up fashion. The time complexity of the proposed algorithm only grows linearly with the depth and the resource scale, which means a thorough design space exploration is completely affordable. \[\begin{split}\min\quad& Lat=\max_{l}\{Lat^{l}\}=\max_{l}\left\{\frac{Op^{l}_{dsp}}{Pf^{l}}\right\}\\ s.t.\quad&\sum_{l=1}^{L}R^{l}_{dsp}\leq R^{max}_{dsp},\quad\sum_{l=1}^{L}R^{l}_{lut}\leq R^{max}_{lut},\\ &\min_{l}\{t^{l}_{wns}\}>0\end{split} \tag{10}\] \[Lat[l][R^{cur}_{dsp}][R^{cur}_{lut}]=\max\left\{\frac{Op^{l}_{dsp}}{Pf^{l}},\;Lat^{pre}\right\} \tag{11}\] where \[\begin{split} Lat^{pre}&=Lat[l-1][\hat{R}^{pre}_{dsp}][\hat{R}^{pre}_{lut}]\\ \hat{R}^{pre}_{dsp}&=R^{cur}_{dsp}-\hat{R}^{l}_{dsp}\\ \hat{R}^{pre}_{lut}&=R^{cur}_{lut}-\hat{R}^{l}_{lut}\end{split}\]

```
Algorithm 1: Accelerator Customization
 1  Initialize all Lat = max{Op_dsp^l} and Config.
 2  for l = 1, 2, ..., L do
 3    for R_dsp^cur = 0 to R_dsp^max do
 4      for R_lut^cur = 0 to R_lut^max do
 5        for all possible Pf^l do
 6          Estimate R_dsp^l, R_lut^l, and t_wns^l
 7          Lat_new = max{Op_dsp^l / Pf^l, Lat_pre}
 8          C1 = Lat_new < Lat[l][R_dsp^cur][R_lut^cur]
 9          C2 = (R_dsp^l <= R_dsp^cur) and (R_lut^l <= R_lut^cur)
10          C3 = t_wns^l > 0
11          if C1 and C2 and C3 then
12            update Lat[l][R_dsp^cur][R_lut^cur] and corresponding Config.
13  Return Lat[L][R_dsp^max][R_lut^max] and Config.
```
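For clarity, Algorithm 1 can also be rendered in a few lines of Python. The sketch below is a simplification that tracks only the latency table, omits the Config bookkeeping, and uses a stub `predict` in place of the Bayesian Ridge Regression estimators:

```python
import itertools

def allocate(L, R_DSP, R_LUT, ops, candidates, predict):
    """Dynamic-programming resource allocation in the spirit of Algorithm 1.

    ops[l]         -- DSP operations of pipeline stage l
    candidates[l]  -- parallel factors Pf to try for stage l
    predict(l, pf) -- stub returning (dsp, lut, wns) estimates for stage l
    """
    INF = float('inf')
    # lat[rd][rl]: best worst-stage latency of the stages processed so far
    # when rd DSPs and rl LUTs are available (0 stages -> latency 0).
    lat = [[0.0] * (R_LUT + 1) for _ in range(R_DSP + 1)]
    for l in range(L):
        new = [[INF] * (R_LUT + 1) for _ in range(R_DSP + 1)]
        for rd, rl in itertools.product(range(R_DSP + 1), range(R_LUT + 1)):
            for pf in candidates[l]:
                dsp, lut, wns = predict(l, pf)
                if dsp > rd or lut > rl or wns <= 0:
                    continue  # infeasible under remaining budget or timing
                prev = lat[rd - dsp][rl - lut]
                new[rd][rl] = min(new[rd][rl], max(ops[l] / pf, prev))
        lat = new
    return lat[R_DSP][R_LUT]  # minimal achievable pipeline latency
```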
## VII Evaluation

### _Experiment Setup_

We evaluate our framework with two datasets, the single object detection dataset in the Design Automation Conference System Design Contest (DAC-SDC) [26] and CIFAR-10. For DAC-SDC, we deploy two commonly-adopted models, UltraNet [27] and SkyNet [28], with a full pipeline structure, and take the bit-width selections of the top-3 teams as the baselines. In order to fairly compare with previous results, we retrain and evaluate all models with the same hyper-parameters on the DAC-SDC public dataset. For CIFAR-10, we adopt a VGG-alike model (denoted as VGG-Tiny) with 6 convolution layers and one fully-connected layer, and manually design the bit-width selection as the baseline. For deployment, we target an embedded FPGA, the Ultra96-V2 (with 360 DSPs), and measure throughput, inference accuracy, utilization, and energy consumption for evaluation. In the experiments, test frames are loaded into the DDR in advance and are then transferred to the accelerator by DMA. The throughput and energy consumption during inference are then measured following DAC-SDC 2022 (see Footnote 2). The inference accuracy is calculated with Intersection-Over-Union (IOU) for DAC-SDC and top-1 accuracy for CIFAR-10.

Footnote 2: [https://github.com/jgoeders/dac_sdc_2022](https://github.com/jgoeders/dac_sdc_2022)

Footnote 3: [https://github.com/jgoeders/dac_sdc_2021_designs/tree/main/iSmart](https://github.com/jgoeders/dac_sdc_2021_designs/tree/main/iSmart)

### _DSP Packing Efficiency Comparison_

We start with our _DSP Packing Optimizer_. As an illustrative example, Fig. 4 compares a \(3\times 3\) kernel's multiplication throughput lookup tables that are respectively searched by HiKonv [10] and our optimizer. As can be seen, our optimizer achieves higher multiplication throughput in 25 out of the 49 combinations. Furthermore, for \(1\times 1\) and \(5\times 5\) kernels, 16 and 27 cases witness throughput improvement respectively. As for the LUT overhead, we randomly sample and synthesize 30 optimized combinations. The results indicate that only 16.4 extra LUTs are required on average. These results demonstrate the effectiveness of our _DSP Packing Optimizer_ in generating more efficient DSP packing configurations.

### _Bit-width Search Results_

#### VII-C1 NAS Comparison

To assess our DSP-aware quantization search, we compare it with EdMIPS [19] on UltraNet. Following Eq. 6, we target DSP operations as the proxy signal to balance inference accuracy and DSP operations, while EdMIPS uses the product of activation bit-width and weight bit-width to formulate its computation complexity loss. Through adjusting the hyper-parameter \(\eta\), different solutions can be obtained as the relative significance changes, as illustrated in Fig. 5. Clearly, our method produces a Pareto-optimal curve with respect to both accuracy and DSP operations, which demonstrates that our metrics can effectively direct the NAS to conduct DSP-aware quantization.

#### VII-C2 Bit-width Selection Comparison

Then, we compare the searched bit-width settings with the manually crafted counterparts in Fig. 6. For UltraNet, we compare with iSmart (2nd in DAC-SDC 2021; see Footnote 3). They adopt high bit-widths in the first and last layers, and apply 4-bit weights and activations in the remaining layers such that they can leverage one DSP to pack 6 multiplications. A similar quantization scheme is also applied to VGG-Tiny. For SkyNet, we quantize the weights and activations to 5 bits and 8 bits by referring to SkrSkr (1st in DAC-SDC 2021), and squeeze two multiplications into one DSP.
The experiment results demonstrate that our NAS can effectively optimize both inference accuracy and DSP operations. For accuracy, the mixed-precision UltraNet, SkyNet, and VGG-Tiny respectively achieve an IOU or top-1 accuracy of \(74.29\%\), \(75.44\%\) and \(91.36\%\), while the accuracies of the handcrafted designs are \(73.37\%\), \(74.85\%\) and \(91.45\%\). Except for a trivial loss for VGG-Tiny, the other two models witness non-trivial accuracy improvements. As for computation complexity, \(27.12\%\), \(44.10\%\) and \(42.71\%\) of the DSP operations are reduced. The optimization can be explained by our NAS conducting a more reasonable bit-width allocation for each layer based on its sensitivity and computation complexity. The second and third layers of UltraNet dominate nearly \(60\%\) of the overall MACs. Hence, our NAS adopts ultra-low bit-width combinations to pack 12 multiplications onto one DSP block, and in the meanwhile, increases the bit-widths in the later layers to counteract the accuracy loss. SkyNet mainly consists of six stacked bundles of depth-wise and point-wise convolution. Due to the huge complexity dichotomy between these two types of operators, larger bit-widths are preferred for depth-wise convolution. In addition, unlike SkrSkr, which uniformly applies larger bit-widths to activations than to weights, our NAS takes the opposite strategy in several layers. For VGG-Tiny, our NAS chooses a lower bit-width for nearly every middle layer. This means we underestimated the model's tolerance to quantization when manually designing the bit-width settings.

### _Deployment Results_

Then, we deploy these models under different resource constraints to separately compare utilization and throughput. We firstly allocate as many DSPs as possible to deploy the manually crafted models at the highest throughput (MC-HP) that can be supported by the platform. Next, for evaluating utilization, we deliberately reduce the available DSPs to deploy our MPNNs at the nearest throughput (Mix-BP) to the corresponding baselines. Finally, we allow our framework to maximize the DSP allocation (Mix-HP). Additionally, we also enable LUT-replacement (Mix-LUT) to further improve performance. All deployment results are summarized in Table I. As can be observed, due to the reduced DSP operations, our mixed-precision models demand fewer parallel factors to achieve the same FPS. Compared with MC-HP, Mix-BP implementations theoretically reduce 72 (26.9\(\%\)), 70 (41.7\(\%\)) and 96 (36.5\(\%\)) parallel factors respectively. Since DSPs can be mapped to other logic (e.g. batch normalization) as well, the final DSP savings are 100 (29.5\(\%\)), 49 (21.5\(\%\)), and 100 (36.0\(\%\)). In terms of throughput, Mix-HP implementations achieve 1.59\(\times\), 1.71\(\times\) and 1.97\(\times\) speedup. For UltraNet and VGG-Tiny, the highest throughput is restricted by the available DSP resources, but our SkyNet designs suffer from a lack of BRAMs, which prevents us from utilizing more DSPs. After enabling LUT-replacement, our framework replaces 128, 64, and 144 DSPs respectively for the three MPNNs. This further boosts the FPS of UltraNet and VGG-Tiny to 2534.20 and 4877.76.

Fig. 4: DSP packing efficiency comparison for a \(3\times 3\) convolution kernel.

Fig. 5: DSP-aware NAS results comparison with EdMIPS [19].
For SkyNet, while the FPS remains the same, we are surprised to find that the power consumption increases from 1.72 W to 3.02 W after replacing part of the DSPs. In Table II, we also compare our designs with prior FPGA-based accelerators that focus on quantization and implementation co-optimization. FILM-QNN [29] restricts the quantization choices to either W4A5 or W8A5, and respectively applies INT4 and INT8 for packing optimization. In order to support flexible bit-widths, N3H-Core [30] implements 4-bit multiplications with DSPs, and applies bit-serial LUT-cores for the others. Similarly, HAO [3] leverages INT8 for high-precision multiplications and LUTs for those under 4 bits. The champion team of DAC-SDC 2022, SEUer [31], adopts the same bit-width and packing strategy as our UltraNet baseline, but further boosts the throughput by replacing the high bit-width arithmetic with LUTs in the first and last layers, which leads to poor WNS as a result. Based on our optimized DSP packing strategies, DSP-aware bit-width exploration, and fine-grained resource allocation scheme, our solutions show unparalleled throughput and arithmetic intensity compared with these designs.

## VIII Conclusion

In this paper, we present a framework to co-optimize the implementation and quantization of MPNNs on FPGAs. We further optimize the state-of-the-art DSP packing algorithms to support efficient implementation of arbitrary-precision convolution arithmetic. Then, we leverage differentiable NAS for automatically crafting bit-width settings based on a comprehensive assessment of accuracy and DSP operations. Finally, with our fine-grained resource allocation scheme, pipelined accelerators are customized according to specific resource requirements. According to our experiment results, our solutions reveal superior accuracy, resource utilization, and throughput compared with manually crafted solutions and other related designs.

Fig. 6: Bit-width selections of our mixed-precision models and manually crafted counterparts (MC).
2302.13578
Online Black-Box Confidence Estimation of Deep Neural Networks
Autonomous driving (AD) and advanced driver assistance systems (ADAS) increasingly utilize deep neural networks (DNNs) for improved perception or planning. Nevertheless, DNNs are quite brittle when the data distribution during inference deviates from the data distribution during training. This represents a challenge when deploying in partly unknown environments like in the case of ADAS. At the same time, the standard confidence of DNNs remains high even if the classification reliability decreases. This is problematic since following motion control algorithms consider the apparently confident prediction as reliable even though it might be considerably wrong. To reduce this problem real-time capable confidence estimation is required that better aligns with the actual reliability of the DNN classification. Additionally, the need exists for black-box confidence estimation to enable the homogeneous inclusion of externally developed components to an entire system. In this work we explore this use case and introduce the neighborhood confidence (NHC) which estimates the confidence of an arbitrary DNN for classification. The metric can be used for black-box systems since only the top-1 class output is required and does not need access to the gradients, the training dataset or a hold-out validation dataset. Evaluation on different data distributions, including small in-domain distribution shifts, out-of-domain data or adversarial attacks, shows that the NHC performs better or on par with a comparable method for online white-box confidence estimation in low data regimes which is required for real-time capable AD/ADAS.
Fabian Woitschek, Georg Schneider
2023-02-27T08:30:46Z
http://arxiv.org/abs/2302.13578v1
# Online Black-Box Confidence Estimation of Deep Neural Networks ###### Abstract Autonomous driving (AD) and advanced driver assistance systems (ADAS) increasingly utilize deep neural networks (DNNs) for improved perception or planning. Nevertheless, DNNs are quite brittle when the data distribution during inference deviates from the data distribution during training. This represents a challenge when deploying in partly unknown environments like in the case of ADAS. At the same time, the standard confidence of DNNs remains high even if the classification reliability decreases. This is problematic since following motion control algorithms consider the apparently confident prediction as reliable even though it might be considerably wrong. To reduce this problem real-time capable confidence estimation is required that better aligns with the actual reliability of the DNN classification. Additionally, the need exists for black-box confidence estimation to enable the homogeneous inclusion of externally developed components to an entire system. In this work we explore this use case and introduce the neighborhood confidence (NHC) which estimates the confidence of an arbitrary DNN for classification. The metric can be used for black-box systems since only the top-1 class output is required and does not need access to the gradients, the training dataset or a hold-out validation dataset. Evaluation on different data distributions, including small in-domain distribution shifts, out-of-domain data or adversarial attacks, shows that the NHC performs better or on par with a comparable method for online white-box confidence estimation in low data regimes which is required for real-time capable AD/ADAS. ## I Introduction In recent years deep neural networks (DNNs) are increasingly used for advanced driver assistance systems (ADAS) which are deployed in public. Further, DNNs play a key role for full autonomous driving (AD) and enable the most accurate perception, planning, etc. [1]. At the same time, DNNs are quite brittle when the input data does not exactly match the data distribution seen during training. Small domain shifts or the existence of out-of-domain (OOD) data lead to a significant decrease in performance. Such shifts or small corruptions of the data occur naturally and are critical for systems deployed in partly unknown environments like ADAS. Hence, research interest on OOD data started to increase in recent years ([2, 3]). The decrease in performance on OOD data is further amplified by adversarial attacks [4]. These attacks allow an adversary to fool any machine learning system by generating a small perturbation that is applied on the input data of the system. Initially, such attacks applied the perturbation directly on the input image [5]. More crucial for AD/ADAS are attacks that show the possibility of performing similar attacks in the physical world. Here, patches or markings are placed in a physical scene ([6, 7, 8]) and fool the DNN even after the perturbation is captured by the camera system. It also showed that such physical attacks are possible even if the adversary has only strict black-box access to the system [9], meaning any deployed system could be theoretically attacked. Hence, all described data distribution types are relevant for the safe deployment of AD/ADAS and need to be dealt with. However, a standard DNN outputs a high confidence in the predictions on these data distributions, meaning a following planning or control algorithm treats the DNN output as a reliable value. 
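For reference, the "standard confidence" criticized above is simply the top-1 softmax probability. The minimal sketch below (our own illustration, not from the paper) shows how it is read out from an arbitrary classifier:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def softmax_confidence(model, x):
    """Standard DNN confidence: the top-1 softmax probability.

    This is the quantity that tends to remain high even when the input
    is shifted, out-of-domain, or adversarially perturbed.
    """
    logits = model(x)                 # [batch, num_classes]
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)    # confidence and top-1 class
    return conf, pred

# Hypothetical usage with any image classifier, e.g. a TSR model:
# conf, pred = softmax_confidence(tsr_model, corrupted_image_batch)
```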
For a correct follow-up decision, it is required that the output confidence actually reflects the real reliability of the DNN, also under data distribution shifts. Hence, the estimated confidence should align with the true accuracy and data knowledge of the model. This allows the following algorithms to make an informed decision and not rely on a bad prediction by the DNN. Based on the confidence estimate, the following algorithms can, for example, invoke alternative backup systems to override the DNN prediction or engage safety features, like a speed reduction or passing the control to the (safety) driver. In particular, the approach of using different expert models for certain situations is popular [10] and also used by current ADAS systems operating on public roads. However, due to strict timing constraints and limited computational resources, it is not possible to run the more complex expert models simultaneously. Therefore, the general DNN must output a meaningful confidence so the controller can decide whether (and which) more complex models are used in the next images of a video stream to generate a specialized understanding of the current scene. In reality, the general DNN might be a combination of different systems from various suppliers, where each is focused on the perception of a certain task, e.g. different systems exist for driving space detection and pedestrian detection. To still use the aforementioned concept of situational expert systems depending on the confidence, the confidence estimation of each supplied system must be equally meaningful. One way to ensure this is a single, reliable confidence estimation method that is independent of the individual suppliers. This allows the final manufacturer to assess each system individually. Therefore, a model-agnostic confidence estimation method is required, since it is not possible to enforce a certain confidence training method for each supplier, because each has its own pipelines and architectures that cannot be easily adapted to the requirements of each customer. Additionally, we focus on black-box confidence estimation since systems are typically shared by suppliers in a secret fashion where the customer does not have access to or knowledge of the individual architecture, components, or gradient flows. Instead, only the final output of the supplied system is made available and can be used by the customer for further computations and decisions. Hence, a model-agnostic black-box confidence estimation is required to enable the safe usage of systems from different suppliers. An overview of the different confidence estimation categories and our focus area is shown in Fig. 1. To explore black-box confidence estimation for ADAS, we choose a traffic sign recognition (TSR) system as an exemplary ADAS, which is deployed in public by many manufacturers. At the same time, TSR systems are most comparable to work on the confidence estimation of DNNs in the domain of image classification, on which most publications focus. We choose this use case since it allows us to compare current advances from general image classification to our proposed black-box confidence estimation, while still being relevant for ADAS development. This enables us to observe the difference between our black-box method and recent white-box methods. 
**Our contributions:** * We motivate the need for strict black-box confidence estimation that is real-time capable for AD/ADAS * The neighborhood confidence (NHC) is proposed as a method to perform black-box confidence estimation using limited additional samples * A comparison with the most similar online white-box confidence method is performed for small distribution shifts, full OOD data, and adversarial attacks * Our findings show that we achieve an improved or similar performance in low data regimes while using only the black-box output, which is required for the motivated usage in AD/ADAS ## II Related Work Most publications regarding the confidence estimation of DNNs use methods that change the underlying architecture or training process. This includes the training of multiple models to use ensembles during online inference [11] or using different (probabilistic) layers ([12, 13]) to output a more meaningful probability distribution than the standard softmax layer. These methods are unsuited for our considered setting because they have to be applied in the training process and cannot be used to determine the reliability of a supplied system independently of the concrete supplier and system. Alternatively, methods exist that do not require the retraining of a model and are added post hoc for inference. Here, one method that also relies on ensembling during inference is Monte-Carlo Dropout [14], which does not require a change to the DNN architecture if dropout [15] is already used during training. In this approach, dropout stays active during inference and allows a Bayesian approximation of the confidence. Another method that does not require retraining is temperature scaling ([16, 17]). This improves the calibration of a trained DNN by adding a scaling coefficient to the final logit layer. Furthermore, the combination of a DNN with the k-nearest neighbors algorithm [18] is proposed to estimate the distance of a test data point to the closest training data points. Again, the discussed methods are unsuited for the considered setting because they require an adjustment in the supplied system which a customer typically cannot perform itself. In our considered setting, model-agnostic methods are needed that are only applied during inference and do not require any change in the architecture or addition of further components [19]. Here, the most similar and recently proposed method is called attribution-based confidence (ABC) [20], which can be applied to any differentiable model and does not require any changes. It uses the pixel-wise attribution to generate perturbed data points. To determine the attribution, gradient-based methods are exploited, meaning the ABC requires white-box access to a model. Hence, using this method in practice requires that supplied systems are shared such that the internal computations are observable. This contrasts with the setting we focus on, which considers secretly sharing a system and thus requires estimating the confidence of a black-box model. Additionally, there exists work specific to the separate data distribution types that we consider in this work. Methods are proposed to specifically detect whether a data point is OOD [21] or perturbed by an adversary [22]. 
This is most useful for the scenario motivated in section I because it allows following control algorithms to perform an appropriate action under various conditions. Using multiple systems to capture each data distribution type is not possible due to strict requirements on the available computational resources and timing constraints. ## III Neighborhood Confidence We first describe the motivation behind our proposed confidence metric. Then, we formulate an algorithm to compute the basic version of the neighborhood confidence. Additionally, we introduce other concepts that improve the performance of the NHC further.
Fig. 1: High-level categories to group confidence estimation methods
### _Motivation_ The basic idea behind the neighborhood confidence is that the classification reliability of a system is higher when a data point lies in the center of a decision region. At this location, the data point is classified as reliably as possible in the associated class, because a small perturbation of the data point does not change the result of the classification. Therefore, the confidence of the system should be highest at such data points to show the highest reliability of the classification. Following this line of thought, data points near the decision boundary are classified less reliably and should have a lower confidence. Here, a small perturbation is sufficient to push a data point into a different decision region, which leads to a change in the classification without a meaningful change in the data itself. Hence, the confidence of the system should be lower to show that a different class is nearly predicted and the classification is not very reliable. The described basic concept behind the neighborhood confidence is visualized in Fig. 2 for a two-dimensional example with three different classes. Darker colors show a higher value of the classification reliability, and consequently an ideal confidence metric should show a similar behavior. ### _Method_ Following the motivation for an ideal confidence metric, a computable black-box metric is needed that reveals how close a decision boundary is to a given data point. Due to the high dimensionality of DNN-based systems and the input sensors used in reality, an ideal metric can only be roughly approximated when considering the strict requirements for AD/ADAS regarding inference time and available computational resources. To perform this approximation, we use a method that tests how robust a classification is under the influence of noise. First, multiple noise perturbations are added on the input data and then the classification is done by the system on all data points. If the system classifies all perturbed data points as the same class as the unperturbed data point, it shows that the input data point is not near a decision boundary. Otherwise, the influence of the noise would have pushed some perturbed data points into a different decision region and thus to a different class. Combining the presented ideas, the final neighborhood confidence to assess the classification reliability of a black-box system is the fraction of perturbed data points that are classified as the same class as the original data point. 
Hence, for a generic classification system \(f(\cdot)\) the neighborhood confidence \(\xi\) for a perturbation strength \(\lambda\) can be calculated as: * Obtain raw data point \(x\in\mathbb{R}^{D}\) * Draw \(N\) noise samples \(n_{0},\ldots,n_{N-1}\in\mathbb{R}^{D}\) from a random distribution * Generate \(N\) perturbed data points \(x^{\prime}_{0},\ldots,x^{\prime}_{N-1}\) with \(x^{\prime}_{i}=x+\lambda n_{i}\) * Classify all data points \(y=[f(x),f(x^{\prime}_{0}),\ldots,f(x^{\prime}_{N-1})]\) * Calculate the NHC as \(\xi=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\left[y_{0}=y_{i}\right]\) Using this method, the NHC is valued between zero and one, where smaller values indicate that a decision boundary is closer. This effect is visualized in Fig. 3 for a simplified two-dimensional example. It can be observed that the NHC captures whether a data point is located near the boundary of a decision region or further inside that region. The strength \(\lambda\) can be used to adjust the range of the neighborhood sampling to check whether a decision boundary is nearby. If more classes exist and the decision regions are smaller with decision boundaries closer together, \(\lambda\) can be decreased to still have a meaningful sampling procedure. The described algorithm allows for an efficient calculation of the NHC, since the classification of all data points can be done in parallel. This is beneficial for the application in environments where strict timing constraints exist, as is the case for AD/ADAS. If enough computational resources exist, the calculation of the NHC can be done without any relevant overhead, by batching all data points \(x,x^{\prime}_{0},\ldots,x^{\prime}_{N-1}\) together and calculating \(y\) with a single forward pass. To exploit this efficiency, \(N\) must be rather small so that the complete batch fits on the computing device and enough memory is available.
Fig. 2: Simplified visualization of the classification reliability of data points in a decision region for a two dimensional example
Fig. 3: Simplified visualization of the resulting neighborhood confidence \(\xi\) with \(N=7\) for a two dimensional example
Therefore, we study the impact of the number of used noise samples in subsection IV-B in low data regimes. Furthermore, it is important to note that the presented NHC is model-agnostic and can be calculated without any adaptation for black-box systems. This also holds for hard black-box systems [9] where only the top-1 class is output. No information about the classification system \(f(\cdot)\) and no internal gradients are required. This allows the application to unknown systems from external suppliers, which enables the use case presented in section I. ### _Enhancements_ It is possible to enhance the presented basic version of the NHC in different ways depending on the concrete use case. On one hand, different perturbation strengths \(\lambda_{1},\ldots,\lambda_{j}\) can be used at the same time instead of only one. This can be useful if the structure of the decision region is unknown or very uneven. It allows gathering more insight into the structure of the surrounding decision boundaries. Another option is to use the introduced method but specify a concrete reference class for \(y_{0}\) instead of using the class \(f(x)\) that is predicted by the system on the unperturbed data point. This allows estimating the distance to the decision boundary of the specified class, which is useful if a potential misclassification as a certain class is more severe than for other classes. A minimal implementation sketch covering the basic NHC and this reference-class variant is given below. 
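The following sketch shows how the five steps above can be realized for a generic black-box classifier. The `classify` callable, the batched evaluation, and the use of Rademacher noise (the distribution that turns out to work best in subsection IV-B) are illustrative assumptions rather than the exact implementation used in this paper; the optional `reference_class` argument realizes the variant just described.

```python
import numpy as np

def neighborhood_confidence(classify, x, lam=0.4, n_samples=7,
                            reference_class=None, rng=None):
    """Estimate the NHC of a black-box classifier at a data point x.

    classify: callable mapping a batch of inputs to top-1 class labels;
              this is the only access to the system that is needed.
    lam: perturbation strength lambda for the neighborhood sampling.
    reference_class: optional fixed class used as y_0 instead of f(x).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher noise: every component is +1 or -1 with equal probability,
    # so each dimension is probed with the full strength lambda.
    noise = rng.choice([-1.0, 1.0], size=(n_samples,) + x.shape)
    batch = np.concatenate([x[None], x[None] + lam * noise], axis=0)
    y = np.asarray(classify(batch))   # single forward pass over the batch
    y0 = y[0] if reference_class is None else reference_class
    # Fraction of perturbed points that keep the reference classification.
    return float(np.mean(y[1:] == y0))
```

Passing, e.g., the pedestrian class as `reference_class` turns the same routine into an estimate of how close the decision region of that class is, which is the scenario discussed next.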
For instance, in the case of AD/ADAS one wants to ensure that no pedestrian detection is missed. Hence, the pedestrian class can be chosen as reference class \(y_{0}\), which allows approximating the distance to this class at any time in addition to calculating the normal NHC using \(f(x)\) as reference class. If the resulting value of \(\xi\) is high when the pedestrian class is used as \(y_{0}\), the decision region of the pedestrian class is close and extra care can be taken in subsequent control algorithms. ## IV Experiments To evaluate the performance of the neighborhood confidence, we perform qualitative experiments on different data distribution types. The performance is compared with the most similar online white-box method ABC [20], which mainly follows our considered setting by not requiring any adjustments or additions in the TSR system. It is interesting to explore whether the usage of the more detailed information in the white-box method achieves a better performance than the proposed black-box method. Since we are interested in confidence estimation that is real-time capable, we only use the gradient from a single backward pass to estimate the attribution required for the ABC, because this results in the least overhead in inference time. Using more computationally expensive methods like integrated gradients [23] as explored in [20] would lead to a significant delay in the time required for inference. This goes against our goals, and hence we restrict ourselves to a single backward pass. ### _Setup_ For the DNN-based TSR system we choose a standard ResNet-18 architecture [24] that is trained on the German traffic sign recognition benchmark (GTSRB) dataset [25]. Training is performed without additional augmentation, and the system achieves a standard accuracy of \(\approx 99.3\,\%\) on the GTSRB final test set. To evaluate the effect of confidence estimation under a small distribution shift in subsection IV-C, we generate synthetic images of the same traffic sign classes that are used in the GTSRB dataset. To this end, we take real images of traffic signs and apply various transformations to simulate different environmental conditions. In this way, we generate \(500\) new samples for each of the \(43\) classes. The performance on OOD data is evaluated in subsection IV-D by using the Chinese traffic sign recognition database (TSRD) [26]. Since we are interested in exploring the performance on full OOD data, we drop all images from the TSRD that have a corresponding class in the GTSRB dataset. Thereby, a full OOD dataset of traffic signs is generated that has no overlap with the classes of the training dataset. Finally, to generate adversarial attacks in subsection IV-E we use the projected gradient descent (PGD) method [27]. This is a strong and standard choice to evaluate the impact of adversarial attacks and is also used by others ([12, 20]). ### _Hyperparameter Study_ The limiting factor of the NHC when used for real-time capable AD/ADAS is the number of used noise samples \(N\). Hence, we visualize the impact of different choices for \(N\) in Fig. 4 for the strength \(\lambda=0.2\). However, the behavior for other strengths is very similar. It shows that \(N\) mainly represents an option to adjust the granularity of the quantization depending on the use case. This holds for Fig. 
4(a), where the standard accuracy of the TSR system is shown in the case that the classification of a data point is disregarded for the calculation of the accuracy when the NHC confidence of this classification is below a threshold. In the case that the confidence threshold equals zero, the displayed value is the standard accuracy over all data points, since every classification has a confidence \(\xi\geq 0\). Thus, no classification is disregarded for the accuracy calculation. As soon as a confidence threshold greater than zero is used, the classifications of data points where the NHC is below this threshold are disregarded and the accuracy is calculated without those classifications. For example, in Fig. 4(a) one can observe for \(N=10\) that the accuracy is \(\approx 95\,\%\) when only classifications with a confidence \(\xi\geq 0.4\) are taken into account. In this experiment one expects that the accuracy increases when the classifications with lower confidence are increasingly disregarded. At the same time, the quantization effect also appears in Fig. 4(b). Here, cumulative distribution functions (CDFs) are used to visualize the distribution of the confidence when the classification is performed on data points of unknown classes. For example, one can observe for \(N=2\) that \(\approx 60\,\%\) of all classifications have a confidence \(\xi\leq 0.6\), or for \(N=5\) and \(N=7\) that \(\approx 40\,\%\) of all classifications have a confidence \(\xi=0\). Because we evaluate on full OOD data, it is impossible that the system outputs the correct class, and thus a low confidence for each classification is ideal. From the presented results we conclude that using \(N=7\) achieves a good tradeoff between the quality of the confidence estimation and reduced computational requirements. Hence, in the following we always use \(N=7\) noise samples for calculating the NHC. To have a fair comparison, we also use the same number of samples for the ABC, since we are interested in a comparison under restricted computational requirements. Additionally, we analyze the impact of the noise distribution from which the samples are drawn for the NHC. We find that sampling from a Rademacher distribution consistently provides the best results. A possible explanation is that this distribution allows the neighborhood sampling to search in every dimension with the maximum available strength \(\lambda\) for decision boundaries. Therefore, for the following results the noise samples are always drawn from a Rademacher distribution. ### _In-Domain Distribution Shift_ The first comparison of our proposed neighborhood confidence with the attribution-based confidence is shown in Fig. 5. Similar to Fig. 4(a), the standard accuracy is shown when classifications of data points are disregarded for the accuracy calculation if the confidence of a classification is below a threshold. We use our generated synthetic dataset to examine the performance under a small distribution shift, where similar shifts occur naturally when deploying systems for AD/ADAS in only partly known environments. Also, for the NHC different strengths \(\lambda\) are evaluated. All shown confidence metrics pass the basic sanity check since the accuracy increases when the confidence threshold increases. However, all variants of the NHC reach a higher accuracy for higher confidence thresholds. Also, they show a higher initial increase than the ABC as soon as the confidence threshold is greater than zero. 
This shows that a significant fraction of the data points that are classified with \(\xi=0\) are actually misclassifications. Once these data points are disregarded, the accuracy increases notably and keeps climbing monotonically under increased confidence thresholds. For higher confidence thresholds the NHC also achieves a higher accuracy, meaning fewer data points with perfect confidence are misclassified than for the ABC. All in all, using the NHC with \(\lambda=0.4\) leads to the best performance. ### _Out-of-Domain Data_ In Fig. 6 the second comparison is done by visualizing the distribution of the confidence when classifying data points of unknown classes, similar to Fig. 4(b). The OOD dataset is used, which consists only of Chinese traffic signs that the TSR system trained on the GTSRB dataset cannot correctly classify. Hence, the ideal behavior is a low confidence for every classification, and in the best case the confidence is always zero. A corresponding behavior can be observed for both confidence metrics. For the NHC it shows that the strength impacts the overall confidence level. Higher strengths lead to a decreased confidence on most data points, which is intuitive.
Fig. 4: Comparison of the NHC for different numbers of noise samples \(N\) and strength \(\lambda=0.2\)
Fig. 5: Comparison of the NHC with different strengths \(\lambda\) and the ABC based on the accuracy under consideration of confidence thresholds on the synthetic dataset
The previously best version with \(\lambda=0.4\) performs on par with the ABC and is only slightly outperformed using \(\lambda=0.5\). ### _Adversarial Attacks_ Lastly, we compare the impact of an adversary on the ABC and NHC in Fig. 7. The synthetic dataset is used again, and the confidence is evaluated while increasingly severe PGD attacks [27] are performed on the TSR system. Here, \(\epsilon\) denotes the severity of the adversary in terms of the \(\ell_{\infty}\) norm, and \(\epsilon=0\) means no adversary is present. Therefore, this is equivalent to the setting of standard classification. The first observation is that all confidence metrics successfully decrease the confidence as soon as the adversary is introduced. However, for the lowest strength \(\lambda=0.3\) the confidence begins to significantly increase again once the severity of the adversary is further increased. In this case, a more severe adversary can reduce the impact of the NHC because the increased severity of the adversarial perturbation pushes the data points further into the decision region of the target class of the adversary. Using higher strengths for the neighborhood sampling prevents this effect, since the check for decision boundaries is performed at a greater distance. Another interesting observation is the value of the confidence for \(\epsilon=0\). Here, no adversary exists, meaning the resulting values are the mean confidence on the unperturbed synthetic dataset. Intuitively, the mean confidence decreases when \(\lambda\) is increased for the NHC. However, the value is also rather low for the ABC. This means that some variants assign a low confidence to most of the classifications. The general confidence level is sometimes low, which harms the ability to correctly distinguish between benign and harmful data points when the difference to the confidence level under the influence of an adversary is too small. In section V we further elaborate on this behavior and its origin. Similar to Fig. 
5, the NHC with \(\lambda=0.4\) achieves the best results because it optimizes the tradeoff between the standard mean confidence and the meaningful confidence decrease under the influence of the adversary. This version can detect if a significant change in the distribution of the data points exists and reflects this change in the confidence. It performs best (or close to best) on all experiments, showing that a single optimal strength value can be selected, which allows the efficient usage of the NHC in real applications. ## V Discussion Our experiments show that for the considered low data regimes the additionally available information used in a strong gradient-based white-box method cannot be exploited and provides no benefit over the neighborhood confidence. Instead, drawing from a Rademacher distribution provides better confidence estimates in most considered cases. This is promising for the application in AD/ADAS since less complex methods are needed to comply with the strict timing requirements. Our results in subsection IV-E show that the general confidence level is rather low for some evaluated variants even on unperturbed data. The use of the synthetic dataset represents a small in-domain distribution shift that causes the data points to spread more over a decision region and lie closer to a decision boundary. In some cases the data points also lie in a different decision region, since the standard accuracy drops from \(\approx 99.3\,\%\) on the original GTSRB test dataset to \(\approx 92.9\,\%\) on our synthetic dataset (see Fig. 4a or Fig. 5). The reliability of the classification is reduced, which is reflected in all evaluated confidence metrics. However, for some variants the confidence level on benign data points under this distribution shift is rather low, and one might want to increase the confidence gap to actually perturbed and harmful data points that are important to distinguish. To accomplish this, the integration of calibration concepts [16] into online confidence metrics seems promising. It is interesting to explore the calibration of online metrics for confidence estimation depending on the current data distribution observed in past data points during inference. Finally, we would like to point out that it is in principle possible to combine the NHC with training methods for an improved confidence estimation. One could explore whether the use of augmentation methods during training, like AugMix [28], has an impact on the confidence estimation.
Fig. 6: Comparison of the NHC with different strengths \(\lambda\) and the ABC based on empirical CDFs on the OOD dataset
Fig. 7: Comparison of the NHC with different strengths \(\lambda\) and the ABC under the influence of a PGD adversary
In our preliminary experiments, strong augmentation during training led to larger and more robust decision regions. This mainly improves the behavior of all evaluated confidence metrics on unperturbed data by increasing the average confidence, while keeping the strong performance on other distribution types. Similarly, the interaction of the NHC with adversarial training [27] merits a detailed investigation because adversarial training leads to increased and homogeneous decision regions around the training data samples. ## VI Conclusion We introduce the neighborhood confidence for online black-box confidence estimation of DNNs, motivated by searching the neighborhood of a data point for different decision boundaries. 
No information about the DNN is required and only the top-1 class output is used, which is the minimum possible output of a DNN. This allows using the NHC to assess the classification reliability of externally supplied components. The performance of the NHC is evaluated for different data distribution types deviating from the training data distribution, allowing only strictly limited additional samples for inference, as required for AD/ADAS. In this low data regime, the NHC performs better than or similarly to the most comparable method from the literature, even though this attribution-based confidence requires white-box access to the DNN.
2307.12906
QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network
Supply chain management relies on accurate backorder prediction for optimizing inventory control, reducing costs, and enhancing customer satisfaction. However, traditional machine-learning models struggle with large-scale datasets and complex relationships, and collecting large real-world datasets is challenging. This research introduces a novel methodological framework for supply chain backorder prediction, addressing the challenge of handling large datasets. Our proposed model, QAmplifyNet, employs quantum-inspired techniques within a quantum-classical neural network to predict backorders effectively on short and imbalanced datasets. Experimental evaluations on a benchmark dataset demonstrate QAmplifyNet's superiority over classical models, quantum ensembles, quantum neural networks, and deep reinforcement learning. Its proficiency in handling short, imbalanced datasets makes it an ideal solution for supply chain management. To enhance model interpretability, we use Explainable Artificial Intelligence techniques. Practical implications include improved inventory control, reduced backorders, and enhanced operational efficiency. QAmplifyNet seamlessly integrates into real-world supply chain management systems, enabling proactive decision-making and efficient resource allocation. Future work involves exploring additional quantum-inspired techniques, expanding the dataset, and investigating other supply chain applications. This research unlocks the potential of quantum computing in supply chain optimization and paves the way for further exploration of quantum-inspired machine learning models in supply chain management. Our framework and QAmplifyNet model offer a breakthrough approach to supply chain backorder prediction, providing superior performance and opening new avenues for leveraging quantum-inspired techniques in supply chain management.
Md Abrar Jahin, Md Sakib Hossain Shovon, Md. Saiful Islam, Jungpil Shin, M. F. Mridha, Yuichi Okuyama
2023-07-24T15:59:36Z
http://arxiv.org/abs/2307.12906v2
QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network ###### Abstract Supply chain management relies on accurate backorder prediction for optimizing inventory control, reducing costs, and enhancing customer satisfaction. Traditional machine-learning models struggle with large-scale datasets and complex relationships. This research introduces a novel methodological framework for supply chain backorder prediction, addressing the challenge of collecting large real-world datasets. Our proposed model demonstrates remarkable accuracy in predicting backorders on short and imbalanced datasets. We capture intricate patterns and dependencies by leveraging quantum-inspired techniques within the quantum-classical neural network QAmplifyNet. Experimental evaluations on a benchmark dataset establish QAmplifyNet's superiority over eight classical models, three classically stacked quantum ensembles, five quantum neural networks, and a deep reinforcement learning model. Its ability to handle short, imbalanced datasets makes it ideal for supply chain management. We evaluate seven preprocessing techniques, selecting the best one based on Logistic Regression's performance on each preprocessed dataset. The model's interpretability is enhanced using Explainable Artificial Intelligence techniques. Practical implications include improved inventory control, reduced backorders, and enhanced operational efficiency. QAmplifyNet seamlessly integrates into real-world supply chain management systems, empowering proactive decision-making and efficient resource allocation. Future work involves exploring additional quantum-inspired techniques, expanding the dataset, and investigating other supply chain applications. This research unlocks the potential of quantum computing in supply chain optimization and paves the way for further exploration of quantum-inspired machine learning models in supply chain management. Our framework and QAmplifyNet model offer a breakthrough approach to supply chain backorder prediction, providing superior performance and opening new avenues for leveraging quantum-inspired techniques in supply chain management. ## Introduction Supply chain management (SCM) plays a critical role in ensuring the smooth flow of goods and services from manufacturers to end consumers. In this context, accurate prediction of backorders, which refer to unfulfilled customer orders due to temporary stockouts, is of paramount importance. Supply chain backorder prediction enables proactive inventory management, efficient resource allocation, and enhanced customer satisfaction. It assists in mitigating the negative impacts of stockouts, such as lost sales, decreased customer loyalty, and disrupted production schedules. Predicting backorders for products in the future is challenging, mainly because the demand for a particular product can fluctuate unexpectedly. To develop an accurate predictive model, it is crucial to have an adequate amount of training data derived from the inventory tracking system. This data allows the model to learn the patterns that indicate whether a product will likely be backordered. However, a significant challenge in building such a model is the inherent imbalance in the dataset. The number of samples where a product is backordered is much lower than the number where products are not backordered. This class imbalance creates a skewed dataset, which can negatively impact the model's performance. 
Little research is available on product backordering that specifically addresses the challenges of class imbalance [10, 17]. However, extensive work has been conducted in the past to optimize inventory management. Inventory managers encounter various challenges when faced with material shortages, which can result in complete backlogs or lost orders. Previous literature has categorized material backordering as fixed, partial, or time-weighted backorders [46]. Customers' willingness to wait for a replenished stock depends on factors such as supplier reputation, recency of the backorder placement, and waiting time. Some customers may be patient and wait, while others may seek alternative options due to impatience. In such cases, the supplier experiences sales loss and missed revenue opportunities, leading to customer dissatisfaction and potential doubts about the supplier's inventory management capabilities. Traditional prediction models, predominantly based on classical machine learning (CML) algorithms, have been widely utilized for backorder prediction. However, these models face several challenges when dealing with large-scale datasets typically encountered in supply chain applications. Traditional models often struggle to handle the complexity and dimensionality of these datasets, leading to suboptimal performance and limited scalability. Moreover, the ability to capture intricate patterns and dependencies within the data is crucial for accurate prediction, which remains a challenge for conventional approaches. Despite the widespread use of CML models, tuning millions of parameters when training CML models such as DNNs requires a significant amount of computing power. The fast-rising data volume required for training, particularly in the post-Moore's Law era, exceeds the limit of semiconductor production technology, which limits the field's advancement. On the other hand, quantum computing (QC) has proven to be more effective at solving issues that are insurmountable for conventional computers, such as factoring large numbers and performing unstructured database searches [18]. Nevertheless, because of the noise produced by the quantum gates and the absence of quantum error correction on Noisy Intermediate Scale Quantum (NISQ) devices, QC with substantial circuit depth faces significant difficulties. Hence, creating quantum algorithms with a reasonably noise-resistant circuit depth is of fundamental relevance. Quantum machine learning (QML) based on variational quantum circuits is beginning to outperform CML models [4]. The vastly decreased number of model parameters is one of the key advantages of variational quantum models over their classical counterparts. As a result, variational quantum models reduce the overfitting issues related to CML. Moreover, under some circumstances, they may learn more quickly or attain better test accuracy compared to their conventional counterparts [9]. The variational quantum model plays a vital role as the quantum component of a modern QML architecture, with the circuit parameters being updated by a classical computer [33]. The emergence of quantum-inspired techniques has opened up new avenues for addressing the limitations of CMLs. These techniques, inspired by QC principles, leverage the inherent parallelism and quantum-inspired optimization algorithms to enhance predictive capabilities. 
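To make the notion of a variational quantum model concrete, the following minimal Qiskit sketch builds a two-qubit parameterized circuit whose rotation angles act as trainable weights; the ansatz layout, parameter names, and the use of Qiskit here are illustrative assumptions and not the QAmplifyNet architecture introduced later in this paper.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

# Illustrative two-qubit variational ansatz: trainable single-qubit
# rotations interleaved with an entangling CNOT gate.
thetas = [Parameter(f"theta_{i}") for i in range(4)]
ansatz = QuantumCircuit(2)
ansatz.ry(thetas[0], 0)
ansatz.ry(thetas[1], 1)
ansatz.cx(0, 1)  # entanglement between the two qubits
ansatz.ry(thetas[2], 0)
ansatz.ry(thetas[3], 1)
ansatz.measure_all()

# In a hybrid loop, a classical optimizer evaluates the circuit for the
# current angles, computes a loss from the measurement statistics, and
# proposes updated angles; here we only bind one set of random values.
values = np.random.uniform(0.0, 2.0 * np.pi, size=len(thetas))
bound_circuit = ansatz.assign_parameters(dict(zip(thetas, values)))
print(bound_circuit.draw())
```

With only four trainable angles, such an ansatz illustrates the drastically reduced parameter count of variational quantum models mentioned above.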
QML models exhibit promising potential in handling large-scale datasets, capturing complex patterns, and improving prediction accuracy in various domains. Supply chain backorder prediction (SCBP) can benefit from enhanced model performance, improved accuracy, and more efficient resource allocation by harnessing the power of quantum-inspired techniques. The utilization of QML algorithms can enable the identification of intricate relationships between variables, facilitating more accurate prediction of backorder occurrences. Consequently, these techniques have the potential to optimize SCM, minimize stockouts, reduce costs, and enhance customer satisfaction. In light of the limitations of traditional ML models and the potential advantages offered by quantum-inspired techniques, this research aims to develop a novel Hybrid Quantum-Classical Keras Neural Network (NN) for SCBP. The proposed model combines the flexibility and interpretability of Keras NNs with quantum-inspired optimization algorithms to overcome the limitations of classical approaches. By integrating quantum-inspired techniques into the prediction process, we anticipate achieving improved accuracy, robustness, and scalability in SCBP. The novelty of this research lies in applying QML techniques to the SCBP field. To the best of our knowledge, this study represents the first-ever QML implementation in the SCBP context. By introducing QML to this domain, we aim to explore the potential benefits and advancements that quantum-inspired techniques can bring to SCM. This research contributes to the field of SCM by exploring the potential of QML techniques for accurate and efficient backorder prediction. A novel hybrid Quantum-Classical Neural Network (Q-CNN) was developed as part of this study, combining the strengths of parallel-processed NN computing and quantum physics. Hybrid classical-quantum computing is a computational paradigm that combines classical infrastructure with quantum computers to address specific problem domains. In this approach, classical computers play a crucial role in pre-processing data, controlling quantum machines during computation, and post-processing the results obtained from quantum computations. By harnessing quantum phenomena such as entanglement and superposition, quantum computers possess the ability to perform parallel processing in a manner unprecedented by classical computers. By leveraging the strengths of both classical and quantum computing, hybrid systems enable the utilization of quantum resources while utilizing classical algorithms and techniques to enhance overall computational performance. This synergistic combination allows for the efficient utilization of quantum resources and the effective integration of classical and quantum computing capabilities to tackle complex problems. The hybrid algorithms employed in this study outperformed their classical counterparts by leveraging quantum and classical computing capabilities. In light of these considerations, this research provides a novel and thorough methodology for anticipating inventory backorders. The goal is to maximize profits while minimizing costs related to product backorders, maintaining good relationships with suppliers and customers, and preventing sales from being lost. Customers and businesses alike can profit from precise projections of future backorders for individual products with the help of a well-developed predictive model. 
A current topic of research is the simplification of quantum algorithms for usage with NISQ computers [38]. Quantum algorithms that scale well may be efficiently executed on computers that use photons, superconductors, or trapped ions [19, 21, 43]. Particularly exciting is QML because of its compatibility with current NISQ designs [20, 42]. Predicting product backorders, for example, requires access to massive amounts of data, which is a strength of traditional ML algorithms. For this reason, this research introduces a novel Q-CNN model that can deal with data imbalances even when trained on a small dataset. NISQ devices are effective in running shallow-depth algorithms requiring a few qubits [38]. Given the specific difficulties and prerequisites of product backordering prediction, it becomes sensible to take advantage of QML run on NISQ devices by means of the SCBP dataset. Classification in inspection tests for small-size datasets was made possible by searching for a quantum advantage on the classifier [48]. The open-access Kaggle dataset used in this research was gathered from an 8-week inventory management system [10]. Unfortunately, as shown in Figure 1, the dataset is highly imbalanced, with non-backordered items outnumbering backordered ones by a large margin (137:1). Figure 2 shows a correlation heatmap of the dataset, indicating high correlations among the features, which increases the difficulty of working with the dataset. The issue is made more complicated by the fact that any prediction model will struggle with imbalanced datasets [27]. Gradient boosting model (GBM), random forest (RF), and logistic regression (LR) are only some of the traditional ML methods that have been presented for similar tasks in the past [10, 17]. It has also been common practice to use undersampling and oversampling strategies to rectify grossly unbalanced business data [13, 22]. This research presents an innovative approach for SCBP that incorporates effective preprocessing methods, resulting in a novel quantum-classical ML-based prediction model. There are various steps in the methodological flowchart. We first preprocess each SKU's features using seven possible combinations of methods. We benchmark each preprocessed dataset by applying the LR model and then select the most effective preprocessing technique based on its accuracy. The selected preprocessing tasks involve converting categorical features into numerical features, handling missing values, log transforming numerical features, normalizing feature values within a specific range, and dropping redundant numerical features using variance inflation factor (VIF) treatment. In this classification problem, there are substantially fewer positive samples (backordered) than negative samples (non-backordered). Consequently, we address the issue of class imbalance by employing an undersampling technique called NearMiss. We choose undersampling instead of oversampling because QML models struggle to train on large datasets compared to CML models. Furthermore, we utilize principal component analysis (PCA) to extract four input principal components from the preprocessed dataset. These components capture the most significant features for prediction. Finally, we propose our hybrid Q-CNN model, named QAmplifyNet, which incorporates key aspects of the architecture in its mnemonic name. The "Q" signifies the utilization of QC principles, highlighting the model's quantum component. "Amplify" represents the concept of amplifying information through the model's layers. 
Lastly, "Net" refers to the NN nature of the model, incorporating both classical and QML components. For the performance evaluation of our model, we compare it against eight commonly used CML models, one deep reinforcement learning (RL) model, five quantum NNs, and three quantum-classical stacked ensembles. Through this comprehensive comparison, we aim to demonstrate the superiority and robustness of our proposed QAmplifyNet model for SCBP on short datasets. Despite the excellent accuracy of CML models on this complete dataset, our proposed QAmplifyNet model holds significant value. It showcases remarkable performance on short, imbalanced data, which is a common challenge in SC inventory management. Additionally, the application of QML in this domain represents a pioneering effort, making it the first-ever QML application in the supply chain inventory management field. Using a benchmark SCBP dataset titled _"Can you predict product backorder?"_, we run tests utilizing the proposed model. The experimental findings demonstrate the superior performance of our technique in SCBP, as evaluated by accuracy and area under the receiver operating characteristic (ROC) curve. Moreover, we compare our models to well-known classification models and come to the conclusion that our strategy performs noticeably better than other comparable models. In summary, this paper makes eight-fold contributions: 1. This study represents the pioneering application of QC in the SCM domain. 2. We introduce a novel theoretical framework for predicting inventory backorders. 3. We present a comprehensive data preprocessing technique that combines log transformation, normalization, VIF treatment, and NearMiss undersampling to address the imbalanced nature of the dataset in the rare SCM domain. 4. We propose a hybrid quantum-classical Keras NN-based technique for forecasting product backorders, enhancing the suppliers' overall efficiency. 5. We demonstrate that the hybrid Q-CNN model overcomes the challenge of limited availability of large SCM datasets by showcasing its enhanced performance compared to CML and QML models on short datasets with few features. 6. We enhance the interpretability of the proposed model by implementing Explainable Artificial Intelligence (XAI) methods, specifically SHAP and LIME. 7. Our novel methodology significantly improves prediction accuracy, reducing misclassification errors, especially false positives and false negatives, and ultimately increasing enterprise profitability. 8. Lastly, we discuss how the proposed methodology can be applied homogeneously to other supervised binary classification domains, such as predicting credit card defaults. The paper is organized as follows: the current literature on SCBP, CML models, quantum-inspired models, and RL-based techniques is reviewed in Section "Related Work". It draws attention to the unanswered questions that our proposed model seeks to answer. Section "Methods" introduces the proposed hybrid Q-CNN-based backorder prediction system to address class imbalance on the short dataset. It describes the selected preprocessing steps followed by the architectures and working principles of the models used in this paper. In Section "Results", we use the experimental data and robustness tests to evaluate, compare, and verify the effectiveness of the proposed model. In Section "Discussion", we conclude and make comparisons between the proposed model and alternative methods.
Figure 1: Barplots showing the distribution of null values in the dataset (a) before and (b) after removal. The top of each bar shows the number of samples present in each feature.
Possible real-world SCM implementations of the suggested method are also discussed. The report finishes with a summary of the main findings and the main contributions in Section "Conclusion and Future Work". Some future research directions in SCBP employing quantum-inspired approaches are also presented. ## Related Work In the field of scientific research on inventory management, various studies have been conducted to improve forecasting and decision-making related to backorders. [31] proposed a solution based on Markov decision processes to define inventory and backorder strategies. They treated production system yield as a stochastic process. [53] examined a stock inventory system incorporating periodic reviews and a partial backorder forecast. They developed a framework considering the distribution of demand and its factors to assess uncertainty in the inventory system. [37] analyzed estimating errors and derived an inventory model's predicted lead time demand distribution. This distribution could be used to optimize inventory management. [7] determined ordering policies in inventory systems using RL. They viewed SCM as a multi-agent system and utilized the Q-learning technique to solve the model. [1] combined the N-retailer problem and overall cost considerations to develop an objective function for ordering, storing, and backordering in a single inventory. They jointly optimized three decisions (lot sizing, routing and distribution, and replenishment) to maximize net present value.
Figure 2: Spearman correlation heatmap to analyze the relationship between the target feature 'went_on_backorder' and 14 preprocessed input features of the SCBP dataset.
[12] emphasized the importance of incorporating backorder decisions and costs into an ideal inventory policy, noting that previous models often overlooked SCBP. [5] introduced fuzzy number-based optimization models to account for uncertain demand and lead times, outperforming conventional methods. [24] presented a fuzzy model that included human reasoning with backorders, while [29] and [47] constructed economic ordering quantity models with various factors such as special sale pricing, poor quality, partial backordering, and quantity discounts. [26] proposed an integrated inventory model that optimized multiple decisions simultaneously, including lead time, lot size, number of shipments, and safety factor. In a fuzzy condition, [23] analyzed a warehouse model incorporating backorders using fuzzy numbers and a graded mean integration model. [16] optimized spare component allocation decisions for serviceable standalone systems with dependent backorders. [11] devised an approach to forecasting orders for line-replaceable unit components with backorders, highlighting the need to consider the dynamic features of these factors. [45] proposed a framework to reduce overall costs and anticipated risk costs of backorder replenishment plans using a Bayesian belief network. [52] investigated a dynamic rationing scheme that considered demand dynamics, while [2] used a Markov decision support system to determine the best rationing levels across all categories of demand. [49, 50] 
developed non-parametric and parametric prediction models, such as kernel density and GARCH algorithms, to predict safety stock and reduce long lead times.
Figure 3: Methodological framework illustrating (a) data sources, (b) data collection and splitting, (c) data preprocessing, and (d) proposed Q-CNN model development for SCBP.
[10] proposed ensemble-based machine learning algorithms, GBM and RF paired with undersampling, for SCBP. [17] discussed the benefits and limitations of ensemble prediction methods and undersampling in dealing with noisy data and improving prediction accuracy. [30] improved SCBP with the use of the Conditional Wasserstein Generative Adversarial Network (CWGAN) model along with Randomized Undersampling (RUS). Initially, the majority-class non-backorder samples were reduced using RUS. Second, CWGAN was used as a technique for oversampling to provide superior backorder samples. Ultimately, RF was implemented to predict backorders. [44] successfully addressed the class imbalance problem with densely connected DNNs that combined SMOTE and randomized undersampling. The experimental outcomes indicate better prediction performance and predicted profit on a thorough product backordering dataset, proving the proposed model's superiority over existing ML approaches. In handling noisy data and minimizing overfitting, ensemble forecasting models have proven superior to non-parametric and parametric forecasting methods. However, their computational efficiency becomes a limitation when analyzing large warehouse datasets in real time, limiting their practical utility. On the other hand, undersampling techniques can enhance computational performance, but they may also exclude potentially valuable training data and compromise prediction accuracy. To address these challenges, we propose a hybrid Q-CNN applied to a short backorder dataset; our preprocessing approach involves several steps. Firstly, we apply a log transform to the data, followed by standard scaling to normalize the features. We also address multicollinearity issues by implementing variance inflation factor (VIF) treatment. With the training dataset being unbalanced, we employ the NearMiss undersampling method, which involves deliberately reducing the majority-class occurrences. The choice of a hybrid Q-CNN for analyzing the short dataset stems from its unique advantages. Combining classical and QC techniques, this approach harnesses the power of quantum algorithms for specific tasks while leveraging the robustness and versatility of CML frameworks like Keras. Exploiting quantum principles like superposition and entanglement through the use of quantum-inspired algorithms inside a classical NN framework can result in efficient and more accurate calculations. Compared to purely classical or purely quantum models, the hybrid Q-CNN is anticipated to perform better in several respects. Firstly, the combination of classical and quantum techniques enables more powerful computations, leading to increased accuracy in backorder predictions. The utilization of quantum-inspired algorithms within the classical framework allows for more efficient exploration of the solution space and better identification of patterns and trends in the data. The hybrid approach offers practical advantages over pure quantum models. Quantum computers are still in the early stages of development, and their availability and scalability may pose limitations in real-world applications. 
The hybrid model can leverage existing computational resources and infrastructures by integrating CML frameworks like Keras, making it more accessible and practical for implementation in real-world SC environments. This integration allows for more accurate predictions, improved decision-making, and better inventory control, making it a promising approach for addressing the challenges of backorder management in real-world contexts. Our study focused on analyzing a short and imbalanced dataset obtained by undersampling a larger dataset. We aimed to benchmark our proposed hybrid Q-CNN against CML, QML, and RL models. In working with a short and imbalanced dataset, our hybrid model showcased its strength and outperformed the CML, QML, and RL models. It is essential to emphasize that our hybrid model's superior performance on this particular short and imbalanced dataset highlights its effectiveness in addressing the specific challenges associated with such data characteristics. This milestone underscores the practicality and utility of the hybrid Q-CNN in real-world scenarios where acquiring large datasets may be difficult, yet accurate predictions are crucial. Our findings have implications for domains with similar short and imbalanced datasets. The success of our proposed model indicates its practicality and usefulness in situations where obtaining extensive datasets is challenging, but accurate predictions are of paramount importance. ## Methods ### Data Collection We used a benchmarking dataset called _"Can you predict product backorder?"_ obtained from the Kaggle data repository to conduct extensive experiments on our proposed hybrid Q-CNN-based prediction model. The dataset contains many orders for various products. A total of 22 features characterize the eight-week trajectory of each order, and a target binary feature denotes whether the corresponding product went on backorder. Table 1 summarizes the features.

| Features | Notation | Description |
| --- | --- | --- |
| sku (Stock Keeping Unit) | x30 | A distinctive identifier for each instance in the dataset. |
| national_inv | x1 | Current level of inventory of the product. |
| lead_time | x2 | Time taken for a shipment to be delivered from its starting point to the final destination. |
| in_transit_qty | x3 | Amount of product currently in transit, calculated from the most recent picking slip or the cumulative quantity. |
| forecast_3_months, forecast_6_months, forecast_9_months | x4, x5, x6 | Sales forecasts for the product over the following three, six, and nine months, respectively. |
| sales_1_month, sales_3_month, sales_6_month, sales_9_month | x7, x8, x9, x10 | Sales quantity of the product in the last one, three, six, and nine months, respectively. |
| min_bank | x11 | Minimum recommended stocking level for the product. |
| potential_issue | x12 | Any problems or issues associated with the product or its parts. |
| pieces_past_due | x13 | Quantity of overdue parts for the product, if any. |
| perf_6_months_avg, perf_12_months_avg | x14, x15 | Average performance of the product over the past six and twelve months, respectively. |
| local_bo_qty | x16 | Amount of stock orders that are currently overdue. |
| ppap_risk, deck_risk, stop_auto_buy, oe_constraint, rev_stop | x17-x21 | Binary flags (yes or no) associated with specific risks or constraints related to the products. |
| went_on_backorder | y | Target variable indicating the product's backorder status. |

Table 1: Descriptions of the dataset features of a particular product order.

As product backorders are not typical, the class distribution of this dataset is highly uneven. Only 13,981 orders (0.72%) for products were delayed out of a total of 1,929,935. The remaining 1,915,954 orders (99.28%) were negative cases. Figure 4 shows the dataset's class distribution.
Figure 4: Class distribution of the imbalanced dataset used in our study.
This dataset has an imbalance ratio of 1:137, making it extremely unbalanced. Using stratified k-fold cross-validation with five splits and no shuffling, we divided the dataset into training and testing sets while maintaining the imbalance ratio. ### Data Preprocessing Data preprocessing is a crucial step in enhancing the performance of ML models. Our preprocessing approach initially focused on identifying and addressing irrelevant data points. For instance, we observed that variables like perf_6_month_avg and perf_12_month_avg contained negative values, which were deemed inconsistent and removed. We encountered features that included the symbol '?' indicating missing values, which were also eliminated. Furthermore, we transformed categorical features into binary numerical representations to facilitate analysis, separating them from the original dataset for subsequent processing and analysis. For instance, the values of certain features containing either 'yes' or 'no' were converted into binary 1 and 0, respectively. We tried seven different combinations of preprocessing steps and tested LR on each preprocessed dataset to evaluate and choose the best preprocessing pipeline for our model development; a minimal sketch of this benchmarking procedure is given below. 
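The following sketch illustrates this selection procedure; the helper function, the toy stand-in data, and the scikit-learn calls are illustrative assumptions, since the paper only specifies that LR is evaluated on each of the seven preprocessed datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def select_preprocessing(candidates):
    """Pick the preprocessing variant on which LR scores the best ROC-AUC.

    candidates: dict mapping a technique name to a tuple (X, y) holding the
    dataset after that preprocessing variant has been applied.
    """
    cv = StratifiedKFold(n_splits=5, shuffle=False)  # mirrors the paper's split
    scores = {
        name: cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              cv=cv, scoring="roc_auc").mean()
        for name, (X, y) in candidates.items()
    }
    return max(scores, key=scores.get), scores

# Toy stand-in data; in the paper each entry would hold the dataset
# produced by one of the seven preprocessing techniques listed below.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 14))
y_demo = rng.integers(0, 2, size=200)
best, scores = select_preprocessing({"demo_variant": (X_demo, y_demo)})
print(best, scores)
```

ROC-AUC is used as the selection score here because it is the metric the paper reports for this comparison (Table 2).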
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline **Features** & **Notation** & **Description** \\
\hline sku (Stock Keeping Unit) & x0 & A distinctive identifier for each instance in the dataset. \\
\hline national\_inv & x1 & Current level of inventory of the product. \\
\hline lead\_time & x2 & Time taken for a shipment to be delivered from its starting point to the final destination. \\
\hline in\_transit\_qty & x3 & This quantity is calculated based on the most recent picking slip or the cumulative quantity, and it represents the amount of product currently in transit. \\
\hline forecast\_3\_month, forecast\_6\_month, forecast\_9\_month & x4, x5, x6 & Sales forecasts for the product over the following three, six, and nine months, respectively. \\
\hline sales\_1\_month, sales\_3\_month, sales\_6\_month, sales\_9\_month & x7, x8, x9, x10 & Sales quantity of the product in the last one, three, six, and nine months, respectively. \\
\hline min\_bank & x11 & Minimum recommended stocking level for the product. \\
\hline potential\_issue & x12 & Any problems or issues associated with the product or its parts. \\
\hline pieces\_past\_due & x13 & Quantity of overdue parts for the product, if any. \\
\hline perf\_6\_month\_avg, perf\_12\_month\_avg & x14, x15 & Average performance of the product over the past six and twelve months, respectively. \\
\hline local\_bo\_qty & x16 & Amount of stock orders that are currently overdue. \\
\hline ppap\_risk, deck\_risk, stop\_auto\_buy, oe\_constraint, rev\_stop & x17\(-\)x21 & Binary flags (yes or no) associated with specific risks or constraints related to the products. \\
\hline went\_on\_backorder & y & Target variable indicating the product's backorder status. \\
\hline
\end{tabular}
\end{table}
Table 1: Descriptions of the dataset features of a particular product order.

Figure 4: Class distribution of the imbalanced dataset used in our study.

Seven alternative techniques are as follows:
1. _IFLOF:_ We removed anomalies from the dataset using the Isolation Forest Local Outlier Factor (IFLOF), a method that combines the Isolation Forest algorithm with the Local Outlier Factor algorithm. IFLOF identifies outliers by constructing an ensemble of isolation trees and measuring the local outlier factor for each data point, providing a measure of abnormality for each instance in the dataset.
2. _IFLOF+VIF:_ This preprocessing step combines IFLOF outlier detection with the VIF. VIF is a measure of multicollinearity, which assesses the correlation between predictor variables in a regression model. Applying IFLOF to identify outliers and then using VIF to identify highly correlated variables addresses both outlier detection and multicollinearity issues.
3. _IQR+VIF:_ We applied the Interquartile Range (IQR), a statistical dispersion measure, to identify outliers, and then applied the VIF to detect and remove multicollinearity.
4. _VIF:_ We applied only VIF in this method, without any log transformation, standard scaling, or anomaly detection algorithm.
5. _No log transform+VIF:_ This preprocessing step applies VIF to the dataset without performing a log transform on the variables, allowing for the detection of multicollinearity without the influence of log transformations.
6. _RobScaler+VIF:_ In this alternative, we applied RobScaler, a robust feature-scaling method, to the dataset before using VIF to detect multicollinearity.
RobScaler is particularly useful when dealing with data that contains outliers, as it scales the features by removing the median and scaling them according to the interquartile range.
7. _Log transform+StandardScaler+VIF:_ This proposed preprocessing step involves three stages. First, a log transform is applied to the variables, which can help normalize skewed data and handle nonlinear relationships; we removed the infinity values from the resulting data. Then, the StandardScaler is used for feature scaling, ensuring that all variables have a mean of 0 and a standard deviation of 1. Finally, the VIF with threshold = 5 is applied to detect multicollinearity in the transformed and scaled dataset. This method aims to handle skewness, standardize the data, and identify multicollinearity simultaneously. In this technique, we selected a subset of 14 features (x1-x4, x10-x13, x15, x16, x17-x20) from the dataset, excluding the remaining features because of multicollinearity among them.

To choose the best preprocessing method, we undersampled the dataset to a majority-to-minority ratio of 3:1 using the NearMiss algorithm. Compared to the other approaches, Log transform+StandardScaler+VIF produces the best ROC-AUC of 66% with the LR model (see Table 2). LR was chosen specifically to evaluate the performance of the different preprocessing methods because it is a widely used and well-established classification algorithm; applying the various preprocessing methods and evaluating their effects on LR's performance gives valuable insight into which techniques are most effective. After selecting the best preprocessing technique, we balanced and shortened the preprocessed dataset using the NearMiss algorithm; the result served as the input data for all the models used in this study. The training data comprises 1,000 samples with a 1:1 majority-to-minority class ratio. The test data was intentionally kept imbalanced, undersampled to a majority-to-minority ratio of 3:1, yielding 267 samples, of which 67 went to backorder and the rest did not.

### Classical Models

We implemented 8 CML models using the scikit-learn [36] library, which provides a comprehensive set of tools for ML tasks in Python. Additionally, the parallel-computing library Dask [40] was utilized to enhance the efficiency and scalability of these algorithms; it enables the distribution and execution of computations across multiple processors or machines, allowing for faster processing. We performed hyperparameter tuning using GridSearchCV with 3-fold cross-validation to identify the optimal hyperparameters for each CML model, as shown in Table 3. The CML models used include Categorical Boosting (Catboost), Light Gradient Boosting Machine (LGBM), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Artificial Neural Network (ANN), K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and Decision Tree (DT). The classical ANN architecture employed in this study consists of an input layer with 14 neurons, followed by two dense layers with 14 and 10 neurons, respectively.

### Stacked Ensemble Models

Using the qiskit [39] and qiskit_machine_learning modules, we explore the following classically-stacked quantum ensemble algorithm:
1. The base classifiers are trained using the provided training data.
2. The trained base classifiers are then used to make predictions on both the train and test datasets.
3. The output labels generated by the base classifiers on the training and testing data are appended as additional features to the original training and testing datasets.
4. Next, the meta-classifier is trained using the updated training data, and its performance is evaluated on the updated testing data to obtain the final prediction values.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{2}{|l|}{**Performance Metrics**} & \multicolumn{7}{c|}{**Preprocessing Techniques**} \\
\hline
 & & **IFLOF** & **IFLOF+VIF** & **IQR+VIF** & **VIF** & **No log transform+VIF** & **RobScaler+VIF** & **Log transform+StandardScaler+VIF (Proposed)** \\
\hline
\multirow{3}{*}{Not Backorder (0)} & Precision & 90\% & 99\% & 99\% & 100\% & 100\% & 99\% & 100\% \\
\cline{2-9} & Recall & 17\% & 19\% & 21\% & 40\% & 58\% & 100\% & 59\% \\
\cline{2-9} & F1-score & 29\% & 31\% & 35\% & 58\% & 73\% & 100\% & 74\% \\
\hline
\multirow{3}{*}{Backorder (1)} & Precision & 1\% & 1\% & 1\% & 1\% & 1\% & 0\% & 1\% \\
\cline{2-9} & Recall & 84\% & 82\% & 79\% & 88\% & 62\% & 0\% & 72\% \\
\cline{2-9} & F1-score & 1\% & 1\% & 2\% & 2\% & 2\% & 0\% & 2\% \\
\hline
\multicolumn{2}{|l|}{Accuracy} & 17\% & 19\% & 22\% & 41\% & 58\% & 99\% & 60\% \\
\hline
\multicolumn{2}{|l|}{ROC AUC} & 50\% & 50\% & 50\% & 64\% & 60\% & 50\% & **66\%** \\
\hline
\multicolumn{2}{|l|}{Micro average precision} & 50\% & 50\% & 50\% & 50\% & 50\% & 50\% & 50\% \\
\hline
\multicolumn{2}{|l|}{Micro average recall} & 50\% & 50\% & 50\% & 50\% & 60\% & 50\% & 66\% \\
\hline
\end{tabular}
\end{table}
Table 2: Performance evaluation of the LR model on the undersampled data with the different preprocessing techniques compared in this study.

\begin{table}
\begin{tabular}{|l|l|}
\hline **Models** & **Best hyperparameters** \\
\hline
\multirow{22}{*}{**Catboost**} & boost\_from\_average = False \\
\cline{2-2} & boosting\_type = 'Plain' \\
\cline{2-2} & border\_count = 254 \\
\cline{2-2} & depth = 20 \\
\cline{2-2} & devices = '0:1' \\
\cline{2-2} & early\_stopping\_rounds = 500 \\
\cline{2-2} & eval\_metric = 'AUC' \\
\cline{2-2} & feature\_border\_type = 'GreedyLogSum' \\
\cline{2-2} & grow\_policy = 'Lossguide' \\
\cline{2-2} & leaf\_estimation\_backtracking = 'AnyImprovement' \\
\cline{2-2} & learning\_rate = 0.5 \\
\cline{2-2} & loss\_function = 'Logloss' \\
\cline{2-2} & max\_leaves = 100 \\
\cline{2-2} & model\_size\_reg = 0.5 \\
\cline{2-2} & posterior\_sampling = False \\
\cline{2-2} & random\_seed = 786 \\
\cline{2-2} & random\_strength = 1 \\
\cline{2-2} & rsm = 1 \\
\cline{2-2} & scale\_pos\_weight = 3 \\
\cline{2-2} & score\_function = 'cosine' \\
\cline{2-2} & sparse\_features\_conflict\_fraction = 0 \\
\cline{2-2} & task\_type = 'GPU' \\
\hline
\multirow{5}{*}{**LGBM**} & colsample\_bytree = 1.0 \\
\cline{2-2} & learning\_rate = 0.01 \\
\cline{2-2} & max\_depth = 5 \\
\cline{2-2} & n\_estimators = 500 \\
\cline{2-2} & subsample = 0.8 \\
\hline
\multirow{7}{*}{**RF**} & bootstrap = True \\
\cline{2-2} & criterion = 'gini' \\
\cline{2-2} & max\_depth = 30 \\
\cline{2-2} & max\_features = 'auto' \\
\cline{2-2} & min\_samples\_leaf = 2 \\
\cline{2-2} & min\_samples\_split = 2 \\
\cline{2-2} & n\_estimators = 200 \\
\hline
\multirow{6}{*}{**XGBoost**} & learning\_rate = 0.3 \\
\cline{2-2} & max\_depth = 20 \\
\cline{2-2} & min\_child\_weight = 2 \\
\cline{2-2} & n\_estimators = 100 \\
\cline{2-2} & scale\_pos\_weight = 0.5 \\
\cline{2-2} & colsample\_bytree = 1 \\
\hline
\multirow{2}{*}{**KNN**} & algorithm = 'ball\_tree' \\
\cline{2-2} & n\_neighbors = 4 \\
\hline
\multirow{2}{*}{**SVM**} & kernel = 'rbf' \\
\cline{2-2} & C = 0.9 \\
\hline
\multirow{3}{*}{**DT**} & min\_samples\_leaf = 6 \\
\cline{2-2} & criterion = 'gini' \\
\cline{2-2} & max\_depth = 3 \\
\hline
\end{tabular}
\end{table}
Table 3: Best hyperparameters selected by GridSearchCV for the CML models.
#### QSVM+LGBM+LR

We initialize two base classifiers, namely the Quantum Support Vector Machine (QSVM) and LGBM. For the QSVM classifier, we utilize the ZZFeatureMap to calculate the kernel matrix. The computation of the kernel matrix is performed using the following equation:

\[K(\vec{x}_{i},\vec{x}_{j})=K_{ij}=|\langle\phi^{\dagger}(\vec{x}_{j})|\phi(\vec{x}_{i})\rangle|^{2} \tag{1}\]

where \(x_{i},x_{j}\in X\) (the training dataset) and \(\phi\) represents the feature map. To simulate the results of the quantum computer, we employ the statevector simulator, which can be substituted with a hardware backend for hardware results. For the ensemble construction, we utilize LR as the meta-classifier, which combines the predictions of the two base classifiers.

#### VQC+QSVM

We used the ZZFeatureMap to define the feature map for the Variational Quantum Classifier (VQC) as the base classifier, with QSVM as the meta-classifier. The input data was mapped to a higher-dimensional quantum space using this feature map. For the VQC, we chose the TwoLocal ansatz, which uses \(R_{Y}\) (Equation 6) and \(R_{Z}\) (Equation 7) gates for the parameterized rotations and the \(CZ\) (Equation 2) gate for entanglement.

\[CZ=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{bmatrix} \tag{2}\]

This ansatz was repeated for two iterations. The COBYLA optimizer and a QuantumInstance with the statevector_simulator backend were then configured. The QSVM's kernel was initialized using the QuantumKernel, which employed the chosen feature map and a QuantumInstance with the statevector_simulator backend.

#### VQC+LGBM

We utilized the previously mentioned initialization techniques for both the VQC and LGBM models, employing them as the base and meta classifiers, respectively. These models were then integrated into a stacking ensemble framework, where the predictions of the base classifier were combined and used as features for the meta-classifier LGBM.

### Quantum Neural Network (QNN) Models

We developed the QNN models with PennyLane [3], which is employed to simulate quantum circuits and conduct quantum experiments, facilitating the development of QC programs.

#### MERA-VQC

Our scheme uses an ansatz based on a tensor network, the Multi-scale Entanglement Renormalization Ansatz (MERA). Only 16 variables were designed; the input was amplitude-embedded, and one layer was used for each tensor-network block. We initialized the device as the QC backend with 4 qubits using PennyLane's QML library [3]. The entanglement structure of the MERA circuit was implemented using CNOT gates between qubits 0 and 1, and two rotation gates, \(R_{Y}\), were applied to each qubit using the specified weights. The number of block wires, the parameters per block, and the total number of blocks parameterize the MERA quantum circuit. A quantum circuit using the defined MERA structure was then implemented to process the training data. We defined a VQC classifier that utilized the previously constructed quantum circuit; the classifier took weights, a bias, and classical data as inputs and produced predictions based on the output of the circuit.
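As an illustration, a circuit along these lines can be assembled with PennyLane's qml.MERA template; the measured wire, the zero-padding of the amplitude embedding, and all names here are assumptions rather than details of the original implementation.

```python
# A minimal sketch of a MERA-based variational circuit, assuming a
# 2-wire block of one CNOT followed by one R_Y per wire (as in the text).
import pennylane as qml

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)

def block(weights, wires):
    # Entangle the two block wires, then rotate each with a trainable R_Y.
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])

n_block_wires, n_params_block = 2, 2
n_blocks = qml.MERA.get_n_blocks(range(n_wires), n_block_wires)

@qml.qnode(dev)
def mera_classifier(features, template_weights):
    # Embed up to 2**4 = 16 classical values as state amplitudes.
    qml.AmplitudeEmbedding(features, wires=range(n_wires),
                           normalize=True, pad_with=0.0)
    qml.MERA(range(n_wires), n_block_wires, block,
             n_params_block, template_weights)
    return qml.expval(qml.PauliZ(n_wires - 1))  # assumed readout wire
```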
We implemented a square loss function to measure the difference between the predicted and true labels, and an accuracy function to assess the model's performance. We defined a cost function quantifying the overall loss between the predictions and the true labels, which was used to optimize the weights and bias parameters and thereby enable the model to learn from the training dataset.

#### RY-CNOT-VQC

The name of this 6-layered classifier highlights the use of \(R_{Y}\) and CNOT gates in the circuit structure, providing more detailed information about the model's architecture. We employed a 2-qubit simulator to translate classical vectors into quantum state amplitudes. The circuit was encoded using the method described by [34]. Also, following the work of [35], we had to break down controlled Y-axis rotations into simpler circuits. The quantum state preparation process was defined using quantum gates such as \(R_{Y}\) (rotation around the y-axis), controlled-NOT (CNOT), and \(Pauli-X\) gates. The primary quantum circuit incorporates the state preparation process, applying multiple rotation layers based on the given weights. A function then applies rotation gates on qubits 0 and 1 and performs a CNOT operation between them. The quantum circuit was evaluated on a test input by applying the state preparation process and estimating the expectation value of the \(Pauli-Z\) operator on qubit 0.

#### Classical NN+Encoder+QNN

As suggested in [25], this hybrid model is made up of a classical NN, an encoder circuit, and a QNN. The quantum circuit comprises two qumodes, and each vector entry was used as the parameter of the available quantum gates to encode classical data into quantum states. Two 10-neuron hidden layers, each with an ELU activation function, and a 14-neuron output layer make up the classical NN. The 14 entries of the classical NN's output vector are then fed as input parameters into the squeezers, interferometers, displacement gates, and Kerr gates; the QNN applies interferometers, squeezers, displacement gates, and Kerr gates in sequence. Using the expectation value \(\langle\phi_{k}|X|\phi_{k}\rangle\) of the \(Pauli-X\) operator for the final state \(|\phi_{k}\rangle\) of each qumode, a two-element vector \([\langle\phi_{0}|X|\phi_{0}\rangle,\,\langle\phi_{1}|X|\phi_{1}\rangle]\) was constructed. The ROC value of this model is 71.09%, and the threshold closest to the optimal ROC point is 54%.

### Deep Reinforcement Learning Model

We used TensorFlow 2.3+ [32] and TF Agents 0.6+ [15] to implement a Double Deep Q-Network (DDQN) [28]. The classification problem is treated as an Imbalanced Classification Markov Decision Process, in which an episode ends when the agent misclassifies a minority-class sample, but not when it misclassifies a majority-class sample. The training process involved 100,000 episodes, and a replay memory was used with a length matching the number of warmup steps. Mini-batch training was performed with a batch size of 32, and the Q-network was updated using 2,000 steps of data collected during each episode. The policy was updated every 500 steps, and a soft update strategy with a blending factor of 1 was employed to update the target Q-network every 800 steps. The model architecture consisted of three dense layers with 256 units and ReLU activation, each followed by a dropout layer with a rate of 0.2. The final layer directly output the Q-values. Adam optimization was applied with a learning rate of 0.00025, and future rewards were not discounted.
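A minimal Keras sketch of the Q-network just described (layer sizes from the text; the 14-feature input width and the two actions are assumptions, and the TF-Agents environment and replay plumbing are omitted):

```python
import tensorflow as tf

def build_q_network(n_features=14, n_actions=2):
    # Three 256-unit ReLU layers, each followed by dropout at rate 0.2;
    # the final layer emits one linear Q-value per action.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu",
                              input_shape=(n_features,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(n_actions),
    ])

optimizer = tf.keras.optimizers.Adam(learning_rate=0.00025)
```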
The exploration rate decayed from 1.0 to min_epsilon over the first one-tenth of the total episodes, and the minimum (and final) probability of choosing a random action was set to 0.5.

### Proposed QAmplifyNet Model

Figure 3 presents an overview of our proposed methodological framework. The first phase in the framework is gathering baseline information, including supplier efficiency, lead times, inventory levels, and product sales, from a wide variety of data sources. These data are then combined and grouped into weekly time intervals for orders. The dataset is subsequently divided into training and testing sets. The collected data undergoes preprocessing using our suggested 'Log transformation+Standard Scaling+VIF treatment' method to address the common anomalies found in manufacturing industrial sensor data. This involves eliminating inconsistent data points, managing null values, and scaling and normalizing the data within a specified range. We applied PCA to both the train and test datasets to prepare the input for our 2-qubit Amplitude Encoder, resulting in 4 features. This dimensionality choice aligns with the model's requirements: a 2-qubit amplitude encoder holds \(2^{2}=4\) amplitudes, so \(\log_{2}4=2\) qubits suffice for the 4-dimensional classical input. The aggregated data is then prepared for predictive analytics, employing a hybrid Q-CNN named QAmplifyNet as the core component of the proposed framework. The classical layers process the input data, while the quantum layer performs quantum computations on the encoded data. This comprehensive framework enables us to effectively leverage the collected data and utilize the hybrid model for analysis and prediction. In our implementation, we leveraged the capabilities of PennyLane [3] to convert QNodes into Keras layers. This integration allowed us to combine these quantum layers with the diverse set of classical layers available in Keras, enabling the creation of genuinely hybrid models.

Figure 5: Model architecture of the QAmplifyNet model.

Figure 5 shows the architecture and summary of QAmplifyNet, a Keras Sequential model consisting of an input layer, three classical hidden layers, one quantum layer, and a classical output layer. Here is an explanation of each layer:
1. _Input Layer:_ The input layer accepts the 4 PCA features and comprises 4 neurons.
2. _Hidden Layer 1:_ This dense layer has 512 units with a ReLU activation function. It receives input from the input layer (with a dimension of 4) and is set as non-trainable, serving the purpose of embedding the input data.
3. _Hidden Layer 2:_ The second dense layer has 256 units with a ReLU activation function. It receives input from the previous layer and is non-trainable.
4. _Hidden Layer 3:_ This dense layer has 4 units with a ReLU activation function. It takes input from the previous layer, is non-trainable, and passes a 4-dimensional output to the quantum layer.
5. _QNN KerasLayer:_ This layer incorporates a 2-qubit quantum node (QNode) and its weight shapes. It represents the quantum part of the hybrid model: it takes the four-dimensional classical output of the previous dense layer and encodes it into four amplitudes, representing a quantum state of two qubits.
6. _Output Layer:_ The final output probabilities are generated via a softmax activation function in the output dense layer.
Since there are just two classes to distinguish, this layer has only 2 units. The softmax activation function can be characterized as follows:

\[\sigma(\vec{\theta})_{i}=\frac{e^{\theta_{i}}}{\sum_{j=1}^{K}e^{\theta_{j}}} \tag{3}\]

where \(\sigma\) is the softmax function, \(\vec{\theta}\) the input vector, \(e^{\theta_{i}}\) the standard exponential function applied to each element of the input vector, and \(K\) the number of classes.

We employed the Adam optimizer with a learning rate of 0.01 and the 'binary_crossentropy' loss function. In the QAmplifyNet model, we have implemented distinct classical and quantum parts that work together to form the overall architecture. Let's delve into the details of each part:

#### Classical Part

The classical part of the model primarily consists of classical layers that operate on classical data. In our specific implementation, we have used classical dense layers with various activation functions (e.g., ReLU) and configurations. These classical layers process the input data using classical computations, performing operations like linear transformations and nonlinear activations. Our model has three classical dense layers: Dense Layer 1, Dense Layer 2, and Dense Layer 3. These layers receive inputs from the previous layer and are set as non-trainable, as indicated by the 'trainable=False' parameter. The classical part culminates in Dense Layer 4, which has two units and employs the Softmax activation function for generating the final output probabilities.

#### Quantum Part

The quantum part of the model is integrated into the classical part using the 'qml.qnn.KerasLayer' from PennyLane [3]. This part includes the QNode, which represents the quantum circuit, and the weight shapes that define the structure of the quantum operations. In our implementation, the QNode consists of quantum operations from PennyLane's 'AmplitudeEmbedding' (AE) and 'StronglyEntanglingLayers' (SEL) templates.

Because of the quantum nature of the computer's operation, classical data items must be embedded as quantum states on qubits before they can be processed. In the circuit, the state preparation component, AE, is responsible for encoding classical data onto the two data qubits. The key advantage of AE is its ability to handle significantly large amounts of information with a relatively small number of qubits. With amplitude encoding, the number of available amplitudes is practically limitless, allowing a significant amount of data to be encoded. Notably, the number of qubits required to encode a given number of features follows a logarithmic relationship (\(\log_{2}(n)\)): as the number of data features increases, only a logarithmic increase in the number of qubits is needed. This scalability enables a vast amount of information to be encoded with each additional qubit, making amplitude encoding a powerful approach for handling complex datasets in QC. The quantum layer is a parameterized quantum circuit comprising an embedding circuit and a variational circuit (see Figure 6). The embedding circuit incorporates an Amplitude Encoder, which is designed to encode a maximum of \(2^{n}\) data features into the amplitudes of a quantum state of \(n\) qubits; equivalently, a vector containing \(N\) features can be encoded using \(\lceil\log_{2}N\rceil\) qubits.
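Putting these pieces together, the following is a sketch of the full hybrid model using PennyLane's Keras integration; the single SEL repetition is taken from the depth shown in Figure 6, and the remaining settings follow the text.

```python
# Sketch of QAmplifyNet: frozen classical stack, 2-qubit AE + SEL QNode,
# and a softmax head, as described above.
import tensorflow as tf
import pennylane as qml

n_qubits, n_layers = 2, 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    # Encode the 4 classical values as the amplitudes of a 2-qubit state.
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_layers, n_qubits, 3)}  # 3 * n * L parameters
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_dim=4,
                          trainable=False),
    tf.keras.layers.Dense(256, activation="relu", trainable=False),
    tf.keras.layers.Dense(4, activation="relu", trainable=False),
    qlayer,
    tf.keras.layers.Dense(2, activation="softmax"),
])
# Labels are expected one-hot encoded to match the two softmax outputs.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
```

The amplitude-encoding map realized by the first template in this QNode is formalized next.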
The amplitudes of a quantum state \(|\phi_{x}\rangle\) with \(n\) qubits can be thought of as a representation of a normalized classical datapoint \(x\) with \(N\) dimensions:

\[|\phi_{x}\rangle=\sum_{i=1}^{N}x_{i}|i\rangle \tag{4}\]

In the given equation, where \(N\) is equal to \(2^{n}\), \(x_{i}\) represents the \(i\)-th element of the vector \(x\), and \(|i\rangle\) refers to the \(i\)-th state in the computational basis; \(x_{i}\) can be a float or an integer. By definition, the vector \(x\) must be normalized:

\[\sum_{i=1}^{N}|x_{i}|^{2}=1 \tag{5}\]

If the number of features to encode is not a power of 2, the remaining amplitudes can be filled with constant values. The AE technique transforms the 4 features obtained from the classical component into the amplitudes of a quantum state with 2 qubits.

\[R_{Y}(\phi)=\begin{bmatrix}\cos(\phi/2)&-\sin(\phi/2)\\ \sin(\phi/2)&\cos(\phi/2)\end{bmatrix} \tag{6}\]

\[R_{Z}(\psi)=\begin{bmatrix}e^{-i\psi/2}&0\\ 0&e^{i\psi/2}\end{bmatrix} \tag{7}\]

\[CNOT=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{bmatrix} \tag{8}\]

In the variational stage, the number of SEL repetitions, \(L\), is variable. The SEL consists of generic trainable rotation gates \(Rot(\alpha_{i},\beta_{i},\gamma_{i})\) implemented on qubits 0 and 1, followed by a set of CNOT gates connecting adjacent qubit pairs, with the last qubit regarded as a neighbor of the first. The SEL depth of an \(n\)-qubit circuit can be modified to tune the complexity of the circuit; the model has exactly \(3\times n\times L\) trainable parameters. The SEL follows a circuit-centric design, in which each single-qubit gate, denoted \(G\), is represented by a \(2\times 2\) unitary matrix, as shown in Equation 9, where \(\theta,\phi,\psi\in[0,\pi]\):

\[G(\theta,\phi,\psi)=\begin{bmatrix}e^{i\phi}\cos(\theta)&e^{i\psi}\sin(\theta)\\ -e^{-i\psi}\sin(\theta)&e^{-i\phi}\cos(\theta)\end{bmatrix} \tag{9}\]

Figure 6: Quantum circuit representation of QAmplifyNet, featuring two qubits labeled "0" and "1". The circuit comprises variational layers utilizing the SEL approach for two qubits, with a depth of one layer. The initial blue lines depict the embedding of features into the quantum state's amplitudes. Two \(R_{Y}\) gates (see Equation 6) introduce \(\frac{\pi}{2}\) rotations on both qubits. Subsequently, two U3 rotation gates involving \(R_{Z}\), \(R_{Y}\), and \(R_{Z}\) (see Equation 7) single-qubit rotations are optimized during training. Blue CNOT (see Equation 8) entangling gates connect qubits 0 and 1, reinforcing their entanglement in a circular topology. The measurement layer includes two \(Pauli-Z\) operators (graphics generated using PennyLane-Qiskit).

Due to the lack of support for the "reversible" differentiation method in SEL, PennyLane [3] automatically chooses the most suitable differentiation method available. The state of the two qubits can be measured using the \(Pauli-Z\) operator; upon measurement, the qubits collapse to a specific state. The matrix representation of the \(Pauli-Z\) operator is given in Equation (10):

\[\sigma_{z}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix} \tag{10}\]

The measurement of the first qubit's \(Pauli-Z\) operator is denoted as \(\langle\sigma_{z}^{0}\rangle\in[-1,+1]\).
This expectation value is subsequently utilized to determine the probabilities \(P_{notbackorder}\) and \(P_{backorder}\) of the "not backorder" and "backorder" states, respectively:

\[P_{notbackorder}=\frac{1}{2}(\langle\sigma_{z}^{0}\rangle+1) \tag{11}\]

\[P_{backorder}=\frac{1}{2}(1-\langle\sigma_{z}^{0}\rangle)=1-P_{notbackorder} \tag{12}\]

These quantum operations encode the input features into quantum states and perform quantum computations on them. The QNode calculates the \(Pauli-Z\) expectation value of each qubit of the quantum circuit, and its output then serves as input for the subsequent classical layer.

#### Combining Classical and Quantum Parts

The classical and quantum parts of the model are seamlessly integrated within the 'Sequential' framework of Keras. The classical layers process the data up to a certain point, and the output is then fed into the quantum layer (KerasLayer), which incorporates the QNode. The Adam optimizer is utilized to train the parameters of the model, namely the weights and biases. During training, the model is optimized with respect to the binary cross-entropy loss function, iteratively updating the parameters to improve the model's performance and accuracy. We also enabled the 'EarlyStopping' mechanism during training, ensuring that training stops when the monitored metric stops improving, which helps prevent overfitting and saves training time. Training took place in a Kaggle kernel environment equipped with 2 CPU cores, 13 GB of RAM, and 2 Nvidia Tesla T4 GPUs, each with 15 GB of memory. The model parameters were carefully selected through multiple trial runs to optimize accuracy. Training concluded after 18 epochs with a batch size of 5. The loss curve in Figure 8 indicates the model's ability to minimize errors during the training and validation phases; loss curves aid in evaluating the model's learning progress, generalization capability, and potential for effective predictions on new instances.

## Results

### Evaluation Metrics

To assess the effectiveness of the predictive models employed in this study, various performance metrics were utilized. In the context of SCBP, True Positive (TP) refers to the number of correctly classified backorder instances, while True Negative (TN) represents the number of correctly classified non-backorder instances. False Positive (FP) indicates the number of non-backorder instances mistakenly classified as backorders, and False Negative (FN) signifies the number of backorder instances misclassified as non-backorders. Both FP and FN are significant: a higher FN count means missed opportunities with potential customers, leading to increased opportunity costs, while a higher FP count leads to increased inventory holding costs and a greater risk of product obsolescence due to the long-term accumulation of unnecessary inventory. The definitions and equations of the performance metrics used to evaluate our models are as follows:

#### Accuracy

Accuracy evaluates the overall correctness of predictions by determining the ratio of correct predictions to the total number of predictions made.

\[\text{Accuracy}=\frac{TN+TP}{TN+TP+FN+FP} \tag{13}\]

#### Precision

Precision evaluates the accuracy of the positive predictions made by a model: it is the proportion of correctly identified positive instances out of all instances predicted as positive.
This metric helps assess the model's capability to minimize false positive predictions.

\[\text{Precision}=\frac{TP}{FP+TP} \tag{14}\]

#### Recall (True Positive Rate)

Recall quantifies the model's effectiveness in correctly identifying positive instances among all actual positive instances, providing insight into how well the model detects and captures the positive cases in the dataset.

\[\text{Recall}=\frac{TP}{TP+FN} \tag{15}\]

Figure 8: From left to right, the curves represent the ROC and loss evolution vs. epochs of the QAmplifyNet model. These curves provide insights into the model's ability to classify backorder instances accurately and its overall predictive performance as training progresses.

Figure 7: Confusion matrix and classification reports (from left to right) for the QAmplifyNet model.

#### F1-measure

The F1-measure combines precision and recall into a single value, giving equal importance to both. It serves as a balanced measure that is particularly beneficial when dealing with imbalanced datasets: by considering both the ability to correctly identify positive instances (precision) and to capture all positive instances (recall), it provides a comprehensive evaluation of a model's performance. This makes it a valuable metric in uneven class distribution scenarios like the SCBP dataset.

\[\text{F1-measure}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}} \tag{16}\]

#### Specificity (True Negative Rate)

Specificity quantifies the accuracy of a model in correctly identifying negative instances among all actual negative instances.

\[\text{Specificity}=\frac{TN}{TN+FP} \tag{17}\]

#### Gmean

The Gmean, given in Equation 18, aims to balance maximizing the true positive rate (TPR) and the true negative rate (TNR). It takes both rates into account while minimizing the adverse effects caused by imbalanced class distributions. It is crucial to acknowledge that the Gmean does not offer insight into the specific contribution made by each class towards the overall index; consequently, various combinations of TPR and TNR can result in identical Gmean values.

\[\text{Gmean}=\sqrt{\text{TPR}\times\text{TNR}} \tag{18}\]

#### IBA

The Index of Balanced Accuracy (IBA) estimates the performance of binary classifiers on imbalanced datasets using the following equation:

\[\text{IBA}=(1+\alpha\cdot\text{Dominance})\times(\text{Gmean})^{2} \tag{19}\]

Here, _Dominance_ refers to the difference between the TPR and TNR, which gauges the relationship between these two measures, and the weighting factor \(\alpha\) (0.1 by default in imblearn) dampens its influence. Substituting _Dominance_ and _Gmean_ into the equation shows how the IBA balances the trade-off between _Dominance_ and the _Gmean_.

#### AUC-ROC index

The AUC evaluates the overall performance of a model by considering its ability to differentiate between positive and negative instances at various classification thresholds; it is represented graphically by the ROC curve. The AUC serves as an indicator of the model's discriminative power and its capacity to classify different instances accurately.
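A sketch of computing these metrics with scikit-learn and the imbalanced-learn (imblearn) package follows; `model`, `X_test`, and `y_test` stand for the fitted classifier and the held-out split, and are assumptions about naming only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from imblearn.metrics import classification_report_imbalanced

proba = model.predict(X_test)        # softmax outputs, shape (n_samples, 2)
y_pred = np.argmax(proba, axis=1)    # hard class labels

print("Accuracy:", accuracy_score(y_test, y_pred))
print("ROC-AUC :", roc_auc_score(y_test, proba[:, 1]))
# Per-class precision, recall, specificity, F1, Gmean, and IBA:
print(classification_report_imbalanced(y_test, y_pred))
```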
These performance metrics are relevant to the SCBP problem as they provide insight into the model's accuracy, precision, recall, and ability to handle imbalanced datasets, helping assess its effectiveness in correctly identifying backorder and non-backorder instances.

## Results and Analysis

The results presented in Table 4 compare the performance of the different algorithms on the task at hand. Our proposed QAmplifyNet achieves the highest accuracy score of 90%, outperforming all the other models used in this study. Among the QNN models, MERA 4-layered, Classical NN+Encoder+QNN, and RY-CNOT 6-Layered exhibit respectable accuracy scores of 78%, 77%, and 75%, respectively. Nevertheless, when choosing an ML algorithm, it is vital not to rely on accuracy as the sole criterion: factors such as the algorithm's ability to generalize to unseen data, its interpretability in providing understandable insights, and its computational efficiency should also be taken into consideration.

In our scenario, we have two classes: '0' represents "Not Backorder" and '1' represents "Backorder." While different models demonstrate better performance in either precision or recall, it is essential to consider both measures through the F1-score. QAmplifyNet achieves the best macro-average F1-score of 84%, with 94% for predicting class 0 and 75% for predicting class 1. Given the imbalanced nature of the dataset, we employed the 'imblearn' (imbalanced-learn) package, which builds on scikit-learn, to gain insight into the specificity, Gmean, and IBA values. QAmplifyNet yields the highest Gmean (77%) and IBA scores (62% for class 0 and 57% for class 1), outperforming the other models. Furthermore, QAmplifyNet achieved an AUC-ROC value of 79.85%, indicating stronger discriminatory power than the other models; the AUC-ROC analysis allows us to assess the models' overall ability to rank instances correctly and provides insight into their predictive capabilities.

QAmplifyNet achieves the highest macro-average precision and recall scores of 94% and 80%, respectively. Regarding class 0, QAmplifyNet achieves a precision of 88% (see Figure 7), indicating that 88% of instances predicted as class 0 are correctly classified; the recall of 100% signifies that the model successfully identifies all true class 0 instances, and the specificity of 60% indicates that the model accurately identifies 60% of the true class 1 instances as class 1. Concerning class 1, the precision of 100% reveals that all instances predicted as class 1 are classified correctly; however, with a recall of 60%, the model identifies only 60% of the actual class 1 instances, while a specificity of 100% implies that all class 0 instances are accurately classified as class 0. QAmplifyNet also demonstrated significant outperformance with a macro-average specificity of 80%, indicating that it excelled in correctly identifying negative instances and surpassed the other models in distinguishing non-backorder cases accurately. Notably, QAmplifyNet consistently demonstrates superior performance across all evaluation metrics: accuracy, AUC-ROC, precision, recall, F1-score, specificity, Gmean, and IBA (macro-average 59.50%). In contrast, the other models exhibit inconsistent performance across some of the metrics.
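The percentage rates examined next are derived from each model's confusion matrix; a minimal sketch, reusing `y_test` and `y_pred` from above:

```python
from sklearn.metrics import confusion_matrix

# Binary case: .ravel() yields counts in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
total = tn + fp + fn + tp  # 267 test samples in this study
for name, count in [("TP", tp), ("TN", tn), ("FP", fp), ("FN", fn)]:
    print(f"{name} rate: {100 * count / total:.2f}%")
```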
The comparison of the confusion matrix components, namely TP, TN, FN, and FP, for the various models in SCBP is depicted in Figure 9.

Figure 9: Bar plot illustrating the TP, TN, FP, and FN rates of the various models used in this paper for SCBP. The values are derived from the confusion matrices of each model, offering valuable information regarding their ability to accurately classify instances as either positive or negative.

Upon analyzing the results, it becomes evident that QAmplifyNet outperforms the other models in terms of predictive performance. QAmplifyNet achieved a TP rate of 14.98% and a TN rate of 74.91%, demonstrating its ability to classify positive and negative instances accurately. Notably, it achieved a 0% FP rate, signifying a complete absence of incorrect predictions labeling non-backorder instances as backorders. Furthermore, QAmplifyNet exhibited a relatively low FN rate of 10.11%, implying a minimal number of missed positive predictions. In contrast, several models displayed higher FP rates, erroneously identifying actual non-backorders as backorders; similarly, other models demonstrated higher FN rates, misclassifying backorder cases. This achievement is particularly significant given the imbalanced nature of the SCBP problem. For instance, the MERA 4-Layered and RY-CNOT 6-Layered models achieved 0% FP rates, but at the expense of higher FN rates of 22.10% and 25.09%, respectively; additionally, their TP rates (3% and 0%) were lower than that of QAmplifyNet. Comparatively, the CML models exhibited significantly higher average FP rates (47.99%) and relatively lower average TN rates (26.92%). Similarly, the Stacked Ensemble models demonstrated FP rates of 53.18% and TN rates of 21.72%. Classical NN+Encoder+QNN had a 15.73% higher FP rate and a 15.73% lower TN rate compared to QAmplifyNet. It is worth noting that DDQN achieved a high TP rate of 24.34%, but at the cost of a substantial FP rate of 50.56%. Conversely, QAmplifyNet achieved a competitive TN rate while maintaining an FP rate of 0%, underscoring its robustness in minimizing false positive predictions. The comparison of QAmplifyNet with the other models highlights its superiority in achieving a balanced trade-off between the TP, TN, FP, and FN rates, resulting in more accurate SCBP with minimal FP and FN predictions. This substantiates QAmplifyNet's potential for enhancing the reliability and robustness of SCBP systems.

### XAI Interpretation using LIME and SHAP

To gain insight into the interpretability of QAmplifyNet, we applied two popular XAI techniques in the Python programming language: Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). By employing these methods, we were able to explain the model's predictions by identifying the specific contributions of individual features.

_LIME_

LIME is a local interpretability method that provides explanations for individual predictions by approximating the model's behavior around specific instances. By perturbing the input data and tracking how the hybrid model's predictions changed, we used LIME to provide local explanations of the model's behavior. This process allowed us to identify the most significant features and understand their influence on the model's decision-making. LIME achieves this by generating local surrogate models around a particular instance of interest.
These surrogate models are simpler and more interpretable, such as linear or decision tree models, and capture the local behavior of the complex model. LIME \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Model Category**} & \multirow{2}{*}{**Models**} & \multicolumn{6}{c|}{**Evaluation Metrics**} \\ \cline{3-10} & & & **Precision** & **Recall** & **F1-score** & **Specificity** & **Gmean** & **IBA** & **Accuracy** & **ROC-AUC** \\ \hline \multirow{8}{*}{CML Models} & 0 & 90\% & 33\% & 48\% & 90\% & 54\% & 27\% & 47\% & 73.83\% \\ \cline{2-10} & 1 & 31\% & 90\% & 46\% & 33\% & 54\% & 31\% & & \\ \cline{2-10} & DT & 0 & 88\% & 33\% & 47\% & 87\% & 53\% & 27\% & 46\% & 68.07\% \\ \cline{2-10} & 1 & 30\% & 87\% & 45\% & 33\% & 53\% & 30\% & & \\ \cline{2-10} & KNN & 0 & 81\% & 39\% & 53\% & 73\% & 53\% & 28\% & & \\ \cline{2-10} & 1 & 29\% & 73\% & 41\% & 39\% & 53\% & 29\% & & \\ \cline{2-10} & LGBM & 0 & 88\% & 33\% & 47\% & 87\% & 53\% & 27\% & 46\% & 75.23\% \\ \cline{2-10} & RF & 0 & 88\% & 33\% & 47\% & 87\% & 53\% & 27\% & 46\% & 73.96\% \\ \cline{2-10} & 1 & 30\% & 87\% & 45\% & 33\% & 53\% & 30\% & & \\ \cline{2-10} & SVM & 0 & 84\% & 39\% & 53\% & 78\% & 55\% & 29\% & 49\% & 63.74\% \\ \cline{2-10} & XGBoost & 1 & 30\% & 87\% & 43\% & 39\% & 55\% & 31\% & \\ \cline{2-10} & XGBoost & 0 & 88\% & 33\% & 47\% & 87\% & 53\% & 27\% & 46\% & 71.90\% \\ \cline{2-10} & 3 Dense Layered NN & 1 & 30\% & 87\% & 45\% & 33\% & 53\% & 30\% & & \\ \cline{2-10} & 3 Dense Layered NN & 0 & 87\% & 47\% & 61\% & 79\% & 61\% & 36\% & 55\% & 72.52\% \\ \cline{2-10} & 1 & 33\% & 79\% & 47\% & 47\% & 61\% & 38\% & & \\ \hline \multirow{4}{*}{Stacked Ensemble Models} & QSVM+LGBM+LR & 0 & 91\% & 29\% & 44\% & 91\% & 51\% & 25\% & 45\% & 70.00\% \\ \cline{2-10} & 1 & 30\% & 91\% & 45\% & 29\% & 51\% & 28\% & & \\ \cline{2-10} & VQC+QSVM & 0 & 94\% & 29\% & 44\% & 94\% & 52\% & 25\% & 45\% & 62.00\% \\ \cline{2-10} & VQC+LGBM & 1 & 31\% & 94\% & 46\% & 29\% & 52\% & 29\% & & \\ \cline{2-10} \cline{2-10} & \multirow{2}{*}{MERA 1-Layered} & 0 & 91\% & 29\% & 44\% & 91\% & 44\% & 25\% & & \\ \cline{2-10} & 1 & 30\% & 91\% & 45\% & 29\% & 45\% & 28\% & & \\ \hline \multirow{4}{*}{QNN Models} & MERA 1-Layered & 0 & 83\% & 47\% & 60\% & 72\% & 58\% & 33\% & \\ \cline{2-10} & 1 & 31\% & 72\% & 43\% & 47\% & 58\% & 35\% & & \\ \cline{2-10} & MERA 2-Layered & 0 & 83\% & 29\% & 43\% & 82\% & 49\% & 23\% & \\ \cline{2-10} & 1 & 28\% & 82\% & 42\% & 29\% & 49\% & 25\% & & \\ \cline{2-10} & 0 & 77\% & 100\% & 87\% & 12\% & 35\% & 13\% & & \\ \cline{2-10} & 1 & 100\% & 12\% & 21\% & 100\% & 35\% & 11\% & & \\ \cline{2-10} & RY-CNOT 6-Layered & 0 & 75\% & 100\% & 86\% & 0\% & 0\% & & \\ \cline{2-10} & 1 & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% & 0\% & \\ \cline{2-10} & \multirow{2}{*}{Classical NN+Encoder+QNN} & 0 & 80\% & 89\% & 84\% & 79\% & 75\% & \(-\) & \\ \cline{2-10} & 1 & 71\% & 53\% & 61\% & 79\% & 75\% & \(-\) & & \\ \hline Deep RL Model & DDQN & 0 & 88\% & 33\% & 47\% & 87\% & 53\% & 27\% & 46\% & 47.58\% \\ \cline{2-10} & 1 & 30\% & 87\% & 45\% & 33\% & 53\% & 30\% & & \\ \hline **Proposed** & **QAmplifyNet** & 0 & **88\%** & **100\%** & **94\%** & **60\%** & **77\%** & **62\%** & **90\%** & **79.85\%** \\ \hline \end{tabular} \end{table} Table 4: Performance comparisons of the models used in this study against QAmpliNet on short SCBP dataset examines the significance and influence of individual aspects on the model's decision-making process by perturbation of the input features and 
evaluating the ensuing changes in predictions. The LIME objective can be expressed as follows:

\[\gamma(x)=\arg\min_{h\in H}L(f,h,\pi_{x})+\lambda(h) \tag{20}\]

Here, the loss function \(L\) quantifies how well the interpretable model \(h\) approximates the original, sophisticated model \(f\). The family of interpretable models is denoted by \(H\), while \(\pi_{x}\) represents the closeness of the instances being evaluated to the specific instance \(x\). The term \(\lambda(h)\) is a complexity penalty on the interpretable model \(h\), discouraging overly complex explanations. LIME aims to find the interpretable model \(h\) that minimizes the loss function and adequately captures the behavior of the complex model \(f\) while accounting for locality.

LimeTabularExplainer is a specific implementation of LIME designed for tabular data. It leverages the idea of perturbation by generating perturbed instances around the instance of interest and constructs a local surrogate model by fitting a weighted linear model to these perturbed instances. The weights assigned to each perturbed instance reflect its similarity to the original instance, and the surrogate's predictions on these instances are used to approximate feature importance. Figure 10(a) shows the LIME-based feature importance bar plot, explaining a specific instance's prediction: the plot visualizes individual features' contributions towards classifying the instance into the "No Backorder" or "Backorder" category. Figure 10(b) depicts the LIME-generated explanation plot for another instance, showing the feature importances and their contributions to the prediction. The features 'PC1', 'PC2', 'PC3', and 'PC4' are considered, and the predicted probabilities are obtained using the model.

_SHAP_

SHAP is another popular XAI technique that provides global interpretability by attributing the model's predictions to individual features across the entire dataset. The contribution of each feature to the prediction was therefore computed and visualized using the SHAP Python library. Rooted in cooperative game theory, SHAP quantifies each feature's influence on a prediction through the Shapley value \(\phi_{j}(x)\), defined as follows:

\[\phi_{j}(x)=\sum_{s\subseteq\{x_{1},x_{2},\ldots,x_{m}\}\setminus\{x_{j}\}}\frac{|s|!\,(m-|s|-1)!}{m!}\times(\text{val}(s\cup\{x_{j}\})-\text{val}(s)) \tag{21}\]

In Equation 21, \(\phi_{j}(x)\) represents the Shapley value for the feature \(x_{j}\), where \(x_{j}\) denotes a specific feature value. A feature subset of the model is denoted by \(s\), and \(m\) represents the total number of features in the model. The term \(\text{val}(s)\) is the value function, i.e., the model's prediction using only the feature values in the set \(s\). The equation calculates the Shapley value by summing over all possible subsets \(s\), considering their cardinality and the difference between the valuation of the subset including \(x_{j}\) and the valuation of the subset excluding \(x_{j}\).
The factorials and the division by \(m!\) account for the different permutations in which each subset can occur. We applied SHAP to our hybrid model to understand the importance and influence of the different features in determining the model's output. SHAP values explain how the model's expected or base output, denoted by \(E[f(x)]\), transitions to the actual output \(f(x)\) once the specific feature values \(x\) are known. These values quantify each feature's contribution to the prediction and indicate the pattern of connection between the features and the target variable \(y\). When the SHAP value of a feature is close to -1 or +1, that feature substantially impacts the prediction for that data point; a SHAP value close to 0 indicates that the feature is less important in making the prediction. Figure 11(a) displays the impact of each PC on the prediction for the specific test instance, and Figure 11(b) shows how each feature contributes to shifting the model's output from the expected value.

## Discussion

SCM is a complex and critical process that relies heavily on accurate prediction of backorders to optimize inventory control, reduce costs, and ensure a positive customer experience. This research introduced a groundbreaking hybrid Q-CNN model called QAmplifyNet for SCBP, which integrates quantum-inspired techniques into the conventional ML framework. This discussion aims to comprehensively analyze the proposed model's benefits, limitations, practical implications, and potential applications.

The integration of quantum-inspired techniques in the proposed model offers several advantages over classical and hybrid models. Firstly, the utilization of quantum-inspired algorithms enables the model to grasp intricate data patterns and interdependencies, which is crucial for accurate SCBP; the parallelism inherent in QC allows for more efficient solution space exploration, leading to improved prediction accuracy. QAmplifyNet also benefits from the flexibility and interpretability of the Keras NN framework: combining quantum-inspired optimization algorithms with Keras's well-established architecture enhances the model's overall performance and interpretability. Furthermore, our proposed model demonstrates robustness in handling the short, imbalanced datasets commonly encountered in SCM: by employing a combination of preprocessing techniques, undersampling, and principal component analysis, the model effectively addresses the challenges posed by limited data availability and class imbalance.

While QAmplifyNet offers numerous advantages, it is important to acknowledge its limitations. One potential limitation is the computational complexity associated with quantum-inspired techniques. QC is still in its nascent stages, and current hardware limitations, such as noise and limited qubit connectivity, can hinder the scalability and practical implementation of quantum algorithms; the proposed model may therefore face challenges when scaling up to larger datasets or real-time applications. Additionally, the training and optimization of quantum-inspired models require specialized knowledge and expertise: the integration of quantum and classical components adds complexity, requiring researchers and practitioners to have a strong understanding of both QC principles and traditional ML techniques.

Accurate SCBP has significant practical implications for various aspects of SCM.
By leveraging the proposed model, organizations can optimize inventory control, reduce backorders, and enhance customer satisfaction. The ability to predict backorders enables proactive management of inventory levels, minimizing stockouts and ensuring the availability of products to meet customer demands. This, in turn, leads to improved customer loyalty and increased revenue opportunities.

Figure 10: The figure comprises two subfigures illustrating the LIME explanations for different instances in the classification task. (a) displays a bar plot depicting the feature importance for a specific instance, while (b) exhibits the LIME-generated explanation plot for another instance. Both (a) and (b) highlight the contributions of the features 'PC1', 'PC2', 'PC3', and 'PC4' towards the predictions, providing insights into the classification process of the model.

The accurate prediction of backorders also allows for more efficient resource allocation. Organizations can optimize their production schedules, procurement processes, and transportation logistics based on predicted demand, leading to cost savings and improved operational efficiency. Additionally, accurate SCBP facilitates better supplier communication and coordination, ensuring timely replenishment and minimizing delays. Our proposed model can be seamlessly integrated into real-world SCM systems: organizations can enhance their decision-making processes and automate SCBP by incorporating QAmplifyNet into existing inventory management software. This integration enables real-time monitoring of inventory levels, proactive order fulfillment, and efficient allocation of resources. The model can also be used to identify potential bottlenecks or vulnerabilities in the supply chain, allowing organizations to implement preventive measures and improve overall supply chain resilience. QAmplifyNet also has potential for broader applications beyond SCM: its hybrid nature makes it adaptable to other supervised binary classification tasks, such as credit card default prediction or fraud detection, where imbalanced datasets and limited feature sets are common challenges.

The comparative analysis sheds light on the strengths and weaknesses of each model, which has direct implications for SCBP. QAmplifyNet emerges as the top-performing model, consistently demonstrating strong performance across multiple evaluation metrics, including accuracy, F1-score, specificity, Gmean, IBA, and AUC-ROC. Its ability to achieve high accuracy and F1-scores indicates its effectiveness in correctly predicting positive and negative instances, which is crucial for efficient SCM. The superior performance of QAmplifyNet across these metrics implies that it can effectively minimize false positives and false negatives, addressing the challenge of imbalanced data in SCBP. This is particularly noteworthy given the significant impact of FPs and FNs on inventory management and customer satisfaction: by quickly and correctly detecting instances at risk of backorders, businesses may improve customer satisfaction, reduce the likelihood of disruptions, and maximize inventory efficiency. However, it is essential to consider practical implications beyond model performance metrics: factors such as generalizability, interpretability, and computational efficiency are critical for real-world implementation. QAmplifyNet exhibits strong generalization capabilities, as evidenced by its robust performance on the validation dataset.
Its incorporation of amplification techniques ensures scalability and computational efficiency, enabling timely predictions for large-scale supply chain operations. Interpretability is also a crucial factor in supply chain decision-making. While QAmplifyNet performs exceptionally well in terms of accuracy and other metrics, its black-box nature may limit the understanding of how and why specific predictions are made. To address this, we presented the interpretability of QAmplifyNet using SHAP and LIME.

Figure 11: (a) SHAP force plot on the selected instance using the KernelExplainer and (b) SHAP summary plot showing the features' contributions to the misclassifications in the QAmplifyNet model.

## Conclusion and Future Work

In this research, our primary contribution lies in the development of QAmplifyNet, a novel hybrid Q-CNN model designed explicitly for backorder prediction in the supply chain domain. By harnessing the power of quantum-inspired techniques within the well-established Keras framework, we aimed to significantly enhance the accuracy of backorder prediction. Furthermore, we proposed a comprehensive methodological framework encompassing data source identification, data collection, data splitting, data preprocessing, and the implementation of the QAmplifyNet model. To ensure the optimal performance of our model, we explored seven different preprocessing alternatives and evaluated their effectiveness by assessing the performance of LR on each preprocessed dataset; this rigorous evaluation allowed us to select the most suitable preprocessing technique for our application. Through extensive experiments on a short SCBP dataset, we compared the performance of QAmplifyNet with eight traditional CML models, three classically stacked quantum ensemble models, five QNN models, and one deep RL model. Our findings clearly demonstrate the exceptional backorder prediction accuracy achieved by QAmplifyNet, which surpassed all other models with a 90% accuracy rate. Notably, QAmplifyNet also achieved the highest F1-scores, 94% for predicting "Not Backorder" and 75% for predicting "Backorder," outperforming all other models, and exhibited the highest AUC-ROC score of 79.85%, further validating its superior predictive capabilities. By seamlessly integrating quantum-inspired techniques into our model, we successfully captured complex patterns and dependencies within the data, leading to significant improvements in prediction accuracy.

The significance of the proposed model lies in its ability to optimize inventory control, reduce backorders, and enhance overall SCM. Accurate SCBP enables proactive decision-making, efficient resource allocation, and improved customer satisfaction. By integrating QAmplifyNet into real-world supply chain systems, organizations can achieve cost savings, increased revenue opportunities, and improved operational efficiency. By implementing XAI techniques, specifically SHAP and LIME, we successfully enhanced the interpretability of the proposed model. These XAI techniques greatly aided understanding of the model's decision-making process, shedding light on the significance and contribution of different features in predicting backorders. By leveraging SHAP and LIME, we gained a deeper understanding of how the model arrived at its predictions and identified the key factors influencing those predictions.
There are several promising avenues for future work in this field. Firstly, further improvements can be made to the proposed model by exploring additional quantum-inspired techniques and algorithms. As the field of QC continues to advance, more efficient quantum hardware and algorithms are expected to become available, which could enhance the performance and scalability of the model. Expanding the dataset used for training and evaluation could further improve the model's accuracy and generalizability; incorporating a more extensive and diversified dataset might strengthen the model's ability to capture a wider variety of patterns and trends. Furthermore, the potential for applying the proposed model in other domains of SCM, such as demand forecasting or inventory optimization, warrants exploration. The versatility of QML models opens up opportunities for their application in various aspects of supply chain operations. In addition to introducing a novel strategy for SCBP using the hybrid Q-CNN, this study is notable for being the first application of QML in the field of SCM. The results underscore the value of quantum-inspired methods for enhancing prediction accuracy and optimizing SCM. Future research has the potential to reshape the field of SCM and stimulate breakthroughs in QML models by continuing to improve the model, expanding the dataset, and exploring other quantum-inspired approaches.

## Data availability

This research used the scikit-learn package for CML trials [36]: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/). The readily accessible SCBP dataset is titled _"Can you predict product backorder?"_.
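For reference, the evaluation metrics reported in the conclusion can be reproduced with the scikit-learn and imbalanced-learn packages; a hedged sketch, where the arrays `y_true`, `y_pred`, and the positive-class scores `y_score` are assumed:

```python
# Hedged sketch of the reported metrics; y_true, y_pred, y_score are assumed.
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score
from imblearn.metrics import geometric_mean_score

accuracy = accuracy_score(y_true, y_pred)
f1_backorder = f1_score(y_true, y_pred, pos_label=1)      # "Backorder" class
specificity = recall_score(y_true, y_pred, pos_label=0)   # true negative rate
gmean = geometric_mean_score(y_true, y_pred)
auc_roc = roc_auc_score(y_true, y_score)
```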
2308.00211
High-fidelity achromatic metalens imaging via deep neural network
Meta-optics are attracting intensive interest as alternatives to traditional optical systems comprising multiple lenses and diffractive elements. Among applications, single metalens imaging is highly attractive due to the potential for achieving significant size reduction and simplified design. However, single metalenses exhibit severe chromatic aberration arising from material dispersion and the nature of singlet optics, making them unsuitable for full-color imaging requiring achromatic performance. In this work, we propose and validate a deep learning-based single metalens imaging system to overcome chromatic aberration in varied scenarios. The developed deep learning networks computationally reconstruct raw imaging captures through reliably refocusing red, green and blue channels to eliminate chromatic aberration and enhance resolution without altering the metalens hardware. The networks demonstrate consistent enhancement across different aperture sizes and focusing distances. Images outside the training set and real-world photos were also successfully reconstructed. Our approach provides a new means to achieve achromatic metalenses without complex engineering, enabling practical and simplified implementation to overcome inherent limitations of meta-optics.
Yunxi Dong, Bowen Zheng, Hang Li, Hong Tang, Yi Huang, Sensong An, Hualiang Zhang
2023-08-01T01:04:46Z
http://arxiv.org/abs/2308.00211v1
# High-fidelity achromatic metalens imaging via deep neural network ###### Abstract Meta-optics are attracting intensive interest as alternatives to traditional optical systems comprising multiple lenses and diffractive elements. Among applications, single metalens imaging is highly attractive due to the potential for achieving significant size reduction and simplified design. However, single metalenses exhibit severe chromatic aberration arising from material dispersion and the nature of singlet optics, making them unsuitable for full-color imaging requiring achromatic performance. In this work, we propose and validate a deep learning-based single metalens imaging system to overcome chromatic aberration in varied scenarios. The developed deep learning networks computationally reconstruct raw imaging captures through reliably refocusing red, green and blue channels to eliminate chromatic aberration and enhance resolution without altering the metalens hardware. The networks demonstrate consistent enhancement across different aperture sizes and focusing distances. Images outside the training set and real-world photos were also successfully reconstructed. Our approach provides a new means to achieve achromatic metalenses without complex engineering, enabling practical and simplified implementation to overcome inherent limitations of meta-optics. Meta-optics, Metalens, Deep Learning, Neural Networks, Imaging, 3D Printing. ## 1 Introduction The advancement of modern camera systems has led to multi-element lens configurations to minimize optical aberrations and achieve high-resolution imaging. However, these systems sacrifice compactness. Metasurfaces, the two-dimensional metamaterial analog of optical components, provide transformative opportunities to realize high-performance optics within substantially reduced volumes. Here, we utilize metalenses - metasurface lenses with carefully engineered nanoscale scattering elements that impart precise phase profiles - to demonstrate imaging capabilities analogous to conventional refractive optics. Notably, metalenses overcome the challenge of spherical aberration that has persisted in refractive optics. By imparting precise phase delays with subwavelength spatial resolution, they facilitate diffraction-limited focusing absent from traditional refractive optical systems due to the spherical shape of traditional lenses. Additionally, the capability to readily adapt the phase profile through computational nanophotonic design of the meta-atoms grants flexibility and customizability surpassing conventional optics. However, a pivotal roadblock for the wide deployment of meta-optics is chromatic aberration. Due to significant material dispersion and dispersive responses of metasurfaces, different spectral components passing through metalenses will focus on disparate spatial planes, negatively impacting image quality. Existing strategies to mitigate chromatic aberration include cascaded multi-layer metalenses[1, 2, 3, 4], interleaving meta-atoms for different wavelengths[5, 6, 7], metalens arrays[8], dispersion correction phase mask[9, 10, 11, 12], increased focusing depth[13] and computational optimization and correction of phase profiles[14, 15]. But these approaches increase system complexity while sacrificing other performance metrics such as scalable high-yield fabrication, imaging quality and freedom of material choices. 
Consequently, a single metalens solution capable of full-color aberration-free imaging under diverse operating conditions remains elusive. In this paper, we successfully demonstrate correction of chromatic aberration to achieve an achromatic metalens camera through integration of a custom-designed metalens with a commercial imaging sensor, coupled with deep learning algorithms. Our deep learning-based computational imaging approach refocuses and restores missing information of the raw captured images for RGB channels, effectively converting a single chromatic metalens camera into an achromatic imaging system. With this strategy, light (i.e. broadband optical signals) can be manipulated within substantially thinner flat optical components compared to the state-of-the-art while still maintaining full-color and aberration-free operation. To collect multi-spectral training data, we employ a 3D-printed adapter for the integration of the metalens onto a commercial camera. As for the computational imaging backend, a universal deep neural network architecture built on U-Net is used to achieve direct chromatic aberration correction. By training with raw images under varying conditions, the model reliably enhances image quality, removes chromatic aberration, and effectively reconstructs photos both within and outside the training dataset. The trained model can also be used for enhancing real-world captures, which further demonstrates its capability to replace complex lens assemblies for high-quality full-color imaging.

## 2 Results

### Imaging system workflow, DL model and experimental setups

An achromatic single metalens imaging system presents considerable difficulty due to the requisite restoration of all color channels lacking ideal imaging responses. To address this issue, we integrate deep learning networks as the computational backend to directly enhance the chromatic responses of the raw image captures. A highly automated workflow for collecting and pre-processing raw images was developed to enable the proposed deep learning approach as depicted in Fig. 1. Specifically, Fig. 1a shows the optical path with the metalens directly mounted on a camera, where \(d\) denotes the object distance and \(A\) denotes the aperture diameter. The aperture is placed in front of the metalens L, and its diameter is equal to or smaller than the metalens to block light outside the metalens area. Fig. 1b illustrates the assembled metalens with a 3D-printed mount on a commercial camera (Sony Alpha a7R IV).

Figure 1: **Overview of metalens imaging and reconstruction.** (a) Optical path in metalens camera system, with light passing through aperture, then metalens which directly focuses light onto CMOS image sensor. (b) Photograph of fabricated metalens mounted on a commercial camera. (c) Example source image displayed on monitor. (d) Schematic of image reconstruction workflow, including preprocessing of raw images and deep learning model for reconstruction. (e) Sample images from training and testing datasets used for deep learning model. (f) Additional validation image samples showing different objects and color representations. (g) Photograph of fabricated metasurface lens. (h) Scanning electron microscope (SEM) images showing nanostructured meta-atoms comprising metalens. (i) Cropped regions of raw red, green, and blue color channel subimages directly captured by metalens camera. (j) Reconstructed red, green, and blue channel subimages after processing through the proposed deep learning network.

Changing the object distance \(d\) and aperture diameter \(A\) alters the working conditions of the metalens, making it suitable for a variety of applications. One of our goals is to devise universal deep learning models as illustrated in Fig. 1d to accommodate different combinations of \(d\) and \(A\). In this work, we utilized monitors of various sizes to display images (as depicted in Fig. 1c), as well as accommodating different object distances (\(d\)). To conform to the image circle of the metalens, the images' aspect ratios were intentionally set to 1, with all remaining monitor pixels set to black. The resulting captured image, positioned at the sensor's center as shown in the left corner of Fig. 1d, was cropped to eliminate black pixels and used as input for the developed deep learning network. Inspired by the successful application of image super-resolution networks, we developed a U-Net-structured deep learning model. This state-of-the-art architecture, widely applied in image processing tasks[16, 17], features skip connections that bridge contracting and expanding paths, enabling the capture of both global and local contexts. Our model enhances the original U-Net architecture by incorporating multiple skip and residual connections between layers, capturing multi-scale contexts and providing nuanced features as shown in Fig. 1d. Inter-skip connections link the encoder and decoder blocks within the U-Net model, while intra-skip connections, exclusive to the decoder blocks, link different layers within them, and conventional skip connections denote the original connections within the U-Net model[18]. The structure of the encoder and decoder blocks, comprising several convolutional and upsampling layers, is detailed in the supplementary material. To train the deep learning models, we utilized the Taskonomy indoor scene dataset[19], examples of which are shown in Fig. 1e. This dataset contains 1024 \(\times\) 1024-pixel images from various buildings, providing diversity in environments and objects under consistent lighting. The resolution was compatible with our 1920 \(\times\) 1200-pixel monitors used for data collection, as illustrated in Fig. 1c. For each combination of object distance \(d\) and aperture diameter \(A\), we selected 1000 images from the dataset to display on the monitors and capture with our metalens camera system. Of the 1000 raw images, 800 were used for training and 200 for validation for each setting (i.e., each \(d\) and \(A\) combination). After convergence of the network training process, we applied an additional validation set with completely different objects, colors, and lighting conditions to assess the performance of the trained network, as shown in Fig. 1f. The results were consistent across both the training/testing sets and the additional validation set. One example of these results is shown in Fig. 1i and Fig. 1j, which features raw captures using a 4 mm aperture diameter to capture a scene at a 50 cm distance, and its reconstructed counterpart using the trained network. The raw images predominantly contain sharp image components in the green channel, with the red and blue channels significantly out of focus, aligning with our assumptions for the employed metalens. Remarkably (as shown in Fig.
1j), processing the raw captures through the deep learning network yielded a reconstructed image with clear images across RGB channels, indicating the network's ability to eliminate chromatic aberration by refocusing the image in all three channels. Each channel benefits from an improvement in sharpness, and achromatic full-color imaging is achieved by combining all three channels. For further performance analysis details, please refer to the supplementary material. Our proposed deep learning engine for computational achromatic metalens imaging presents several notable advantages. Firstly, the incorporation of deep learning renders further metalens design for chromatic aberration correction unnecessary (which leads to simplified metalens implementation and reduced cost). Secondly, it eliminates the requirement for supplementary devices or steps from the initial photo capture to the final image reconstruction. Lastly, this method can be readily implemented on any commercial or scientific optical systems. To the best of our knowledge, the proposed image reconstruction network represents the first successful application of a deep learning tool for addressing aberrations in chromatic metalens imaging captured directly from a commercial camera.

### Designed metalens and integration

In this work, a hyperbolic phase profile[20] was employed for the metalens, which offers several advantages. Firstly, the hyperbolic phase profile works well with the external aperture, as the entire lens area is designed to focus to the geometric center point. This allows for the inclusion of an external aperture without altering the focal length or compromising the imaging uniformity. Additionally, misalignment between the aperture and the metalens does not impact the imaging performance, as long as the transparent part of the substrate is fully blocked. Notably, the aperture size plays a crucial role as it impacts both the imaging resolution and chromatic aberration. Smaller apertures improve resolution and reduce chromatic aberration image-wide, yet larger apertures enable greater light transmission beneficial for low-light conditions. Our approach provides the flexibility to incorporate various sizes of external apertures using different 3D-printed holders, eliminating the need to fabricate metalenses of different sizes. Furthermore, it opens up the possibility of integrating a mechanical leaf aperture, similar to those found in traditional lenses. Our metalens was designed and fabricated on a 10 mm by 10 mm Silicon-on-Sapphire wafer with 230 nm Silicon thickness. It has a 5 mm diameter with a 7 mm focal length, and the meta-atoms were optimized for operation at a wavelength of 526 nm. More information about the metalens can be found in the supplementary material. Meanwhile, the hyperbolic phase profile has notable drawbacks, including compromised peripheral image quality stemming from unoptimized edges. This is manifested as reduced sharpness and increased chromatic aberration towards the image boundaries. For example, it is observed that edge trapezoids in Fig. 2a appear less defined compared with the center. Rainbow effects under white light further underscore greater chromatic aberration at the periphery. Additional limitations of hyperbolic lenses arise from variable depth-of-field and lateral chromatic aberration across different focal planes and object distances.
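For reference, the hyperbolic profile discussed above has a standard closed form; the short numpy sketch below evaluates it with the quoted design values (5 mm diameter, 7 mm focal length, 526 nm design wavelength). This is an illustration of the textbook hyperbolic lens equation, not the authors' meta-atom design code.

```python
# Hedged sketch: radial hyperbolic phase profile of the metalens.
import numpy as np

wavelength = 526e-9   # design wavelength (m), from the text
focal_len = 7e-3      # focal length (m)
diameter = 5e-3       # metalens diameter (m)

r = np.linspace(0.0, diameter / 2, 2001)  # radial coordinate across the lens
# Every radius is delayed so that all rays converge on the same focal point.
phi = -(2 * np.pi / wavelength) * (np.sqrt(r**2 + focal_len**2) - focal_len)
phi_wrapped = np.mod(phi, 2 * np.pi)      # phase each meta-atom must impart
```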
These variations lead to captured images exhibiting differently sized in-focus areas and distinct chromatic aberration patterns depending on distance, as evidenced by the Modulation Transfer Function (MTF) results in Fig. 2(b-c)[21]. Fig. 2(b) and 2(c) display four combinations of 10 cm (representing close focusing) and 50 cm focusing distances (simulating focusing to infinity) with 1 mm and 4 mm aperture diameters (f-numbers of 7 and 1.75). Fig. 2(b) shows center and edge MTF curves derived from the denoted trapezoids in Fig. 2(a) under white backlight on monitors. Contrastingly, Fig. 2(c) exhibits the photo's green channel with green backlight. Regardless of conditions, a notable MTF difference exists between the center and edges, with higher center values.

Figure 2: **Metalens performance characterization.** (a) Test charts imaged under white and green illumination. (b-c) Modulation transfer functions (MTFs) for the center and edge regions of images captured under white and green light. Four combinations of object distance and aperture diameter were tested for each condition.

Overall, it is clear that increasing the aperture diameter decreases MTF values significantly, especially at the edges; thus, a universal deep learning network should be trained on various aperture sizes to handle these dramatic differences. Chromatic aberration reduces MTF values, as is evident from comparing the center of smaller apertures to the edge of larger ones between Fig. 2(b) and 2(c). The marked MTF difference between the green channel and white light data suggests chromatic aberration primarily causes decreased image quality in these scenarios. Conversely, for instances involving the center of larger apertures and the edge of smaller ones, the difference becomes less pronounced, with both white and green MTF values displaying similar trends. This suggests that specific image reconstruction algorithms should be applied for enhancing small and large aperture cases, given the unique sources of image blurriness in each. Although the MTF curves do not exhibit significant differences across varying object distances, the patterns of color fringing do display noticeable variations. Fig. 2(d) presents two images, cropped from the center of photos captured at 10 cm and 50 cm distances. No apparent differences in sharpness exist between these two images, yet the color fringing patterns around white edges differ significantly. The 10 cm photo exhibits blue-to-cyan and yellow-to-orange color fringing transitions at the far and near ends of the faucet, respectively. However, a reverse fringing transition pattern is observed when images were captured at 50 cm (it exhibits orange-to-yellow and cyan-to-blue transitions instead). These distinct color fringing patterns underscore the influence of object distance on the effects of chromatic aberration in captured images. The MTF curves and color fringing analyses reveal key insights into factors impacting image quality in meta-optics systems. Specifically, aperture size and object distance significantly influence aberrations and resolution. Therefore, to comprehensively improve photo quality, the effects of varying aperture diameter and shooting distance must be considered in tandem. In our work, we have focused on studying these two factors and their interactions to determine optimal deep learning model architectures, proper experimental setups, and training strategies that enhance image quality across diverse operating conditions.
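As background on how MTF curves such as those in Fig. 2 are typically estimated, the sketch below implements the standard edge-based procedure (edge spread function, differentiated into a line spread function, then Fourier magnitude); the `edge_profile` array and pixel pitch are assumptions, not the authors' measurement pipeline.

```python
# Hedged sketch: estimate an MTF curve from a measured edge profile.
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_mm):
    """Edge spread function -> line spread function -> normalized MTF."""
    lsf = np.diff(np.asarray(edge_profile, dtype=float))  # differentiate ESF
    lsf /= lsf.sum()                                      # so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles per mm
    return freqs, mtf
```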
In general, we could train universal deep learning models applicable to a wide range of potential use cases rather than being constrained to a narrow set of parameters.

### Imaging results

Figure 3: **Experimental setup and image reconstruction examples.** (a) Top view of the photo capturing setup. (b) 3D-printed metalens holder with adjustable aperture. (c) Reconstructed images from testing data (first column) and validation set (second column). (d) Real-world photos captured through the metalens with different aperture sizes and reconstructed by the proposed network.

The setup for raw photo capturing is depicted in Fig. 3(a). It consists of two monitors of varying sizes fixed on an optical table, facing towards a commercial camera integrated with a single metalens. The larger monitor (24" HP LP2475W) and the smaller one (5.2" Atomos Ninja V) are located 50 cm and 10 cm away from the camera, respectively. The monitors were employed individually during the experiment. The camera, set on a post, was adjusted to be parallel with the monitor and captured images using the center of the CMOS sensor, with the raw image displayed in Fig. 1d. A 3D-printed holder, depicted in Fig. 3(b), was designed to attach the metalens to the camera. This holder is composed of two parts: the upper component is threaded into a C-Mount adapter attached to the camera, while the lower section holds the 10 mm by 10 mm metalens sample. These two parts are threaded together, with the aperture located on the lower component to facilitate the interchange of varying aperture sizes. Upon assembly, the metalens is pushed into place by the upper component, reducing any undesired gap between the lens and the holder. Based on the previously described setup, raw images were captured and used to train the proposed deep learning models. The model's performance, as demonstrated in Fig. 3(c), was assessed with test images from both the training and validation sets. The ground truth image, randomly chosen from the Taskonomy dataset, the raw image directly sourced from the camera, and the reconstructed image from the output of the deep learning model are all presented in Fig. 3(c). To quantitatively measure the enhancement in image quality from raw to reconstructed images, we utilized two primary metrics: the peak signal-to-noise ratio (PSNR)[22] and the structural similarity index measure (SSIM)[23]. Higher PSNR values typically indicate reduced noise and improved image detail fidelity, while SSIM measures the structural similarity between two images. By comparing the PSNR and SSIM values of both the raw and reconstructed images to the ground truth image, we were able to quantify image quality improvements enabled by the proposed technique. The PSNR and SSIM values, calculated for the respective raw and reconstructed images, are displayed in Fig. 3(c) (shown at bottom right). It is obvious that our deep learning models effectively mitigated chromatic aberrations and increased image sharpness by refocusing all color channels. The model's reconstruction process successfully restored accurate color representations and considerably enhanced overall image contrast, yielding a gain of over 10 dB in PSNR and a 35% increase in SSIM values for the training set images. The computations revealed notable enhancements in image quality through our reconstruction method compared to the raw images. To fully validate our model's versatility, we conducted extensive testing on entirely new types of images beyond the indoor training data. As shown in Fig.
1f, we utilized an additional validation set of diverse scenes with various objects, lighting conditions, and tones. Without any further training or parameter tuning, our pre-trained model successfully reconstructed these never-before-seen images. As evidenced in Fig. 3(c) (right column), our network reliably restored color and focus for these general validation images. Quantitatively, the developed deep learning network enhanced image quality by over 9 dB in peak signal-to-noise ratio and approximately 36% in structural similarity index. These impressive gains aligned with those observed on the indoor training images, conclusively demonstrating the model's robustness and applicability to real-world scenes. Lastly, we applied the model to reconstruct real-world scenes taken both indoors and outdoors. Unlike controlled scenes using monitors, real-world objects feature a significantly larger depth-of-field and varied lighting conditions, making reconstruction remarkably more challenging. Despite these complexities, the network consistently performed well. The raw and reconstructed images are shown in Fig. 3(d), featuring one indoor and two outdoor photos. In the raw images, reduced dynamic range (evidenced by hazing) and chromatic aberration are noticeable, which diminish image quality and make distinguishing objects and characters difficult, especially in ample light conditions and near high-contrast areas. It is noted that, after the deep learning model reconstructed the images, the image quality improved significantly under all conditions, regardless of ambient lighting or shooting distances. These results further validate our model's universality and adaptability beyond its training set.

## 3 Discussion

As demonstrated in previous sections, the proposed deep learning approach successfully restored full-color images from raw captures of the single metalens camera. Both quantitative metrics and visual interpretation confirmed significant enhancement of image quality compared to the unprocessed raw images exhibiting chromatic aberration. This represents a major advancement for single metalens imaging systems, which have faced persistent challenges in achieving achromatic performance. Inherent material dispersion limits metalens bandwidth when relying solely on optical and metasurface design innovations. Despite efforts exploring multi-layer systems, new materials, and hybrid meta-refractive concepts, realizing wide-band achromatic responses from a single nanostructured meta-optics device has remained elusive. Our proposed deep learning-based computational imaging engine provides a transformative solution to overcoming these physical constraints. By applying specialized deep learning models directly to raw captured images, we accomplish full-color aberration-free imaging without requiring complex metalens/metasurface engineering, reducing design and fabrication difficulties, improving tolerance, and enabling faster turnaround. To further validate performance and gain additional insights, we conducted detailed studies on the reconstructed images. Fig. 4(a) shows further analysis and comparisons across setups. A high dynamic range image from the training set was chosen given the challenge of preserving both dark and bright details using metalenses; this is evident in the raw photos, where even a small aperture leads to hazing and chromatic aberration, and is also observable in the zoomed-in views of details at the center and edge of each image, placed at the bottom of each photo.
Enlarging the aperture rapidly worsens image quality (e.g., making the shoes at the edge of the photos barely distinguishable). The central image quality exceeds the edges but still lacks details in dark regions. Increasing object distance also degrades edge image quality, as at a constant angular resolution and chromatic aberration ratio, greater distances lead to reduced resolution and increased chromatic aberration per pixel. Notably, our deep learning models can handle the varying challenges across different setups and consistently produce promising results. As shown in Fig. 4(a), the reconstructed images on the right of each setup remove strong color fringing and accurately restore dark region details without losing the bright details. The reconstructed images' strong similarities across different setups indicate the developed deep learning networks are universal and insensitive to aperture size, lighting conditions, and object distance.

Figure 4: **Image reconstruction for different scenarios and algorithms.** (a) Raw and reconstructed images for all experimental combinations, with enlarged details from image centers and edges below. (b) Ground truth, raw single metalens image, and reconstructions from the proposed network and other existing networks. Only the proposed network successfully reconstructs the single metalens image.

To demonstrate the uniqueness of our reconstruction approach, it is necessary to show that existing general-purpose computational imaging networks fail to effectively reconstruct images from our metalens. As shown in Fig. 4(b), we benchmarked leading super-resolution and enhancement models by upsampling our raw images and then downsampling the outputs to 512 \(\times\) 512 pixels for comparison[24, 25, 26]. To validate generalization ability, the test image was chosen from the validation set that the network was never trained on, and the nearest neighbor image scaling method (labeled as Nearest in Table 1) was used as a control to validate the up- and down-sampling process. It is clear that none of the existing networks can remove the chromatic aberration of metalenses. While some enhanced local details, they failed to improve global color and focus. Quantitative PSNR and SSIM analyses were conducted, and the results are listed in Table 1. It is concluded that our specialized deep learning network significantly outperformed these existing methods designed for generic imagery. This confirms the necessity of tailoring the model to the unique artifacts and distortions in raw metalens images. The customized network architecture and training process are essential to learn the intricacies of correcting meta-optics aberrations computationally.
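To make the network described in Section 2 concrete, a minimal PyTorch sketch of a U-Net-style reconstruction model with encoder-decoder (inter-) skip connections and a global residual path is shown below. It is a simplified illustration: the paper's model additionally uses intra-skip connections inside the decoder, and all layer sizes here are placeholders.

```python
# Hedged sketch of a U-Net-style RGB reconstruction network (PyTorch).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class TinyUNet(nn.Module):
    def __init__(self, ch=(32, 64, 128)):
        super().__init__()
        self.enc1, self.enc2 = ConvBlock(3, ch[0]), ConvBlock(ch[0], ch[1])
        self.mid = ConvBlock(ch[1], ch[2])
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.dec2 = ConvBlock(2 * ch[1], ch[1])
        self.up1 = nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)
        self.dec1 = ConvBlock(2 * ch[0], ch[0])
        self.out = nn.Conv2d(ch[0], 3, 1)

    def forward(self, x):                      # x: (N, 3, H, W), H, W % 4 == 0
        e1 = self.enc1(x)                      # skip source at full resolution
        e2 = self.enc2(self.pool(e1))          # skip source at half resolution
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))   # inter-skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # inter-skip
        return x + self.out(d1)                # global residual connection

y = TinyUNet()(torch.randn(1, 3, 512, 512))    # smoke test: (1, 3, 512, 512)
```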
2304.13812
Guaranteed Quantization Error Computation for Neural Network Model Compression
Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems. The guaranteed output error computation problem for neural network compression with quantization is addressed in this paper. A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks. Then, optimization-based methods and reachability analysis methods are applied to the merged neural network to compute the guaranteed quantization error. Finally, a numerical example is proposed to validate the applicability and effectiveness of the proposed approach.
Wesley Cooke, Zihao Mo, Weiming Xiang
2023-04-26T20:21:54Z
http://arxiv.org/abs/2304.13812v1
# Guaranteed Quantization Error Computation for Neural Network Model Compression ###### Abstract Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems. The guaranteed output error computation problem for neural network compression with quantization is addressed in this paper. A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks. Then, optimization-based methods and reachability analysis methods are applied to the merged neural network to compute the guaranteed quantization error. Finally, a numerical example is proposed to validate the applicability and effectiveness of the proposed approach. model compression, neural networks, quantization + Footnote †: This research was supported by the National Science Foundation, under NSF CAREER Award 2143351, and NSF CNS Award no. 2223035.

## I Introduction

Neural networks have been demonstrated to be powerful and effective tools for solving complex problems such as image processing [1], high-performance adaptive control [2], etc. Due to the increasing complexity of the problems in various applications, the scale and complexity of neural networks also grow exponentially to meet the desired accuracy and performance. Recent progress in machine learning, such as training and using a new generation of large neural networks, heavily depends on the availability of exceptionally large computational resources; e.g., the Transformer model with neural architecture search proposed in [3], if trained from scratch for each case, requires 274,120 hours of training on 8 NVIDIA P100 GPUs [4]. Additionally, even for already trained neural networks, the verification process is quite time- and resource-consuming; e.g., some simple properties of the ACAS Xu neural network with a simple 5-layer structure proposed in [5] need more than 100 hours to be verified. To avoid unaffordable computation when using neural networks, a variety of neural network acceleration and compression methods have been proposed, such as neural network pruning and quantization, which can significantly reduce the size and memory footprint of neural networks as well as speed up model inference. Quantization as a reduction method is mainly concerned with the amount of memory utilized for the learnable parameters of a neural network. The weights and biases of a typical neural network are usually stored as 32-bit floating-point values, which entail millions of floating-point operations during inference. Quantization aims to shrink the memory footprint of deep neural networks by reducing the number of bits used to store the values of the learnable parameters and activations. This is not only ideal for application scenarios where memory resources are restricted, such as embedded systems or microcontroller environments, but the reduced weight representation can also facilitate faster inference using cheaper arithmetic operations [6]. With the reduction in parameter bit precision, however, a quantized neural network will typically perform worse in terms of accuracy than its non-quantized counterpart trained using gradient-based learning methods. These drops in accuracy are usually considered minimal and worth the benefit in memory reduction and inference speed-up.
There exists much literature describing various techniques of quantization and successful results thereof, including works utilizing stochastic rounding to select weight values beneficial to gradient training [7] and applications on modern deep architectures [8], as well as quantization methods that reduce the number of multiplication operations required during training time [9]. Significant research has also been done on quantization-aware training methods [10], where the loss in accuracy due to bit precision reduction is minimized. Some of these quantization-aware training methods utilize a straight-through gradient estimator (STE) [11] to more appropriately select weights during network training that minimizes accuracy and further reduces computational burden [12]. As quantization methods are used for neural network reduction, there inevitably exist discrepancies between the performances of original and compressed neural networks. In this work, we propose a computationally tractable approach to compute the guaranteed output error caused by quantization. A merged neural network is constructed to generate the output differences between two neural networks, and then reachability analysis on the merged neural network can be performed to obtain the guaranteed error. The remainder of the paper is organized as follows: Preliminaries are given in Section II. The main results on quantization error computation are presented in Section III. A numerical example is given in Section IV. The conclusion is presented in Section V. ## II Preliminaries In this work, we consider a class of fully-connected feedforward neural networks which can be described by the following recursive equations \[\begin{cases}\mathbf{u}_{0}=\mathbf{u}\\ \mathbf{u}_{\ell}=\phi_{\ell}(\mathbf{W}_{\ell}\mathbf{u}_{\ell-1}+\mathbf{b}_{ \ell}),\ \ell=1,\ldots,L\\ \mathbf{y}=\mathbf{u}_{L}\end{cases} \tag{1}\] where \(\mathbf{u}_{0}=\mathbf{u}\in\mathbb{R}^{n_{u}}\) is the input vector of the neural network, \(\mathbf{y}=\mathbf{u}_{L}\in\mathbb{R}^{n_{y}}\) is the output vector of the neural network, \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathbf{b}_{\ell}\in\mathbb{R}^{n_{\ell}}\) are weight matrices and bias vectors for the \(\ell\)-th layer, respectively. \(\phi_{\ell}=[\psi_{\ell},\cdots,\psi_{\ell}]\) is the concatenation of activation functions of the \(\ell\)-th layer in which \(\psi_{\ell}:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function, e.g., such as logistic, tanh, ReLU, sigmoid functions. In addition, the input-output mapping of the above neural network \(\Phi:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is denoted in the form of \[\mathbf{y}=\Phi(\mathbf{u}) \tag{2}\] where \(\mathbf{u}\in\mathbb{R}^{n_{u}}\) and \(\mathbf{y}\in\mathbb{R}^{n_{y}}\) are the input and output of the neural network, respectively. A common quantization procedure \(\mathsf{Q}(\cdot)\) to map a floating point value \(r\) to an integer can be formulated as what follows \[\mathsf{Q}(r)=\mathsf{int}(r/S)-Z \tag{3}\] where \(S\) is a floating point value as a scaling factor, and \(Z\) is an integer value that represents \(0\) in the quantization policy which could be \(0\) or other values. The \(\mathsf{int}:\mathbb{R}\rightarrow\mathbb{Z}\) is the function rounding the floating point value of an integer. To reduce the size and complexity of the neural network, the quantization procedure is implemented on the neural network parameters, i.e., weights and biases. 
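For illustration, the quantization map (3) can be sketched in a few lines of numpy; since the paper leaves the rounding mode of int(·) unspecified, round-to-nearest is assumed here.

```python
# Hedged sketch of Eq. (3): Q(r) = int(r / S) - Z, applied elementwise.
import numpy as np

def quantize(r, S, Z):
    # Round-to-nearest is assumed for int(.); floor is another common choice.
    return np.rint(np.asarray(r) / S).astype(np.int64) - Z

def dequantize(q, S, Z):
    # Approximate reconstruction of the floating-point values.
    return S * (np.asarray(q, dtype=float) + Z)
```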
The quantized version of the neural network (1) is in the form of \[\begin{cases}\mathbf{u}_{0}=\mathbf{u}\\ \mathbf{u}_{\ell}=\phi_{\ell}(\mathsf{Q}(\mathbf{W}_{\ell})\mathbf{u}_{\ell-1 }+\mathsf{Q}(\mathbf{b}_{\ell})),\ \ell=1,\ldots,L\\ \mathbf{y}=\mathbf{u}_{L}\end{cases} \tag{4}\] where \(\mathsf{Q}(\mathbf{W}_{\ell})\in\mathbb{Z}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathsf{Q}(\mathbf{b}_{\ell})\in\mathbb{Z}^{n_{\ell}}\) are the quantized weight matrices and bias vectors for the \(\ell\)-th layer under the quantization process \(\mathsf{Q}(\cdot)\). Furthermore, the quantized version of neural network \(\Phi:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is expressed as \(\Phi_{\mathsf{Q}}:\mathbb{Z}^{n_{u}}\rightarrow\mathbb{Z}^{n_{y}}\) in the form of \[\mathbf{y}=\Phi_{\mathsf{Q}}(\mathbf{u}). \tag{5}\] The quantization can significantly reduce the size and computational complexity of a neural network, e.g., mapping the 32-bit floating point representation to an 8-bit integer representation, leading to smaller models that can fit in hardware with high computational efficiency. However, the price to pay is the loss of performance and precision post-quantization. To formally characterize the performance loss caused by quantization, a reasonable expectation is to compute the quantization error between the neural network and its quantized version. **Definition 1**: _Given a tuple \(\mathbb{M}\triangleq\langle\Phi,\mathsf{Q},\mathcal{U}\rangle\) where \(\Phi\) is a neural network defined by (1), \(\mathsf{Q}\) is the quantization process of (3) producing quantized neural network \(\Phi_{\mathsf{Q}}\), and \(\mathcal{U}\in\mathbb{R}^{n_{u}}\) is a compact input set, the guaranteed quantization error is defined by_ \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\|\Phi(\mathbf{u})-\Phi_{ \mathsf{Q}}(\mathbf{u})\| \tag{6}\] _where \(\Phi_{\mathsf{Q}}\) is the quantized neural network of \(\Phi\)._ **Remark 1**: _The assumption that the input set \(\mathcal{U}\) is a compact set is reasonable since neural networks are rarely applied to raw data sets. Instead, standardization and rescaling techniques such as normalization are used which ensure the inputs are always within a compact set such as \([0,1]\) or \([-1,1]\). Given the compact input set \(\mathcal{U}\) which contains all possible input to the neural network, the guaranteed quantization error \(\rho(\mathbb{M})\) characterizes the upper bound for the difference between the outputs of neural network \(\Phi\) and its quantized version \(\Phi_{\mathsf{Q}}\) generated from the same inputs in set \(\mathcal{U}\), which quantifies the discrepancy caused by the quantization process \(\mathsf{Q}\) in terms of outputs._ ## III Quantization Error Computation To address the quantization error computation problem, the key is to estimate a \(\gamma>0\) such that \(\rho(\mathbb{M})\leq\gamma\). Due to the complexity of the neural network, it is challenging to estimate the \(\gamma\) directly from the discrepancy of \(\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\). Other than directly analyzing the discrepancy of two neural networks, we proposed to construct a new fully-connected neural network \(\tilde{\Phi}\) merged from \(\Phi\) and \(\Phi_{\mathsf{Q}}\) which is expected to produce the discrepancy of the outputs of two neural networks, i.e., \(\tilde{\Phi}(\mathbf{u})=\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\), and then search for the upper bound of the outputs of the merged neural network. 
Given \(L\)-layer neural network \(\Phi\) and quantized \(\Phi_{\mathsf{Q}}\), the merged neural network \(\tilde{\Phi}:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is constructed with \(L+1\) layers as follows: \[\begin{cases}\tilde{\mathbf{u}}_{0}=\mathbf{u}\\ \tilde{\mathbf{u}}_{\ell}=\tilde{\phi}_{\ell}(\tilde{\mathbf{W}}_{\ell}\tilde{ \mathbf{u}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell}),\ \ell=1,\ldots,L+1\\ \tilde{\mathbf{y}}=\tilde{\mathbf{u}}_{L+1}\end{cases} \tag{7}\] where \[\tilde{\mathbf{W}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{W}_{1}\\ \mathsf{Q}(\mathbf{W}_{1})\end{bmatrix},&\ell=1\\ \begin{bmatrix}\mathbf{W}_{\ell}&\mathbf{0}_{n_{\ell}\times n_{\ell-1}}\\ \mathbf{0}_{n_{\ell-1}\times n_{\ell}}&\mathsf{Q}(\mathbf{W}_{\ell})\end{bmatrix},&1<\ell\leq L\\ \begin{bmatrix}\mathbf{I}_{n_{y}}&\mathsf{-I}_{n_{y}}\end{bmatrix},&\ell=L+1\\ \end{cases} \tag{8}\] \[\tilde{\mathbf{b}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{b}_{\ell}\\ \mathsf{Q}(\mathbf{b}_{\ell})\\ \end{bmatrix},&1\leq\ell\leq L\\ \begin{bmatrix}\mathbf{0}_{2n_{y}\times 1}\end{bmatrix},&\ell=L+1\end{cases} \tag{9}\] \[\tilde{\phi}_{\ell}(\cdot)=\begin{cases}\phi_{\ell}(\cdot),&1\leq\ell\leq L\\ \mathsf{L}(\cdot),&\ell=L+1\end{cases} \tag{10}\] where \(\mathsf{L}(\cdot)\) is linear transfer function, i.e., \(x=\mathsf{L}(x)\). **Theorem 1**: _Given a tuple \(\mathbb{M}\triangleq\langle\Phi,\mathsf{Q},\mathcal{U}\rangle\) where \(\Phi\) is a neural network defined by (1), \(\mathsf{Q}\) is the quantization process of (3), and \(\mathcal{U}\in\mathbb{R}^{n_{u}}\) is a compact input set, the guaranteed quantization error \(\rho(\mathbb{M})\) can be computed by_ \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\tilde{\Phi}(\mathbf{u})\right\| \tag{11}\] _where \(\tilde{\Phi}\) is a fully-connected neural network defined in (7)._ _Proof_. First, let us consider \(\ell=1\). Given an input \(\tilde{\mathbf{u}}_{0}=\mathbf{u}\in\mathbb{R}^{n_{u}}\), one can obtain that \[\tilde{\mathbf{u}}_{1}=\tilde{\phi}_{1}(\tilde{\mathbf{W}}_{1}\tilde{\mathbf{u }}_{0}+\tilde{\mathbf{b}}_{1})=\begin{bmatrix}\phi_{1}(\mathbf{W}_{1}\tilde{ \mathbf{u}}_{0}+\mathbf{b}_{1})\\ \phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b} _{1}))\end{bmatrix}. \tag{12}\] Then, we consider \(1<\ell\leq L\). Starting from \(\ell=2\), we have \[\tilde{\mathbf{W}}_{2}\tilde{\mathbf{u}}_{1} =\begin{bmatrix}\mathbf{W}_{2}&\mathbf{0}_{n_{2}\times n_{1}}\\ \mathbf{0}_{n_{2}\times n_{1}}&\mathsf{Q}(\mathbf{W}_{2})\end{bmatrix} \begin{bmatrix}\phi_{1}(\mathbf{W}_{1}\tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})\\ \phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b }_{1}))\end{bmatrix}\] \[=\begin{bmatrix}\mathbf{W}_{2}\phi_{1}(\mathbf{W}_{1}\tilde{ \mathbf{u}}_{0}+\mathbf{b}_{1})\\ \mathsf{Q}(\mathbf{W}_{2})\phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}} _{0}+\mathsf{Q}(\mathbf{b}_{1}))\end{bmatrix}\] . 
Furthermore, it leads to \[\tilde{\mathbf{u}}_{2} =\tilde{\phi}_{2}(\tilde{\mathbf{W}}_{2}\tilde{\mathbf{u}}_{1}+ \tilde{\mathbf{b}}_{2})\] \[=\begin{bmatrix}\phi_{2}(\mathsf{W}_{2}\phi_{1}(\mathbf{W}_{1} \tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})+\mathbf{b}_{2})\\ \phi_{2}(\mathsf{Q}(\mathbf{W}_{2})\phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{ \mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b}_{1}))+\mathbf{b}_{2})\end{bmatrix}.\] Iterating the above process from \(\ell=2\) to \(\ell=L\), the following recursive equation can be derived \[\tilde{\mathbf{u}}_{\ell}=\tilde{\phi}_{\ell}(\tilde{\mathbf{W}}_{\ell} \tilde{\mathbf{u}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell})=\begin{bmatrix}\phi_{ \ell}(\mathbf{W}_{\ell}\tilde{\mathbf{u}}_{\ell-1}+\mathbf{b}_{\ell})\\ \phi_{\ell}(\mathsf{Q}(\mathbf{W}_{\ell})\tilde{\mathbf{u}}_{\ell-1}+\mathsf{Q }(\mathbf{b}_{\ell}))\end{bmatrix}\] where \(\ell=2,\ldots,L\). Together with (12) when \(\ell=1\), it yields that \[\tilde{\mathbf{u}}_{L}=\begin{bmatrix}\Phi(\mathbf{u})\\ \Phi_{\mathsf{Q}}(\mathbf{u})\end{bmatrix}. \tag{13}\] Furthermore, when considering the last layer \(\ell=L+1\), the following result can be obtained \[\tilde{\mathbf{u}}_{L+1}=\mathsf{L}\left(\begin{bmatrix}\mathbf{I}_{n_{y}}&- \mathbf{I}_{n_{y}}\end{bmatrix}\begin{bmatrix}\Phi(\mathbf{u})\\ \Phi_{\mathsf{Q}}(\mathbf{u})\end{bmatrix}\right)=\Phi(\mathbf{u})-\Phi_{ \mathsf{Q}}(\mathbf{u}) \tag{14}\] where means \(\tilde{\Phi}(\mathbf{u})=\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\). Based on the definition of guaranteed quantization error \(\rho(\mathbb{M})\), i.e., Definition 1, we can conclude that \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\Phi(\mathbf{u})-\Phi_{ \mathsf{Q}}(\mathbf{u})\right\|=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\tilde{ \Phi}(\mathbf{u})\right\|. \tag{15}\] The proof is complete. \(\square\) **Remark 2**: _Theorem 1 implies that we can analyze the merged neural network \(\tilde{\Phi}\) to compute the quantization error between neural network \(\Phi\) and its quantized version \(\Phi_{\mathsf{Q}}\). This result facilitates the computation process by employing those analyzing tools, such as optimization and reachability analysis tools, for merged neural network \(\tilde{\Phi}\)._ * _Using the interval arithmetic for neural network, we can employ Moore-Skelboe Algorithm_ _[_13_]_ _to search upper bound of_ \(||\tilde{\Phi}(\mathbf{u})||\) _subject to_ \(\mathbf{u}\in\mathcal{U}\) _where_ \(\mathcal{U}\) _is a compact set. The key to implement Moore-Skelboe Algorithm is to construct the interval extension of_ \([\tilde{\Phi}]:\mathbbm{R}^{n_{u}}\rightarrow\mathbbm{R}^{n_{y}}\)_. 
First, from Theorem 1 in_ _[_14_]_ _under the assumption that activation functions are monotonically increasing, the interval extension of merged neural network_ \([\tilde{\Phi}]\) _can be constructed as_ \[[\tilde{\Phi}]=[\tilde{\Phi}^{-},\tilde{\Phi}^{+}]\] (16) _where_ \(\tilde{\Phi}^{-}\) _and_ \(\tilde{\Phi}^{+}\) _are left (limit inferior) and right (limit superior) bounds of interval_ \([\tilde{\Phi}]\) _that are defined as follows_ \[\tilde{\Phi}^{-}:\begin{cases}\tilde{\mathbf{u}}_{0}^{-}&=\mathbf{u}^{-}\\ \tilde{\mathbf{u}}_{\ell}^{-}&=\tilde{\phi}_{\ell}\left(\begin{bmatrix}\tilde{ \mathbf{W}}_{\ell}^{-}&\tilde{\mathbf{W}}_{\ell}^{+}\end{bmatrix}\begin{bmatrix} \tilde{\mathbf{u}}_{\ell-1}^{+}\\ \tilde{\mathbf{u}}_{\ell-1}^{-}\end{bmatrix}+\tilde{\mathbf{b}}_{\ell}\right)\\ \tilde{\mathbf{y}}^{-}&=\tilde{\mathbf{u}}_{L+1}^{-}\end{cases}\] \[\tilde{\Phi}^{+}:\begin{cases}\tilde{\mathbf{u}}_{0}^{+}&=\mathbf{u}^{+} \\ \mathbf{u}_{\ell}^{+}&=\tilde{\phi}_{\ell}\left(\begin{bmatrix}\tilde{\mathbf{W}}_{ \ell}^{-}&\tilde{\mathbf{W}}_{\ell}^{+}\end{bmatrix}\begin{bmatrix}\tilde{ \mathbf{u}}_{\ell-1}^{-}\\ \tilde{\mathbf{u}}_{\ell-1}^{+}\end{bmatrix}+\tilde{\mathbf{b}}_{\ell}\right)\\ \tilde{\mathbf{y}}^{+}&=\tilde{\mathbf{u}}_{L+1}^{+}\end{cases}\] _in which_ \(\mathcal{U}\subseteq[\mathbf{u}]=[\mathbf{u}^{-},\mathbf{u}^{+}]\)_, and_ (17) _with_ \(w_{\ell}^{i,j}\)_,_ \(\underline{w}_{\ell}^{i,j}\)_, and_ \(\overline{w}_{\ell}^{i,j}\) _being the elements in_ \(i\)_-th row and_ \(j\)_-th column of matrix_ \(\mathbf{W}_{\ell}\)_,_ \(\mathbf{W}_{\ell}^{-}\)_, and_ \(\mathbf{W}_{\ell}^{+}\)_. With the above tractable calculation of_ \(\tilde{\Phi}^{-}\) _and_ \(\tilde{\Phi}^{+}\)_, we can perform Moore-Skelboe Algorithm to compute guaranteed quantization error_ \(\rho(\mathbb{M})\)_._ * _Under the framework of reachability analysis of neural networks, the guaranteed quantization error computation problem can be turned into a reachable set computation problem for merged neural network_ \(\tilde{\Phi}\)_. Given the input set_ \(\mathcal{U}\)_, the following set_ \[\mathcal{Y}=\left\{\tilde{\mathbf{y}}\in\mathbb{R}^{n_{y}}\mid\tilde{\mathbf{y}}= \tilde{\Phi}(\mathbf{u}),\ \mathbf{u}\in\mathcal{U}\right\}\] (19) _is called the output set of neural network (_1_). The guaranteed quantization error_ \(\rho(\mathbb{M})\) _can be obtained by_ \[\rho(\mathbb{M})=\max\{\mathbf{y}\mid\tilde{\mathbf{y}}\in\mathcal{Y}\}.\] (20) _The key step is the computation for the reachable set \(\mathcal{Y}\). This can be efficiently done through neural network reachability analysis. There exist a number of verification tools for neural networks available for the reachable set computation. The neural network reachability analysis tool can produce the reachable set \(\mathcal{Y}\) in the form of a union of polyhedral sets such as NNV [15], veritex [16], etc. The IGNNV tool computes the reachable set \(\mathcal{Y}\) as a union of interval sets [17, 14]. With the reachable set \(\mathcal{Y}\), the guaranteed quantization error \(\rho(\mathbb{M})\) can be easily obtained by searching for the maximal value of \(\|\tilde{\mathbf{y}}\|\) in \(\mathcal{Y}\), e.g., testing throughout a finite number of vertices in the interval or polyhedral sets. ## IV Numerical Example To verify the effectiveness of the quantization error computation, a numerical example is used. First, a large neural network, \(\Phi\), is generated such that it has a 1-D input layer, three hidden layers with 50 neurons in each layer, and a 1-D output layer. 
Each layer uses a ReLU activation function except for the output layer, which is linear. The weights and biases were randomly initialized, drawn from a normal distribution with zero mean and unit standard deviation. After generating \(\Phi\), a quantization method was applied, which reduces the size of the weights and biases at the cost of a slight reduction in accuracy. This quantized network is called \(\Phi_{\text{Q}}\). While there exist several quantization methods and tools to quantize networks, a basic technique that truncates the weights and biases to 4 decimal places is used in this numerical example. Next, a merged network \(\tilde{\Phi}\) was constructed from \(\Phi\) and \(\Phi_{\text{Q}}\) according to (7). The veritex neural network reachability tool [16] computed the reachable output set of \(\tilde{\Phi}\) given the input interval normalized as \([0,1]\). Finally, the quantization error was obtained using (20) as \(\rho(\mathbb{M})=0.5008\). Using this error, lower and upper bounds can be constructed as \(\Phi(u)\pm\rho(\mathbb{M})\), as shown in Fig. 1. Please note that this quantization error is very small compared with the range of outputs. Thus, to provide more detail, the figure has been zoomed into an appropriate scale. Moreover, the memory sizes of the models are shown in Table I.

## V Conclusions

This paper addressed the guaranteed output error computation problem for neural network compression with quantization. Based on the original neural network and its compressed version resulting from quantization, a merged neural network computation framework is developed, which can utilize optimization-based methods and reachability analysis methods to compute the guaranteed quantization error. Finally, a numerical example was presented to validate the applicability and effectiveness of the proposed approach. Future work will extend the approach to more complex and varied neural network architectures such as convolutional neural networks.
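To make the construction tangible, the hedged numpy sketch below builds the merged network of Eqs. (7)-(10) from a network and its 4-decimal-truncated copy (mirroring the numerical example) and propagates an input interval through it. This is a plain interval-arithmetic illustration in the spirit of Remark 2, not the veritex-based computation used in the paper.

```python
# Hedged sketch: merged network (7)-(10) plus simple interval propagation.
import numpy as np

def truncate(x, decimals=4):
    """The basic quantization of the example: truncate to 4 decimal places."""
    return np.trunc(x * 10**decimals) / 10**decimals

def merged_layers(Ws, bs):
    """Stack a network and its quantized copy into one merged network."""
    Wq = [truncate(W) for W in Ws]
    bq = [truncate(b) for b in bs]
    layers = [(np.vstack([Ws[0], Wq[0]]), np.concatenate([bs[0], bq[0]]))]
    for W, b, Wq_l, bq_l in zip(Ws[1:], bs[1:], Wq[1:], bq[1:]):
        top = np.hstack([W, np.zeros_like(W)])
        bot = np.hstack([np.zeros_like(Wq_l), Wq_l])
        layers.append((np.vstack([top, bot]), np.concatenate([b, bq_l])))
    n_y = Ws[-1].shape[0]                      # last layer outputs y - y_quant
    layers.append((np.hstack([np.eye(n_y), -np.eye(n_y)]), np.zeros(n_y)))
    return layers

def interval_bound(layers, lo, up):
    """Propagate an input box; returns an over-approximation of the output."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, up = Wp @ lo + Wn @ up + b, Wp @ up + Wn @ lo + b
        if i < len(layers) - 2:  # ReLU on hidden layers; last two are linear
            lo, up = np.maximum(lo, 0.0), np.maximum(up, 0.0)
    return lo, up

# Usage on a random 1-50-50-50-1 network over the normalized input box [0, 1]:
rng = np.random.default_rng(0)
dims = [1, 50, 50, 50, 1]
Ws = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
bs = [rng.standard_normal(m) for m in dims[1:]]
lo, up = interval_bound(merged_layers(Ws, bs), np.zeros(1), np.ones(1))
print(max(abs(lo[0]), abs(up[0])))  # a conservative upper bound on rho(M)
```

Note that plain interval propagation over-approximates the reachable set, so the printed value upper-bounds the guaranteed quantization error rather than matching the exact reachability result reported above.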
2310.00729
Spectral Neural Networks: Approximation Theory and Optimization Landscape
There is a large variety of machine learning methodologies that are based on the extraction of spectral geometric information from data. However, the implementations of many of these methods often depend on traditional eigensolvers, which present limitations when applied in practical online big data scenarios. To address some of these challenges, researchers have proposed different strategies for training neural networks as alternatives to traditional eigensolvers, with one such approach known as Spectral Neural Network (SNN). In this paper, we investigate key theoretical aspects of SNN. First, we present quantitative insights into the tradeoff between the number of neurons and the amount of spectral geometric information a neural network learns. Second, we initiate a theoretical exploration of the optimization landscape of SNN's objective to shed light on the training dynamics of SNN. Unlike typical studies of convergence to global solutions of NN training dynamics, SNN presents an additional complexity due to its non-convex ambient loss function.
Chenghui Li, Rishi Sonthalia, Nicolas Garcia Trillos
2023-10-01T17:03:47Z
http://arxiv.org/abs/2310.00729v1
# Spectral neural networks: approximation theory and optimization landscape ###### Abstract. There is a large variety of machine learning methodologies that are based on the extraction of spectral geometric information from data. However, the implementations of many of these methods often depend on traditional eigensolvers, which present limitations when applied in practical online big data scenarios. To address some of these challenges, researchers have proposed different strategies for training neural networks as alternatives to traditional eigensolvers, with one such approach known as Spectral Neural Network (SNN). In this paper, we investigate key theoretical aspects of SNN. First, we present quantitative insights into the tradeoff between the number of neurons and the amount of spectral geometric information a neural network learns. Second, we initiate a theoretical exploration of the optimization landscape of SNN's objective to shed light on the training dynamics of SNN. Unlike typical studies of convergence to global solutions of NN training dynamics, SNN presents an additional complexity due to its non-convex ambient loss function. **Acknowledgements:** The authors would like to thank Abiy Tasissa and Yuetian Luo for enlightening discussions on the topics covered in this paper. This material is based upon work supported by the National Science Foundation under Grant Number DMS 1641020 and was started during the summer of 2022 when the authors participated in the AMS-MRC program: _Data Science at the Crossroads of Analysis, Geometry, and Topology._ NGT was supported by the NSF grants DMS-2005797 and DMS-2236447. CL and NGT would like to thank the IPDS at UW-Madison and NSF through TRIPODS grant 2023239 for their support.
## 1. Introduction

Spectral Neural Networks (SNN) are trained by minimizing the spectral contrastive loss \[\min_{\theta}\ell(\mathbf{Y}_{\theta}),\qquad\ell(\mathbf{Y})\stackrel{{\mathrm{def}}}{{=}}\frac{1}{2}\left\|\mathbf{Y}\mathbf{Y}^{\top}-\mathcal{A}_{\mathbf{n}}\right\|_{\mathrm{F}}^{2}, \tag{1.1}\] where \(\mathbf{Y}_{\theta}\in\mathbb{R}^{n\times r}\) collects the outputs of a neural network \(f_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{r}\) over the data set \(\mathcal{X}_{n}=\{x_{1},\ldots,x_{n}\}\subset\mathbb{R}^{d}\), and \(\mathcal{A}_{\mathbf{n}}\) is an adjacency matrix built from the data. We investigate three questions: **Q1**, how many neurons a neural network needs in order to capture the spectral geometric information contained in \(\mathcal{A}_{\mathbf{n}}\); **Q2**, whether such an approximation can be constructed by solving problem 1.1; and **Q3**, how hard it is to actually solve problem 1.1.

To define \(\mathcal{A}_{\mathbf{n}}\), we first build from \(\mathcal{X}_{n}\) a proximity graph \(\mathbf{G}^{\varepsilon}\) with edge weights \[\mathbf{G}^{\varepsilon}_{ij}\stackrel{{\mathrm{def}}}{{=}}\eta\left(\frac{\|x_{i}-x_{j}\|}{\varepsilon}\right), \tag{1.2}\] where \(\|x-y\|\) denotes the Euclidean distance between \(x\) and \(y\), \(\varepsilon\) is a proximity parameter, and \(\eta\) is a decreasing, non-negative function. In short, \(\mathbf{G}^{\varepsilon}\) measures the similarity between points according to their proximity. From \(\mathbf{G}^{\varepsilon}\) we define the adjacency matrix \(\mathcal{A}_{\mathbf{n}}\) appearing in Equation 1.1 by \[\mathcal{A}_{\mathbf{n}}\stackrel{{\mathrm{def}}}{{=}}\mathbf{D}_{\mathbf{G}}^{-\frac{1}{2}}\mathbf{G}\mathbf{D}_{\mathbf{G}}^{-\frac{1}{2}}+a\mathbf{I}, \tag{1.3}\] where \(\mathbf{D}_{\mathbf{G}}\) is the degree matrix associated to \(\mathbf{G}\) and \(a>1\) is a fixed quantity. Here we distance ourselves slightly from the choice made in the original SNN paper HaoChen et al. (2021), where \(\mathcal{A}_{\mathbf{n}}\) is taken to be \(\mathbf{G}\) itself, and instead consider a normalized version. This is due to the following key properties satisfied by our choice of \(\mathcal{A}_{\mathbf{n}}\) (see also Remark D.1 in Appendix D) that make it more suitable for theoretical analysis.

**Proposition 1**.: _The matrix \(\mathcal{A}_{\mathbf{n}}\) defined in Equation 1.3 satisfies the following properties:_ 1. \(\mathcal{A}_{\mathbf{n}}\) _is symmetric positive definite._ 2. \(\mathcal{A}_{\mathbf{n}}\)_'s_ \(r\) _top eigenvectors (the ones corresponding to the_ \(r\) _largest eigenvalues) coincide with the eigenvectors of the_ \(r\) _smallest eigenvalues of the symmetric normalized graph Laplacian matrix (see Von Luxburg (2007)):_ (1.4) \[\Delta_{n}\stackrel{{\mathrm{def}}}{{=}}\mathbf{I}-\mathbf{D}_{\mathbf{G}}^{-1/2}\mathbf{G}\mathbf{D}_{\mathbf{G}}^{-1/2}.\]

The above two properties, proved in Appendix D, are useful when combined with recent results on the regularity of graph Laplacian eigenvectors over proximity graphs Calder et al. (2022) (see Appendix E.1) and some results on the approximation of Lipschitz functions on manifolds using neural networks Chen et al. (2022) (see Appendix E.2). In particular, we answer question Q1, which belongs to the realm of approximation theory, by providing a concrete bound on the number of neurons in a multi-layer ReLU NN that are necessary to approximate the \(r\) smallest eigenvectors of the normalized graph Laplacian matrix \(\Delta_{n}\) (as defined in 1.4) and thus also the \(r\) largest eigenvectors of \(\mathcal{A}_{\mathbf{n}}\); this is the content of Theorem 2.1.

While our answer to question Q1 addresses the existence of a neural network approximating the spectrum of \(\mathcal{A}_{\mathbf{n}}\), it does not provide a _constructive_ way to find one such approximation. We thus address question Q2 and prove that an approximating NN can be constructed by solving the optimization problem 1.1, i.e., by finding a global minimizer of SNN's objective function. A precise statement can be found in Theorem 2.2.
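As a concrete illustration of the constructions in Equations 1.2 and 1.3 and of Proposition 1, the following minimal NumPy sketch builds \(\mathcal{A}_{\mathbf{n}}\) for points sampled on \(S^{2}\) and numerically checks both properties. The Gaussian kernel for \(\eta\), the values \(\varepsilon=0.5\) and \(a=2\), and the helper names are illustrative choices, not taken from the paper.

```python
import numpy as np

def proximity_graph(X, eps, eta=lambda t: np.exp(-t ** 2)):
    # G^eps_ij = eta(||x_i - x_j|| / eps), with a zeroed diagonal (Equation 1.2)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = eta(dists / eps)
    np.fill_diagonal(G, 0.0)
    return G

def normalized_adjacency(G, a=2.0):
    # A_n = D^{-1/2} G D^{-1/2} + a I                          (Equation 1.3)
    d_inv_sqrt = 1.0 / np.sqrt(G.sum(axis=1))
    return d_inv_sqrt[:, None] * G * d_inv_sqrt[None, :] + a * np.eye(len(G))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # 200 points on the sphere S^2
a = 2.0
A = normalized_adjacency(proximity_graph(X, eps=0.5), a=a)

# Proposition 1, property 1: A_n is symmetric positive definite.
assert np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() > 0

# Proposition 1, property 2: the top-r eigenvectors of A_n span the same
# subspace as the bottom-r eigenvectors of Delta_n, since Delta_n = (1+a)I - A_n.
r = 4
Delta = (1.0 + a) * np.eye(len(A)) - A
top_A = np.linalg.eigh(A)[1][:, -r:]   # eigh returns eigenvalues in ascending order
bot_D = np.linalg.eigh(Delta)[1][:, :r]
print(np.linalg.norm(top_A @ top_A.T - bot_D @ bot_D.T))   # ~0
```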
To prove Theorem 2.2, we rely on our estimates in Theorem 2.1 and on some auxiliary computations involving a global optimizer \(\mathbf{Y}^{*}\) of the "ambient space problem": \[\min_{\mathbf{Y}\in\mathbb{R}^{n\times r}}\ \ell(\mathbf{Y}). \tag{1.5}\] For that we also make use of property 1 in Proposition 1, which allows us to guarantee, thanks to the Eckart-Young-Mirsky theorem (see Eckart & Young (1936)), that solutions to Equation 1.5 coincide, up to multiplication on the right by an \(r\times r\) orthogonal matrix, with an \(n\times r\) matrix \(\mathbf{Y}^{*}\) whose columns are scaled versions of the top \(r\) normalized eigenvectors of the matrix \(\mathcal{A}_{\mathbf{n}}\); see a detailed description of \(\mathbf{Y}^{*}\) in Appendix D.2.

After discussing our spectral approximation results, we move on to discussing question Q3, which is related to the hardness of optimization problem 1.1. Notice that, while \(\mathbf{Y}_{\theta^{*}}\) is a good approximator for \(\mathcal{A}_{\mathbf{n}}\)'s spectrum according to our theory, it is unclear whether \(\theta^{*}\) can be reached through a standard training scheme. In fact, question Q3, as stated, is a challenging problem. This is not only due to the non-linearities in the neural network, but also because, in contrast to more standard theoretical studies of training dynamics of over-parameterized NNs (e.g., Chizat & Bach (2018), Wojtowytsch (2020)), the spectral contrastive loss function \(\ell\) is non-convex in the "ambient space" variable \(\mathbf{Y}\). Despite this additional difficulty, numerical experiments --see Figure 3 for an illustration-- suggest that first-order optimization methods can find global solutions to Equation 1.1, and our goal here is to take a first step in the objective of understanding this behavior mathematically.

Figure 3. (B) shows the first eigenvector for the Laplacian of a proximity graph from data points sampled from \(S^{2}\) obtained using an eigensolver. (A) shows the same eigenvector but obtained using SNN. The difference between the two figures is minor, showing that the neural network learns the eigenvector of the graph Laplacian well. See details in Appendix B.1.

Figure 4. (a) and (b) Sum of the norms of the gradients for a two-layer ReLU Neural Network. In (a), the network is initialized near the global optimal solution and in (b) the network is initialized near a saddle point. (c) shows the distance between the current outputs of the neural network and the optimal solution for the case when it was initialized near a saddle point. More details are presented in Appendix B.2.

To begin, we present some numerical experiments where we consider different initializations for the training of SNN. Here we take 100 data points from MNIST and let \(\mathcal{A}_{\mathbf{n}}\) be the \(n\times n\) Gram matrix for the data points for simplicity. We remark that while we care about an \(\mathcal{A}_{\mathbf{n}}\) with a specific form for our approximation theory results, our analysis of the loss landscape described below holds for an arbitrary positive semi-definite matrix. In Figure 4, we plot the norm of the gradient during training when initialized in two different regions of parameter space: concretely, a region of parameters for which \(\mathbf{Y}_{\theta}\) is close to a solution \(\mathbf{Y}^{*}\) to problem 1.5, and a region of parameters for which \(\mathbf{Y}_{\theta}\) is close to a saddle point of the ambient loss \(\ell\).
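A minimal version of the ambient-space side of this experiment can be written as follows. The sketch assumes the loss takes the form \(\ell(\mathbf{Y})=\frac{1}{2}\|\mathbf{Y}\mathbf{Y}^{\top}-\mathcal{A}_{\mathbf{n}}\|_{\mathrm{F}}^{2}\) (consistent with Equation 3.1 below), replaces the MNIST Gram matrix with a generic positive definite matrix, and uses arbitrary step sizes and iteration counts.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 60, 4
M = rng.normal(size=(n, n))
A = M @ M.T / n + np.eye(n)       # a generic symmetric positive definite matrix

# Ambient loss and gradient, assuming l(Y) = 0.5 * ||Y Y^T - A||_F^2.
loss = lambda Y: 0.5 * np.linalg.norm(Y @ Y.T - A) ** 2
grad = lambda Y: 2.0 * (Y @ Y.T - A) @ Y

lam, U = np.linalg.eigh(A)                      # eigenvalues in ascending order
Y_opt = U[:, -r:] * np.sqrt(lam[-r:])           # global minimizer (Eckart-Young-Mirsky)
Y_sad = np.hstack([U[:, -r + 1:] * np.sqrt(lam[-r + 1:]),  # top r-1 eigen-directions...
                   U[:, [0]] * np.sqrt(lam[[0]])])         # ...plus a non-top one: a saddle

def gd(Y, steps=3000, lr=0.01):
    norms = []
    for _ in range(steps):
        g = grad(Y)
        norms.append(np.linalg.norm(g))
        Y = Y - lr * g
    return Y, norms

for name, Y0 in [("near optimum", Y_opt), ("near saddle", Y_sad)]:
    Y, norms = gd(Y0 + 1e-3 * rng.normal(size=(n, r)))
    print(name, "grad norms at steps 0/1000/2999:",
          [round(norms[t], 5) for t in (0, 1000, 2999)], "final loss:", round(loss(Y), 5))
```

Near the saddle, the gradient norm stays small for many iterations before the iterates escape, mirroring the qualitative behavior described for Figures 4 and 5.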
We compare these plots to the ones we produce from the gradient descent dynamics for the ambient problem 1.5, which are shown in Figure 5. We notice a similar qualitative behavior with the training dynamics of the NN, suggesting that the landscape of problem 1.1, if the NN is properly overparameterized, inherits properties of the landscape of \(\ell\).

Motivated by the previous observation, in Section 3 we provide a careful landscape analysis of the loss function \(\ell\) introduced in Equation 1.1. We deem this landscape to be "benign", in the sense that it can be fully covered by the union of three regions described as follows: 1) the region of points close to global optimizers of Equation 1.5, where one can prove (Riemannian) strong convexity under a suitable quotient geometry; 2) the region of points close to saddle points, where one can find escape directions; and, finally, 3) the region where the gradient of \(\ell\) is large. Points in these regions are illustrated in Figures 4(b) and 4(c). The relevance of this global landscape characterization is that it implies convergence of most first-order optimization methods, or slight modifications thereof, toward global minimizers of the ambient space problem 1.5. This characterization is suggestive of analogous properties for the NN training problem in an overparameterized regime, but a full theoretical analysis of this is left as an open problem.

In summary, the main contributions of our work are the following:

* We show that we can approximate the eigenvectors of a large adjacency matrix with an NN, provided that the NN has sufficiently many neurons; see Theorem 2.1. Moreover, we show that by solving 1.1 one can _construct_ such an approximation provided the parameter space of the NN is rich enough; see Theorem 2.2.
* We provide precise error bounds for the approximation of eigenfunctions of a Laplace-Beltrami operator with NNs; see Corollary 1. In this way, we present an example of a setting where we can rigorously quantify the error of approximation of a solution to a PDE on a manifold with NNs.
* Motivated by numerical evidence, we begin an exploration of the optimization landscape of SNN and in particular provide a full description of SNN's associated ambient space optimization landscape. This landscape is shown to be benign; see discussion in Section 3.

Figure 5. Norms of the gradients for the ambient problem and the distance to the optimal solution. In (a), \(\mathbf{Y}\) is initialized near the global optimal solution, and in (b) \(\mathbf{Y}\) is initialized near a saddle point. (c) shows the distance between \(\mathbf{Y}\) and the optimal solution for the case when it was initialized near a saddle point.

### Related work

Spectral clustering and manifold learning. Several works have attempted to establish precise mathematical connections between the spectra of graph Laplacian operators over proximity graphs and the spectrum of weighted Laplace-Beltrami operators over manifolds. Some examples include Tao and Shi (2020), Burago et al. (2014), Garcia Trillos et al. (2020), Lu (2022), Calder and Garcia Trillos (2022), Calder et al. (2022), Dunson et al. (2021), Wormell and Reich (2021). In this paper we use adaptations of the results in Calder et al. (2022) to infer that, with very high probability, the eigenvectors of the normalized graph Laplacian matrix \(\Delta_{n}\) defined in Equation 1.4 are essentially Lipschitz continuous functions. These regularity estimates are one of the crucial tools for proving our Theorem 2.1.
Contrastive Learning. Contrastive learning is a self-supervised learning technique that has gained considerable attention in recent years due to its success in computer vision, natural language processing, and speech recognition Chen et al. (2020), Chen et al. (2020), Chen et al. (2020), He et al. (2020). Theoretical properties of contrastive representation learning were first studied by Arora et al. (2019), Tosh et al. (2021), Lee et al. (2021), where conditional independence was assumed. HaoChen et al. (2021) relaxes the conditional independence assumption by imposing the manifold assumption. With the spectral contrastive loss Equation 1.1 crucially in use, HaoChen et al. (2021) provides an error bound for downstream tasks. In this work, we analyze how the neural network can approximate and optimize the spectral loss function Equation 1.1, which is the pretraining step of HaoChen et al. (2021).

Neural Network Approximations. Given a function \(f\) with a certain amount of regularity, many works have studied the tradeoff between the width, depth, and total number of neurons needed to approximate \(f\); see Petersen (2020), Lu et al. (2021). Specifically, Shen et al. (2019) looks at the problem for Hölder continuous functions on the unit cube, Yarotsky (2018) and Shen et al. (2020) for continuous functions on the unit cube, and Petersen (2020), Schmidt-Hieber (2019), HaoChen et al. (2021) consider the case when the function is defined on a manifold. A related area is that of neural network memorization of a finite number of data points Yun et al. (2019). In this paper, we use these results to show that, for our specific type of regularity, we can prove similar results.

Neural Networks and Partial Differential Equations. Raissi et al. (2019) introduced Physics-Informed Neural Networks as a method for solving PDEs using neural networks. Specifically, Weinan and Yu (2017), Bhatnagar et al. (2019), Raissi et al. (2019) use neural networks to parameterize the solution and use the PDE as the loss function. Other works such as Guo et al. (2016), Zhu and Zabaras (2018), Adler and Oktem (2017), Bhatnagar et al. (2019) use neural networks to parameterize the solution operator on a given mesh on the domain. Finally, eigenfunctions of operators on function spaces have a deep connection to PDEs. Recent works such as Kovachki et al. (2021), Li et al. (2020a,b) demonstrate how to learn these operators. In this work we show that we can approximate eigenfunctions of a weighted Laplace-Beltrami operator using neural networks.

Shallow Linear Networks and Non-convex Optimization in Linear Algebra Problems. One of the main objects of study is the ambient problem Equation 1.5. This formulation of the problem is related to linear networks, which are neural networks with identity activation. A variety of prior works have studied many different aspects of shallow linear networks, such as the loss landscape and optimization dynamics Baldi & Hornik (1989), Tarmoun et al. (2021a), Min et al. (2021), Brechet et al. (2023), and generalization for one-layer networks Dobriban & Wager (2018), Hastie et al. (2022), Bartlett et al. (2020), Kausik et al. (2023). Of relevance are also other works in the literature studying optimization problems very closely related to Equation 1.5. For example, in Section 3 in Li & Tang (2017), there is a landscape analysis for problem 1.5 when the matrix \(\mathcal{A}_{\mathbf{n}}\) is assumed to have rank smaller than or equal to \(r\).
That setting is typically referred to as overparameterized or exactly parameterized, whereas here our focus is on the underparameterized setting. On the other hand, the case studied in Section 3 in Chi et al. (2019) is the simplest case we could consider for our problem and corresponds to \(r=1\). In this simpler case, the non-convexity of the objective is completely due to a sign ambiguity, which makes the analysis more straightforward and the need to introduce quotient geometries less pressing.

## 2. Spectral Approximation with neural networks

Throughout this section we make the following assumption on the generation process of the data \(\mathcal{X}_{n}\).

**Assumption 2.1**.: _The points \(x_{1},\ldots,x_{n}\) are assumed to be sampled from a distribution supported on an \(m\)-dimensional manifold \(\mathcal{M}\) that is assumed to be smooth, compact, orientable, connected, and without a boundary. We assume that this sampling distribution has a smooth density \(\rho:\mathcal{M}\to\mathbb{R}_{+}\) with respect to \(\mathcal{M}\)'s volume form, and assume that \(\rho\) is bounded away from zero and also bounded above by a constant._

### Spectral approximation with multilayer ReLU NNs

**Theorem 2.1** (Spectral approximation of normalized Laplacians with neural networks).: _Let \(r\in\mathbb{N}\) be fixed. Under Assumption 2.1, there are constants \(c,C\) that depend on \(\mathcal{M},\rho\), and the embedding dimension \(r\), such that, with probability at least_ \[1-C\varepsilon^{-6m}\exp\left(-cn\varepsilon^{m+4}\right)\] _for every \(\delta\in(0,1)\) there are \(\kappa,L,p,N\) and a ReLU neural network \(f_{\theta}\in\mathcal{F}(r,\kappa,L,p,N)\) (defined in Equation C.2), such that:_ 1. \(\sqrt{n}\|\mathbf{Y}_{\theta}-\mathbf{Y}^{*}\|_{\infty,\infty}\leq C(\delta+\varepsilon^{2})\)_, and thus also_ \(\|\mathbf{Y}_{\theta}-\mathbf{Y}^{*}\|_{\mathrm{F}}\leq C\sqrt{r}(\delta+\varepsilon^{2})\)_._ 2. _The depth of the network,_ \(L\)_, satisfies:_ \(L\leq C\left(\log\frac{1}{\delta}+\log d\right)\)_, and its width,_ \(p\)_, satisfies_ \(p\leq C\left(\delta^{-m}+d\right)\)_._ 3. _The number of neurons of the network,_ \(N\)_, satisfies:_ \(N\leq Cr\left(\delta^{-m}\log\frac{1}{\delta}+d\log\frac{1}{\delta}+d\log d\right)\)_, and the range of weights,_ \(\kappa\)_, satisfies_ \(\kappa\leq\frac{C}{n^{1/(2L)}}\)_._

Theorem 2.1 uses regularity properties of graph Laplacian eigenvectors and an NN approximation theory result for functions on manifolds. A summary of important auxiliary results needed to prove Theorem 2.1 is presented in Appendix E and the proof of the theorem itself is presented in Appendix F.

**Remark 2.1**.: _Any improvement of the approximation estimates in Chen et al. (2022) can be immediately applied to improve Theorem 2.1. We have relied on the results in Chen et al. (2022) due to the fact that in their estimates the ambient space dimension \(d\) does not appear in any exponent._

So far we have discussed approximations of the eigenvectors of \(\mathcal{A}_{\mathbf{n}}\) (and thus also of \(\Delta_{n}\)) with neural networks, but more can be said about the generalization of these NNs. In particular, the NN in our proof of Theorem 2.1 can be shown to approximate eigenfunctions of the weighted Laplace-Beltrami operator \(\Delta_{\rho}\) defined in Appendix E.1.
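Theorem 2.1 is an existence statement, but its objects are easy to instantiate numerically. The following PyTorch sketch fits a depth-\(L\), width-\(p\) ReLU network to the top-\(r\) eigenvectors \(\mathbf{Y}^{*}\) by least squares; the weight-clipping step below is an assumed stand-in for the weight-range constraint \(\kappa\) in the class \(\mathcal{F}(r,\kappa,L,p,N)\) of Appendix C (whose exact definition is not reproduced here), and all sizes are arbitrary illustrative choices.

```python
import numpy as np
import torch
import torch.nn as nn

# Build Y*: the top-r eigenvectors of a normalized adjacency matrix on n sample
# points (same construction as in the earlier sketch; Gaussian kernel, a = 2).
rng = np.random.default_rng(0)
n, d, r = 300, 3, 4
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
G = np.exp(-(np.linalg.norm(X[:, None] - X[None], axis=-1) / 0.5) ** 2)
np.fill_diagonal(G, 0.0)
dis = 1.0 / np.sqrt(G.sum(1))
A = dis[:, None] * G * dis[None, :] + 2.0 * np.eye(n)
Y_star = np.linalg.eigh(A)[1][:, -r:]

# A depth-L, width-p ReLU network from R^d to R^r with weights kept in
# [-kappa, kappa]; kappa = 1 is far looser than the theorem's C / n^{1/(2L)}.
L, p, kappa = 4, 128, 1.0
layers = [nn.Linear(d, p), nn.ReLU()]
for _ in range(L - 2):
    layers += [nn.Linear(p, p), nn.ReLU()]
f_theta = nn.Sequential(*layers, nn.Linear(p, r))

X_t = torch.tensor(X, dtype=torch.float32)
Y_t = torch.tensor(Y_star, dtype=torch.float32)
opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    ((f_theta(X_t) - Y_t) ** 2).mean().backward()
    opt.step()
    with torch.no_grad():               # keep the weights inside the kappa-box
        for w in f_theta.parameters():
            w.clamp_(-kappa, kappa)

# Theorem 2.1 controls sqrt(n) times the entrywise error; printed unscaled here.
print("sup-norm error:", (f_theta(X_t) - Y_t).abs().max().item())
```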
Precisely, we have the following result.

**Corollary 1**.: _Under the same setting, notation, and assumptions as in Theorem 2.1, the neural network \(f_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{r}\) can be chosen to satisfy_ \[\left\|\sqrt{\frac{n}{1+a}}f_{\theta}^{i}-f_{i}\right\|_{L^{\infty}(\mathcal{M})}\leq C(\delta+\varepsilon),\quad\forall i=1,\ldots,r.\] _In the above, \(f_{\theta}^{1},\ldots,f_{\theta}^{r}\) are the coordinate functions of the vector-valued neural network \(f_{\theta}\), and the functions \(f_{1},\ldots,f_{r}\) are the normalized eigenfunctions of the Laplace-Beltrami operator \(\Delta_{\rho}\) that are associated to \(\Delta_{\rho}\)'s \(r\) smallest eigenvalues._

**Remark 2.2**.: _The \(\varepsilon^{2}\) term that appears in the bound for \(\|\mathbf{Y}_{\theta}-\mathbf{Y}^{*}\|_{\mathrm{F}}\) in Theorem 2.1 cannot be obtained simply from convergence of eigenvectors of \(\Delta_{n}\) toward eigenfunctions of \(\Delta_{\rho}\) in \(L^{\infty}\). It turns out that we need to use a stronger notion of convergence (almost \(C^{0,1}\)) that in particular implies sharper regularity estimates for eigenvectors of \(\Delta_{n}\) (see Corollary 2 in Appendix E.1 and Remark E.2 below it). In turn, the sharper \(\varepsilon^{2}\) term is essential for our proof of Theorem 2.2 below to work; see the discussion starting in Remark E.2._

### Spectral approximation with global minimizers of SNN's objective

After discussing the _existence_ of approximating NNs, we turn our attention to _constructive_ ways to approximate \(\mathbf{Y}^{*}\) using neural networks. We give a precise answer to question Q2.

**Theorem 2.2** (Optimizing SNN approximates eigenvectors up to rotation).: _Let \(r\in\mathbb{N}\) be fixed and suppose that \(\Delta_{\rho}\) has a spectral gap between its \(r\)-th and \((r+1)\)-th smallest eigenvalues, i.e., in the notation in Appendix E.1, assume that \(\sigma_{r}^{\mathcal{M}}<\sigma_{r+1}^{\mathcal{M}}.\) For given \(\kappa,L,p,N\) (to be chosen below), let \(f_{\theta^{*}}\) be such that \(f_{\theta^{*}}\in\arg\min_{f_{\theta}\in\mathcal{F}(r,\kappa,L,p,N)}\|\mathbf{Y}_{\theta}\mathbf{Y}_{\theta}^{\top}-\mathcal{A}_{\mathbf{n}}\|_{\mathrm{F}}^{2}.\)_

_Under Assumption 2.1, there are constants \(c,C\) that depend on \(\mathcal{M},\rho\), and the embedding dimension \(r\), such that, with probability at least \(1-C\varepsilon^{-6m}\exp\left(-cn\varepsilon^{m+4}\right),\) for every \(\tilde{\delta}\in(0,c)\) (i.e., \(\tilde{\delta}\) sufficiently small) and for \(\kappa=\frac{C}{n^{1/(2L)}}\), \(L=C\left(\log\frac{1}{\tilde{\delta}\varepsilon}+\log d\right)\), \(p=C\left((\tilde{\delta}\varepsilon)^{-m}+d\right)\) and \(N=\infty\), we have_ \[\min_{\boldsymbol{O}\in\mathbb{O}_{r}}\|\mathbf{Y}_{\theta^{*}}-\mathbf{Y}^{*}\boldsymbol{O}\|_{\mathrm{F}}\leq C\varepsilon(\tilde{\delta}+\varepsilon). \tag{2.1}\]

**Remark 2.3**.: _Equation 2.1 says that \(\mathbf{Y}_{\theta^{*}}\) approximates a minimizer of the ambient problem 1.5 and that \(\mathbf{Y}^{*}\) can be recovered, but only up to rotation. This is unavoidable, since the loss function \(\ell\) is invariant under multiplication on the right by an \(r\times r\) orthogonal matrix. On the other hand, to set \(N=\infty\) means we do not enforce sparsity constraints in the optimization of the NN parameters. This is convenient in practical settings and this is the reason why we state the theorem in this way. However, we can also set \(N=r\left((\tilde{\delta}\varepsilon)^{-m}\log\frac{1}{\tilde{\delta}\varepsilon}+d\log\frac{1}{\tilde{\delta}\varepsilon}+d\log d\right)\) without affecting the conclusion of the theorem._
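The left-hand side of Equation 2.1 can be evaluated in closed form via the orthogonal Procrustes problem, which is how one would measure the rotation-invariant approximation error in practice; this is an illustrative sketch, not code from the paper.

```python
import numpy as np

def rotation_aligned_distance(Y_theta, Y_star):
    """min over orthogonal O of ||Y_theta - Y_star O||_F, the left-hand side
    of Equation 2.1, solved in closed form via the SVD of Y_star^T Y_theta."""
    U, _, Vt = np.linalg.svd(Y_star.T @ Y_theta)
    O = U @ Vt                                   # the optimal orthogonal matrix
    return np.linalg.norm(Y_theta - Y_star @ O)

# sanity check: any rotated copy of Y* is at distance ~0 from Y*
rng = np.random.default_rng(2)
Y_star = rng.normal(size=(100, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))     # a random 4x4 orthogonal matrix
print(rotation_aligned_distance(Y_star @ Q, Y_star))   # ~1e-13
```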
## 3. Landscape of SNN's Ambient Optimization Problem

While in prior sections we considered a specific \(\mathcal{A}_{\mathbf{n}}\), the analysis in this section only relies on \(\mathcal{A}_{\mathbf{n}}\) being positive definite with an eigengap between its \(r\)-th and \((r+1)\)-th top eigenvalues. We analyze the global optimization landscape of the non-convex problem 1.5 under a suitable Riemannian _quotient geometry_ Absil et al. (2009), Boumal (2023). The need for a quotient geometry comes from the fact that if \(\mathbf{Y}\) is a stationary point of 1.5, then \(\mathbf{Y}\boldsymbol{O}\) is also a stationary point for any \(r\times r\) orthogonal matrix \(\boldsymbol{O}\in\mathbb{O}_{r}\). This implies that the loss function \(\ell\) is non-convex in any neighborhood of a stationary point (Li et al. 2019, Proposition 2). Despite the non-convexity of \(\ell\), we show that under this geometry, Equation 1.5 is geodesically convex in a local neighborhood around the optimal solution.

Let \(\overline{\mathcal{N}}_{r_{+}}^{n}\) be the space of \(n\times r\) matrices with full column rank. To define the quotient manifold, we encode the invariance mapping, i.e., \(\mathbf{Y}\to\mathbf{Y}\boldsymbol{O}\), by defining the equivalence classes \([\mathbf{Y}]=\{\mathbf{Y}\boldsymbol{O}:\boldsymbol{O}\in\mathbb{O}_{r}\}\). From Lee (2018), we have that \(\mathcal{N}_{r_{+}}^{n}\stackrel{{\text{def}}}{{=}}\overline{\mathcal{N}}_{r_{+}}^{n}/\mathbb{O}_{r}\) is a quotient manifold of \(\overline{\mathcal{N}}_{r_{+}}^{n}\). See a detailed introduction to Riemannian optimization in Boumal (2023). Since the loss function in 1.5 is invariant along the equivalence classes of \(\overline{\mathcal{N}}_{r_{+}}^{n}\), \(\ell\) induces the following optimization problem on the quotient manifold \(\mathcal{N}_{r_{+}}^{n}\): \[\min_{[\mathbf{Y}]\in\mathcal{N}_{r_{+}}^{n}}H([\mathbf{Y}])\stackrel{{\text{def}}}{{=}}\frac{1}{2}\left\|\mathbf{Y}\mathbf{Y}^{\top}-\mathcal{A}_{\mathbf{n}}\right\|_{\text{F}}^{2}. \tag{3.1}\]

To analyze the landscape of Equation 3.1, we need expressions for the Riemannian gradient, the Riemannian Hessian, as well as the geodesic distance \(d\) on this quotient manifold. By Lemma 2 from Luo & Garcia Trillos (2022), we have that \[d\left(\left[\mathbf{Y}_{1}\right],\left[\mathbf{Y}_{2}\right]\right)=\min_{\mathbf{Q}\in\mathbb{O}_{r}}\left\|\mathbf{Y}_{2}\mathbf{Q}-\mathbf{Y}_{1}\right\|_{\text{F}},\] and from Lemma 3 from Luo & Garcia Trillos (2022), we have that \[\overline{\operatorname{grad}H([\mathbf{Y}])} =2\left(\mathbf{Y}\mathbf{Y}^{\top}-\mathcal{A}_{\mathbf{n}}\right)\mathbf{Y},\] \[\overline{\operatorname{Hess}H([\mathbf{Y}])}\left[\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}\right] =\left\|\mathbf{Y}\theta_{\mathbf{Y}}^{\top}+\theta_{\mathbf{Y}}\mathbf{Y}^{\top}\right\|_{\text{F}}^{2}+2\left\langle\mathbf{Y}\mathbf{Y}^{\top}-\mathcal{A}_{\mathbf{n}},\theta_{\mathbf{Y}}\theta_{\mathbf{Y}}^{\top}\right\rangle. \tag{3.2}\]

Finally, by the classical theory on low-rank approximation (the Eckart-Young-Mirsky theorem, Eckart & Young (1936)), \([\mathbf{Y}^{*}]\) is the unique global minimizer of Equation 3.1. Let \(\kappa^{*}=\sigma_{1}\left(\mathbf{Y}^{*}\right)/\sigma_{r}\left(\mathbf{Y}^{*}\right)\) be the condition number of \(\mathbf{Y}^{*}\). Here, \(\sigma_{i}(A)\) is the \(i\)-th largest singular value of \(A\), and \(\|A\|=\sigma_{1}(A)\) is its spectral norm.
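The quantities in Equation 3.2 and the geodesic distance are straightforward to evaluate numerically. The sketch below does so for a generic positive semi-definite matrix and, anticipating Theorems 3.2 and 3.3 stated later in this section, verifies that a non-optimal combination of eigenvectors is a stationary point admitting a direction of negative curvature; all matrix sizes are illustrative choices.

```python
import numpy as np

def riem_grad(Y, A):
    # grad H([Y]) = 2 (Y Y^T - A) Y                               (Equation 3.2)
    return 2.0 * (Y @ Y.T - A) @ Y

def riem_hess_quad(Y, A, theta):
    # Hess H([Y])[theta, theta] from Equation 3.2
    S = Y @ theta.T + theta @ Y.T
    return np.linalg.norm(S) ** 2 + 2.0 * np.sum((Y @ Y.T - A) * (theta @ theta.T))

def quotient_dist(Y1, Y2):
    # d([Y1],[Y2]) = min over orthogonal Q of ||Y2 Q - Y1||_F (Procrustes)
    U, _, Vt = np.linalg.svd(Y2.T @ Y1)
    return np.linalg.norm(Y2 @ (U @ Vt) - Y1)

rng = np.random.default_rng(3)
n, r = 50, 3
M = rng.normal(size=(n, n))
A = M @ M.T / n                               # a generic positive semi-definite matrix
lam, U = np.linalg.eigh(A)

Y_opt = U[:, -r:] * np.sqrt(lam[-r:])         # the global minimizer [Y*]
Y_sad = np.hstack([U[:, -r + 1:] * np.sqrt(lam[-r + 1:]),
                   U[:, [0]] * np.sqrt(lam[[0]])])   # a non-optimal stationary point

print(np.linalg.norm(riem_grad(Y_opt, A)))    # ~0
print(np.linalg.norm(riem_grad(Y_sad, A)))    # ~0: also stationary (cf. Theorem 3.2)

theta = np.zeros_like(Y_sad)
theta[:, -1] = U[:, -r]                       # swap in the missing top eigenvector
print(riem_hess_quad(Y_sad, A, theta))        # < 0: an escape direction (cf. Theorem 3.3)
print(quotient_dist(Y_sad, Y_opt))            # far from [Y*]
```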
Our precise assumption on the matrix \(\mathcal{A}_{\mathbf{n}}\) for this section is as follows.

**Assumption 3.1** (Eigengap).: \(\sigma_{r+1}(\mathcal{A}_{\mathbf{n}})\) _is strictly smaller than \(\sigma_{r}(\mathcal{A}_{\mathbf{n}})\)._

Let \(\mu,\alpha,\beta,\gamma\geqslant 0\). We then split the landscape of \(H([\mathbf{Y}])\) into the following five regions (possibly overlapping). \[\mathcal{R}_{1}\stackrel{{\mathrm{def}}}{{=}}\left\{\mathbf{Y}\in\mathbb{R}_{*}^{n\times r}\,\middle|\,d\left([\mathbf{Y}],[\mathbf{Y}^{*}]\right)\leqslant\mu\sigma_{r}\left(\mathbf{Y}^{*}\right)/\kappa^{*}\right\},\] \[\mathcal{R}_{2}\stackrel{{\mathrm{def}}}{{=}}\left\{\mathbf{Y}\in\mathbb{R}_{*}^{n\times r}\,\middle|\,\begin{array}{l}d\left([\mathbf{Y}],[\mathbf{Y}^{*}]\right)>\mu\sigma_{r}\left(\mathbf{Y}^{*}\right)/\kappa^{*},\ \|\overline{\mathrm{grad}\,H([\mathbf{Y}])}\|_{\mathrm{F}}\leqslant\alpha\mu\sigma_{r}^{3}\left(\mathbf{Y}^{*}\right)/\left(4\kappa^{*}\right),\\ \|\mathbf{Y}\|\leqslant\beta\left\|\mathbf{Y}^{*}\right\|,\ \left\|\mathbf{Y}\mathbf{Y}^{\top}\right\|_{\mathrm{F}}\leqslant\gamma\left\|\mathbf{Y}^{*}{\mathbf{Y}^{*}}^{\top}\right\|_{\mathrm{F}}\end{array}\right\},\] \[\mathcal{R}_{3}^{\prime}\stackrel{{\mathrm{def}}}{{=}}\left\{\mathbf{Y}\in\mathbb{R}_{*}^{n\times r}\,\middle|\,\begin{array}{l}\|\overline{\mathrm{grad}\,H([\mathbf{Y}])}\|_{\mathrm{F}}>\alpha\mu\sigma_{r}^{3}\left(\mathbf{Y}^{*}\right)/\left(4\kappa^{*}\right),\ \|\mathbf{Y}\|\leqslant\beta\left\|\mathbf{Y}^{*}\right\|,\\ \left\|\mathbf{Y}\mathbf{Y}^{\top}\right\|_{\mathrm{F}}\leqslant\gamma\left\|\mathbf{Y}^{*}{\mathbf{Y}^{*}}^{\top}\right\|_{\mathrm{F}}\end{array}\right\},\] \[\mathcal{R}_{3}^{\prime\prime}\stackrel{{\mathrm{def}}}{{=}}\left\{\mathbf{Y}\in\mathbb{R}_{*}^{n\times r}\,\middle|\,\|\mathbf{Y}\|>\beta\left\|\mathbf{Y}^{*}\right\|,\ \left\|\mathbf{Y}\mathbf{Y}^{\top}\right\|_{\mathrm{F}}\leqslant\gamma\left\|\mathbf{Y}^{*}{\mathbf{Y}^{*}}^{\top}\right\|_{\mathrm{F}}\right\},\] \[\mathcal{R}_{3}^{\prime\prime\prime}\stackrel{{\mathrm{def}}}{{=}}\left\{\mathbf{Y}\in\mathbb{R}_{*}^{n\times r}\,\middle|\,\left\|\mathbf{Y}\mathbf{Y}^{\top}\right\|_{\mathrm{F}}>\gamma\left\|\mathbf{Y}^{*}{\mathbf{Y}^{*}}^{\top}\right\|_{\mathrm{F}}\right\}. \tag{3.3}\]

We show that for small values of \(\mu\), the _loss function is geodesically convex_ in \(\mathcal{R}_{1}\). \(\mathcal{R}_{2}\) is then defined as the region outside of \(\mathcal{R}_{1}\) such that the Riemannian gradient is small relative to \(\mu\); hence this is the region in which we are close to the saddle points. We show that for this region there is _always an escape direction_ (i.e., a direction where the Hessian is strictly negative). Finally, \(\mathcal{R}_{3}^{\prime}\), \(\mathcal{R}_{3}^{\prime\prime}\), and \(\mathcal{R}_{3}^{\prime\prime\prime}\) are the remaining regions, in which we show that the _Riemannian gradient is large_ (relative to \(\mu\)). It is easy to see that \(\mathcal{R}_{1}\bigcup\mathcal{R}_{2}\bigcup\mathcal{R}_{3}^{\prime}\bigcup\mathcal{R}_{3}^{\prime\prime}\bigcup\mathcal{R}_{3}^{\prime\prime\prime}=\mathbb{R}_{*}^{n\times r}\). We are now ready to state the first of our main results from this section.

**Theorem 3.1** (Local Geodesic Strong Convexity and Smoothness of Equation 3.1).: _Suppose \(0\leqslant\mu\leqslant\kappa^{*}/3\)._
_Given that Assumption 3.1 holds, for any \(\mathbf{Y}\in\mathcal{R}_{1}\) defined in Equation 3.3, we have_ \[\sigma_{\min}(\overline{\mathrm{Hess}\,H([\mathbf{Y}])})\geqslant\left(2\,(1-\mu/\kappa^{*})^{2}-(14/3)\mu\right)\sigma_{r}\left(\mathcal{A}_{\mathbf{n}}\right)-2\sigma_{r+1}(\mathcal{A}_{\mathbf{n}}),\] \[\sigma_{\max}(\overline{\mathrm{Hess}\,H([\mathbf{Y}])})\leqslant 4\left(\sigma_{1}\left(\mathbf{Y}^{*}\right)+\mu\sigma_{r}\left(\mathbf{Y}^{*}\right)/\kappa^{*}\right)^{2}+14\mu\sigma_{r}^{2}\left(\mathbf{Y}^{*}\right)/3.\] _In particular, if \(\mu\) is further chosen such that \(\left(2\,(1-\mu/\kappa^{*})^{2}-(14/3)\mu\right)\sigma_{r}\left(\mathcal{A}_{\mathbf{n}}\right)-2\sigma_{r+1}(\mathcal{A}_{\mathbf{n}})>0\), then \(H([\mathbf{Y}])\) is geodesically strongly convex and smooth in \(\mathcal{R}_{1}\)._

Theorem 3.1 guarantees that the optimization problem Equation 3.1 is geodesically strongly convex and smooth in a neighborhood of \([\mathbf{Y}^{*}]\). It also shows that if \(\mathbf{Y}\) is close to the global minimizer, then Riemannian gradient descent converges to the global minimizer in the quotient space linearly. Next, to analyze \(\mathcal{R}_{2}\), we need to understand the other first-order stationary points (FOSPs).

**Theorem 3.2** (FOSPs of Equation 3.1).: _Let \(\mathbf{Y}^{*}=\overline{\mathbf{U}}\cdot\overline{\mathbf{\Lambda}}\cdot\overline{\mathbf{V}}^{\top}\) and \(\mathbf{Y}=\mathbf{U}\mathbf{D}\mathbf{V}^{\top}\) be the SVDs. Then for any subset \(S\) of \([n]\), we have that \([\overline{\mathbf{U}}_{S}\overline{\mathbf{\Lambda}}_{S}\overline{\mathbf{V}}_{S}^{\top}]\) is a Riemannian FOSP of Equation 3.1. Further, these are the only Riemannian FOSPs._

Theorem 3.2 shows that linear combinations of eigenvectors can be used to construct Riemannian first-order stationary points (FOSPs) of Equation 3.1. This theorem also shows that there are many FOSPs of Equation 3.1. This is quite different from the regime studied in Luo & Garcia Trillos (2022). In general, gradient descent is known to converge to an FOSP. Hence one might expect that if we initialized near one of the saddle points, then we might converge to that saddle point. However, our next main result of the section shows that even if we initialize near the saddle, there always exist escape directions.

**Theorem 3.3** (Escape Directions).: _Assume that Assumption 3.1 holds. Then for sufficiently small \(\alpha\) and any \(\mathbf{Y}\in\mathcal{R}_{2}\) that is not an FOSP, there exist \(C_{1}(\mathcal{A}_{\mathbf{n}})>0\) and \(\theta_{\mathbf{Y}}\) such that_ \[\overline{\operatorname{Hess}H([\mathbf{Y}])}\left[\theta_{\mathbf{Y}},\theta_{\mathbf{Y}}\right]\leqslant-\,C_{1}(\mathcal{A}_{\mathbf{n}})\left\|\theta_{\mathbf{Y}}\right\|_{\mathrm{F}}^{2}.\] In particular, it is possible to exactly quantify the size of \(\alpha\) and to explicitly construct the escape direction \(\theta_{\mathbf{Y}}\). See Theorem H.1 in the appendix for more details.

**Remark 3.1**.: _Theorem 3.3 guarantees that if \(\mathbf{Y}\) is close to some saddle point, then moving along \(\theta_{\mathbf{Y}}\) allows the iterates to escape from the saddle point at a linear rate._

Finally, the next result tells us that if we are not close to an FOSP, then we have large gradients.

**Theorem 3.4** (Regions with Large Riemannian Gradient of Equation 1.5).: 1. \(\|\overline{\operatorname{grad}H([\mathbf{Y}])}\|_{\mathrm{F}}>\alpha\mu\sigma_{r}^{3}\left(\mathbf{Y}^{*}\right)/\left(4\kappa^{*}\right),\,\forall\mathbf{Y}\in\mathcal{R}_{3}^{\prime}\)_;_
2. \(\|\overline{\operatorname{grad}H([\mathbf{Y}])}\|_{\mathrm{F}}\;\geqslant\;2\left(\|\mathbf{Y}\|^{3}-\|\mathbf{Y}\|\,\|\mathbf{Y}^{*}\|^{2}\right)\;>\;2\left(\beta^{3}-\beta\right)\|\mathbf{Y}^{*}\|^{3}\,,\quad\forall\mathbf{Y}\in\mathcal{R}_{3}^{\prime\prime}\)_;_ 3. \(\langle\overline{\operatorname{grad}H([\mathbf{Y}])},\mathbf{Y}\rangle>2(1-1/\gamma)\left\|\mathbf{Y}\mathbf{Y}^{\top}\right\|_{\mathrm{F}}^{2},\quad\forall\mathbf{Y}\in\mathcal{R}_{3}^{\prime\prime\prime}.\)__

_In particular, if \(\beta>1\) and \(\gamma>1\), the Riemannian gradient of \(H([\mathbf{Y}])\) has a large magnitude in all of the regions \(\mathcal{R}_{3}^{\prime},\mathcal{R}_{3}^{\prime\prime}\), and \(\mathcal{R}_{3}^{\prime\prime\prime}\)._

**Remark 3.2**.: _These results can be seen as an underparameterized generalization of the regression problem of Section 5 in Luo & Garcia Trillos (2022). The proof in Luo & Garcia Trillos (2022) is simpler because in their setting there are no saddle points or local minima that are not global. Conceptually, Tarmoun et al. (2021b) proves that in the setting \(r\geq n\), the gradient flow for Equation 1.5 converges to a global minimum linearly. We complement this result by studying the case \(r<n\)._

**Remark 3.3**.: _To demonstrate strong geodesic convexity, the eigengap assumption is necessary as it prevents multiple global solutions. However, it is possible to relax this assumption and instead deduce a Polyak-Lojasiewicz (PL) condition, which would also imply a linear convergence rate for a first-order method._

**Remark 3.4**.: _In the specific case of \(\mathcal{A}_{\mathbf{n}}\) as in Equation 1.3, and under Assumption 2.1, Assumption 3.1 should be interpreted as \(\sigma_{r}^{\mathcal{M}}<\sigma_{r+1}^{\mathcal{M}}\), as suggested by Remark E.1. Also, \(\mu\) must be taken to be of order \(\varepsilon^{2}\). The scale \(\varepsilon^{2}\) is actually a natural scale for this problem, since, as discussed in Remark G.3, the energy gap between saddle points and the global minimizer \([\mathbf{Y}^{*}]\) is \(O(\varepsilon^{2})\)._
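The following sketch turns the region decomposition of Equation 3.3 into a simple membership test, with \(\mathcal{R}_{3}^{\prime\prime}\) characterized by \(\|\mathbf{Y}\|>\beta\|\mathbf{Y}^{*}\|\) together with the Frobenius bound, matching items 2 and 3 of Theorem 3.4; the parameter values \(\mu,\alpha,\beta,\gamma\) passed in are arbitrary illustrative choices.

```python
import numpy as np

def classify_region(Y, A, Y_star, mu, alpha, beta, gamma):
    """Assign Y to one of the five regions of Equation 3.3, checked in the
    order R1, R2, R3', R3'', R3''' (the regions may overlap)."""
    sig = np.linalg.svd(Y_star, compute_uv=False)
    s_r, kappa = sig[-1], sig[0] / sig[-1]
    U, _, Vt = np.linalg.svd(Y_star.T @ Y)
    dist = np.linalg.norm(Y_star @ (U @ Vt) - Y)          # d([Y],[Y*])
    gnorm = np.linalg.norm(2.0 * (Y @ Y.T - A) @ Y)       # Riemannian gradient norm
    op_ok = np.linalg.norm(Y, 2) <= beta * np.linalg.norm(Y_star, 2)
    fro_ok = np.linalg.norm(Y @ Y.T) <= gamma * np.linalg.norm(Y_star @ Y_star.T)
    if dist <= mu * s_r / kappa:
        return "R1: geodesically convex neighborhood of [Y*]"
    if gnorm <= alpha * mu * s_r ** 3 / (4 * kappa) and op_ok and fro_ok:
        return "R2: near a saddle, escape direction exists"
    if op_ok and fro_ok:
        return "R3': large gradient"
    if fro_ok:
        return "R3'': large operator norm"
    return "R3''': large Frobenius norm"

rng = np.random.default_rng(4)
n, r = 40, 3
M = rng.normal(size=(n, n))
A = M @ M.T / n
lam, U = np.linalg.eigh(A)
Y_star = U[:, -r:] * np.sqrt(lam[-r:])
kw = dict(mu=0.1, alpha=1.0, beta=1.5, gamma=1.5)
print(classify_region(Y_star, A, Y_star, **kw))        # R1
print(classify_region(10 * Y_star, A, Y_star, **kw))   # R3'''
```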
## 4. Conclusions

We have explored some theoretical aspects of Spectral Neural Networks (SNN), a framework that substitutes the use of traditional eigensolvers with suitable neural network parameter optimization. Our emphasis has been on approximation theory, specifically identifying the minimum number of neurons of a multilayer NN required to capture spectral geometric properties in data, and on investigating the optimization landscape of SNN, even in the face of its non-convex ambient loss function. For our approximation theory results we have assumed a specific proximity graph structure over data points that are sampled from a distribution over a smooth low-dimensional manifold. A natural future direction worthy of study is the generalization of these results to settings where data points, and their similarity graph, are sampled from other generative models, e.g., as in the application to contrastive learning in HaoChen et al. (2021). To carry out this generalization, an important first step is to study the regularity properties of eigenvectors of an adjacency matrix/graph Laplacian generated from other types of probabilistic models.

At a high level, our approximation theory results have sought to bridge the extensive body of research on graph-based learning methods, their ties to PDE theory on manifolds, and the approximation theory for neural networks. While our analysis has focused on eigenvalue problems, such as those involving graph Laplacians or Laplace-Beltrami operators, we anticipate that this overarching objective can be extended to develop provably consistent methods for solving a larger class of PDEs on manifolds with neural networks. We believe this represents a significant and promising research avenue.

On the optimization front, we have focused on studying the landscape of the ambient space problem 1.5. This has been done anticipating the use of our estimates in a future analysis of the training dynamics of SNN. We reiterate that the setting of interest here is different from other settings in the literature that study the dynamics of neural network training in an appropriate scaling limit --leading to either a neural tangent kernel (NTK) or to a mean field limit. This difference is mainly due to the fact that the spectral contrastive loss \(\ell\) (see 1.1) of SNN is non-convex, and even local strong convexity around a global minimizer does not hold in a standard sense and instead can only be guaranteed when considered under a suitable quotient geometry.
2303.04477
Graph Neural Networks Enhanced Smart Contract Vulnerability Detection of Educational Blockchain
With the development of blockchain technology, more and more attention has been paid to the intersection of blockchain and education, and various educational evaluation systems and E-learning systems are developed based on blockchain technology. Among them, Ethereum smart contract is favored by developers for its "event-triggered" mechanism for building education intelligent trading systems and intelligent learning platforms. However, due to the immutability of blockchain, published smart contracts cannot be modified, so problematic contracts cannot be fixed by modifying the code in the educational blockchain. In recent years, security incidents due to smart contract vulnerabilities have caused huge property losses, so the detection of smart contract vulnerabilities in educational blockchain has become a great challenge. To solve this problem, this paper proposes a graph neural network (GNN) based vulnerability detection for smart contracts in educational blockchains. Firstly, the bytecodes are decompiled to get the opcode. Secondly, the basic blocks are divided, and the edges between the basic blocks according to the opcode execution logic are added. Then, the control flow graphs (CFG) are built. Finally, we designed a GNN-based model for vulnerability detection. The experimental results show that the proposed method is effective for the vulnerability detection of smart contracts. Compared with the traditional approaches, it can get good results with fewer layers of the GCN model, which shows that the contract bytecode and GCN model are efficient in vulnerability detection.
Zhifeng Wang, Wanxuan Wu, Chunyan Zeng, Jialong Yao, Yang Yang, Hongmin Xu
2023-03-08T09:58:58Z
http://arxiv.org/abs/2303.04477v1
# Graph Neural Networks Enhanced Smart Contract Vulnerability Detection of Educational Blockchain ###### Abstract With the development of blockchain technology, more and more attention has been paid to the intersection of blockchain and education, and various educational evaluation systems and E-learning systems are developed based on blockchain technology. Among them, the Ethereum smart contract is favored by developers for its "event-triggered" mechanism for building education intelligent trading systems and intelligent learning platforms. However, due to the immutability of blockchain, published smart contracts cannot be modified, so problematic contracts cannot be fixed by modifying the code in the educational blockchain. In recent years, security incidents due to smart contract vulnerabilities have caused huge property losses, so the detection of smart contract vulnerabilities in educational blockchain has become a great challenge. To solve this problem, this paper proposes a graph neural network (GNN) based vulnerability detection for smart contracts in educational blockchains. Firstly, the bytecodes are decompiled to get the opcodes. Secondly, the basic blocks are divided, and the edges between the basic blocks are added according to the opcode execution logic. Then, the control flow graphs (CFG) are built. Finally, we design a GNN-based model for vulnerability detection. The experimental results show that the proposed method is effective for the vulnerability detection of smart contracts. Compared with the traditional approaches, it can get good results with fewer layers of the GCN model, which shows that the contract bytecode and GCN model are efficient in vulnerability detection. educational blockchain, smart contract, bytecode, vulnerability detection

## I Introduction

The education blockchain refers to the use of blockchain as technical support when carrying out reforms to traditional education systems [1, 2]. The white paper on blockchain technology released by China in 2016 states that "the transparency and immutability of the blockchain system are perfectly suitable for student credit management, further education and employment, academics, qualification certification, and industry-academia cooperation, and are of great value to the healthy development of education and employment" [3]. According to visual analyses of blockchain in education [4, 5, 6], blockchain technology uses its decentralized nature to break the absolute management power of traditional education administrators and to promote the development of education toward greater equity [7, 8]. With the creation and development of Ethereum smart contract technology, programs can automatically execute without third-party intervention once their conditions are met, achieving functions such as controlling blockchain assets and storing data. By embedding smart contracts, blockchain technology can build intelligent transaction systems for virtual-economy education [9], which can promote the construction of a new system combining the Internet and education, avoid to a certain extent the limitations of the traditional education model in space and time, and help accelerate the reform and development of the education system. Blockchain-based smart contract systems have many advantages, such as ensuring the authenticity [10] and security of information [11, 12], saving human resources, and improving the efficiency of program execution.
However, smart contracts are not absolutely secure. Different security vulnerabilities may exist throughout the life cycle of a smart contract, and because published code cannot be modified, the security problems caused by smart contract vulnerabilities accumulate, so improving contract security is especially important. The main vulnerability targeted in this paper is the timestamp dependency vulnerability: smart contracts use timestamps to control certain important control-flow decisions, and an attacker masquerading as a miner can maliciously control the range of generated timestamps to bypass operations in the contract that are restricted by timestamps. With the continuous development of deep learning techniques, some scholars have proposed using these techniques to make vulnerability detection more accurate, comprehensive, and efficient. This paper builds a Control Flow Graph (CFG) from the bytecode files of smart contracts, uses it as the input to a graph neural network, and builds a Graph Convolutional Network (GCN) model to detect smart contract vulnerabilities. The contributions of this paper can be summarized as follows: * A GCN model is built and successfully predicts contract vulnerabilities for the educational blockchain. * Vulnerabilities can be effectively detected from the bytecode files of smart contracts alone. * The accuracy of model prediction could be further increased by adding semantic processing or by classifying edges. The rest of the paper is organized as follows. In Section 2, we review the related work. In Section 3, we introduce the main research methods, including CFG construction and the GCN model. In Section 4, we describe the details of the experiment and the results. Finally, we summarize this work in Section 5. ## II Related Work We first introduce the contract vulnerability detection methods that are now available. Then we summarize the development of graph convolutional networks. ### _Contract Vulnerability Detection Methods_ In response to the security problems caused by smart contracts, numerous research teams worldwide have proposed solutions that seek to protect users' property and data security. Current detection techniques fall into two main types: non-deep-learning methods and deep-learning methods. One non-deep-learning method, the automated contract vulnerability mining tool Oyente [13], is a symbolic-execution-based analysis method. Using the bytecode file of a smart contract as input, after analyzing the bytecode and constructing the CFG, the Z3 solver is used to analyze the conditional jumps in the contract, which can predict whether the contract contains seven types of vulnerabilities, such as integer overflow errors and reentrancy vulnerabilities. Another non-deep-learning approach, ContractFuzzer, is a fuzzing-based detection tool. It consists of two parts, an offline EVM instrumentation tool and an online fuzzing tool [14]. The tool generates legally valid inputs, as well as mutated inputs that cross the valid boundary, by analyzing the bytecode of the smart contract and its ABI interface; after the fuzzing starts, the detection results of the contract can be obtained from the execution log. One deep-learning-based approach uses RNN networks.
An RNN is a recurrent neural network that takes sequence data as input and processes it recursively along the sequence direction, with all recurrent units connected in a chain-like manner [15]. In the RNN model proposed in the literature [16], two layers of gated recurrent units (GRUs) are connected after the embedding layer, followed by a fully connected layer. This experiment demonstrates that vulnerability detection can be done using smart contract operation sequences combined with deep learning networks. Another deep-learning-based method uses the Long Short-Term Memory network (LSTM) [17], a model that constructs three gates, the input, output, and forget gates, thereby optimizing the RNN and providing further performance improvements. The use of this network model for vulnerability detection is proposed in the literature [18], using a binary vector encoding of a smart contract's opcodes as the input to the network model; its experimental results show more effective detection than non-deep-learning methods. ### _Graph Convolutional Network Model_ The graph neural network model used in this paper is the GCN. It is a model that evolved from the Spectral CNN and the Chebyshev Network (ChebNet) [19]. The core architecture of a GCN includes graph convolution layers, a graph readout layer, graph regularization layers that improve model generalization, and graph pooling layers that reduce the number of computational parameters. The GCN model is essentially similar to a Convolutional Neural Network (CNN) [20], i.e., it aggregates neighborhood information, but the difference is that the GCN model applies to data with a non-Euclidean structure. Because the GCN deals with graph structures, the data must be represented as multiple components during pre-processing, such as the adjacency matrix, the number of nodes and their information, and the number of edges and their information. Since its introduction, the GCN model has received a lot of attention from scholars and has been actively applied to various graph-data applications. Currently, the GCN model has been applied to blockchain, biochemistry, traffic prediction, computer vision, and other fields with promising results [21]. ## III Proposed Method Smart contracts run on the EVM, which first compiles the source code into bytecode and then executes the bytecode, so it is more realistic to use bytecode files as the basis for vulnerability detection.
Inspired by the success of deep learning in the fields of data mining [22, 23, 24, 2, 25], computer vision [26, 27], and related areas, this paper applies a graph convolutional network to smart contract vulnerability detection. The overall workflow is as follows: 1. Disassemble the bytecode file to obtain the opcode; 2. Divide the basic blocks, add dependent edges to the basic blocks to build the CFG, and use it as the input to the GCN; 3. Define the convolutional network layer of the GCN; 4. Add a pooling layer, a fully connected layer, etc. to build a complete GCN model. ### _Byte Code Analysis_ #### III-A1 Contract Bytecode Structure The source code of a smart contract is compiled to generate bytecode, which is divided into three parts: deployment code, runtime code, and auxdata. When the EVM builds a contract, it first creates the contract account, then runs the deployment code and deposits the two remaining parts, runtime code and auxdata, onto the blockchain; during the actual operation of the contract, it is the runtime code that runs. The last 43 bytes of each contract are the auxdata, which are saved following the runtime code. An example of the bytecode structure is given below, as shown in Fig. 2.
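To make this three-part layout concrete, the split is easy to perform on a raw bytecode hex string. The following is a minimal sketch based on the fixed 43-byte auxdata length described above; real compilers encode the exact auxdata length in its trailing bytes, so the fixed length here is an illustrative assumption:

```python
def split_runtime_bytecode(bytecode_hex: str, auxdata_len: int = 43):
    """Split runtime bytecode into executable code and trailing auxdata.

    Assumes the fixed 43-byte auxdata convention described above
    (illustrative; the exact length varies across compiler versions).
    """
    raw = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    return raw[:-auxdata_len], raw[-auxdata_len:]
```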
#### III-A2 Assembly Opcode The decompiled code can be obtained by disassembling the bytecode. The decompiled code consists of two parts: the instruction address and the instruction opcode. Since the smart contract only runs the runtime code part when it is executed, the decompilation operation on the bytecode only needs to operate on the runtime code part. Up to now, the EVM has used 145 opcodes, which can be divided, according to their functions, into arithmetic operation instructions, comparison operation instructions, bitwise operation instructions, cryptographic calculation instructions, stack, memory, and storage operation instructions, jump instructions, block and smart contract related instructions, etc. The specific opcode ranges are shown in Table I. An example of decompiling the bytecode is given below, as shown in Fig. 3. ### _Control Flow Graph Generation_ Building a CFG from the bytecode of a smart contract involves the following main steps. 1. Disassembling the hexadecimal bytecode file to obtain the corresponding assembly opcode. 2. Dividing the opcode into basic blocks according to the rules for building basic blocks. 3. Calculating the destination address of each basic block according to transfer instructions such as jump instructions and conditional instructions, and adding edges between the corresponding basic blocks, thus completing the construction of the control flow graph (CFG). 4. Based on the CFG, adding sequential dependent edges between sequentially executed basic blocks to complete the graph structure. Fig. 1: A description of the experimental procedure. The first part is the process of constructing the CFG, and the second part is the process of constructing the GCN model and inputting data for vulnerability prediction. \begin{table} \begin{tabular}{c|c|c} \hline **OPCODE** & **FUNCTION** & **EXAMPLE** \\ \hline 0x00 - 0x0B & Stopping and Arithmetic Operations & ADD, SUB, STOP, DIV \\ 0x10 - 0x1A & Comparison and Bitwise Logic Operations & GT, LT, ISZERO \\ 0x20 & Encryption & SHA3 \\ 0x30 - 0x3E & Environmental Information & ADDRESS, CALLER \\ 0x40 - 0x45 & Block Operations & BLOCKHASH, COINBASE \\ 0x50 - 0x5B & Storage and Execution & POP, JUMP, JUMPI \\ 0x60 - 0x7F & Push Operations & PUSH1 - PUSH32 \\ 0x80 - 0x8F & Copy Commands & DUP1 - DUP16 \\ 0x90 - 0x9F & Exchange Instructions & SWAP1 - SWAP16 \\ 0xA0 - 0xA4 & Logging Instructions & LOG0 - LOG4 \\ 0xF0 - 0xFF & System Commands & CALL, RETURN \\ \hline \end{tabular} \end{table} Table I: The classification of EVM opcodes and the functions of each category, with a few examples. Fig. 3: An example of decompiling the bytecode file of a smart contract to get the opcode. Fig. 2: The bytecode file of a smart contract consists of three parts: deployment code, runtime code, and auxdata. The above section has analyzed the bytecode of the smart contract and described how to get the opcode. The next section describes how to build the CFG. #### III-B1 Basic Block Division A basic block is a maximal sequence of instructions that can only be entered at its first instruction and exited at its last instruction. A code file can generate a graph structure by dividing the basic blocks and adding jump dependencies and sequential dependencies. The following are three basic principles for constructing a basic block.
1. If this instruction is the first instruction of a program or subroutine, the current basic block should be terminated and a new basic block should be opened with this instruction as its first instruction. 2. If this instruction is a jump statement or a branch statement, the instruction should be used as the last instruction of the current basic block, and the basic block should then be terminated. 3. If the instruction does not belong to the above two cases, it is added directly to the current basic block. An example of a bytecode file divided into basic blocks is given below, as shown in Fig. 4. Fig. 4: According to the three principles of dividing basic blocks, the opcode obtained by decompiling can be divided into several basic blocks. #### III-B2 CFG Structure Construction After the work of dividing the basic blocks is completed, it is necessary to add new edges to the basic blocks in combination with the assembly instructions, i.e., the jump relationships between the basic blocks. The complete graph structure, after adding sequential edges to the basic blocks divided in the above section, is shown in Fig. 5. ### _GCN Model_ #### III-C1 Convolutional Layer Definition The underlying equation of the GCN is shown in equation 1. \[H^{l+1}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{l}w^{l}) \tag{1}\] where \(H^{l}\) is the input feature of the \(l\)-th layer and \(H^{l+1}\) is the output feature. \(w^{l}\) is the linear transformation matrix, i.e., the weight matrix that the model needs to learn, and \(\sigma(\cdot)\) is a nonlinear activation function, such as ReLU or Sigmoid. \(\tilde{A}\) is the adjacency matrix with self-connections (hereafter referred to as the self-connected adjacency matrix), defined as shown in equation 2. \[\tilde{A}=A+I \tag{2}\] \(A\) is the adjacency matrix and \(I\) is the identity matrix. In the adjacency matrix, the elements at the diagonal positions represent the relationship between a node and itself, while the elements at the non-diagonal positions represent the relationships between different nodes. If a node is not connected to itself, the element at the diagonal position is 0. However, such a setting causes problems in subsequent calculations, i.e., it is impossible to distinguish between "the node itself" and "unconnected nodes" (both of which have 0 at the corresponding element position). An example is given below to better illustrate the definition of the matrix, as shown in Fig. 6. \(\tilde{D}\) is the degree matrix of the self-connected adjacency matrix, defined as shown in equation 3. \[\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij} \tag{3}\] The degree matrix is illustrated below using the data in Fig. 6, where \(\tilde{D}^{-\frac{1}{2}}\) is the element-wise inverse square root of the self-connected degree matrix, as shown in equation 4. \[\tilde{A}=\begin{bmatrix}1&1&1&1&0\\ 0&1&0&0&0\\ 0&0&1&1&0\\ 0&0&0&1&1\\ 1&0&0&0&1\end{bmatrix}\quad\tilde{D}=\begin{bmatrix}4&0&0&0&0\\ 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&2&0\\ 0&0&0&0&2\end{bmatrix}\quad\tilde{D}^{-\frac{1}{2}}=\begin{bmatrix}\frac{1}{2}&0&0&0&0\\ 0&1&0&0&0\\ 0&0&\frac{1}{\sqrt{2}}&0&0\\ 0&0&0&\frac{1}{\sqrt{2}}&0\\ 0&0&0&0&\frac{1}{\sqrt{2}}\end{bmatrix} \tag{4}\] #### III-C2 GCN Model Definition The model is used to predict the label \(\hat{y}\): when \(\hat{y}=1\), it indicates that there is some vulnerability; otherwise, it indicates that the smart contract is secure. The network model is described below; the specific network structure is shown in Fig. 7.
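To make the propagation rule in equation 1 concrete, the following is a minimal NumPy sketch of a single GCN layer with a ReLU activation; it is an illustration of equations 1-4, not the authors' implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), per eq. (1)-(3).
    A_tilde = A + np.eye(A.shape[0])          # self-connected adjacency (eq. 2)
    d = A_tilde.sum(axis=1)                   # row degrees (eq. 3)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}, as in eq. (4)
    return np.maximum(0.0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)
```

Stacking a few such layers, followed by pooling and a fully connected output, gives the vulnerability classifier described next.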
The network model consists of an input layer, an output layer, and some hidden layers, where the result of each layer's computation is fed into the activation function \(ReLU(\cdot)\). After several layers of computation, a prediction label is output by the output layer, where 1 indicates that the contract has some kind of vulnerability and 0 indicates that the contract is secure. The process of CFG construction has been described in the previous section; the adjacency matrix A and the node feature matrix X represent the corresponding CFG and serve as the input to the network model. Since the work in this paper does not involve natural-language processing of the operand part, the identity matrix is used as the node feature matrix X. In this paper, we detect smart contract vulnerabilities using network models containing from 1 to 4 hidden layers, examine the effect of the number of network layers on each evaluation metric, and analyze the reasons for the metric changes. ## IV Experimental Results and Analysis In this section, we first introduce the dataset for the experiments, and then describe the details and results of the experiments. ### _Smart Contracts Dataset_ The current public datasets of smart contracts are in the form of source code, so the smart contract source code files must be compiled to obtain the bytecode files, using the compiler version declared inside each contract. It should also be noted that different versions of the smart contract compiler are not compatible with each other, so the compilation process should strictly follow the declared compiler version to avoid version-related problems. In this paper, we compile the publicly available source code dataset and produce a dataset containing 1420 bytecodes, assigning a label to each sample (set to 1 if a vulnerability exists, otherwise set to 0); among them, 472 contain timestamp-dependent vulnerabilities. The dataset was divided into a training set and a test set in an 8:2 ratio. Fig. 5: After getting the basic blocks, we first add jump edges between the basic blocks according to the jump logic of the opcode, and then add sequential edges between the basic blocks according to the code running flow. Fig. 6: Definition of the \(\tilde{A}\) matrix. Fig. 7: The GCN model structure used in this paper uses network models with different numbers of hidden layers to predict contract vulnerabilities. ### _Experimental Results_ In this paper, four metrics are used to judge the effectiveness of the model for vulnerability prediction, namely Accuracy, Recall, Precision, and F1-score. TP, FN, FP, and TN are used to represent the classification of the prediction results: TP denotes contracts that are detected as vulnerable and actually have vulnerabilities; FN denotes contracts that are detected as secure but actually have vulnerabilities; FP denotes contracts that are detected as vulnerable but do not have vulnerabilities; and TN denotes contracts that are detected as secure and indeed have no vulnerabilities. Accuracy represents the ratio of the number of correctly detected contracts to the number of all contracts and is calculated as shown in equation 5. \[Accuracy=\frac{TP+TN}{TP+FN+FP+TN} \tag{5}\] Recall represents the ratio of the number of vulnerable contracts correctly detected to the number of all contracts containing vulnerabilities, and is calculated as shown in equation 6.
\[Recall=\frac{TP}{TP+FN} \tag{6}\] Precision represents the ratio of the number of contracts that are detected as vulnerable and actually have vulnerabilities to the number of all contracts detected as containing vulnerabilities, and is calculated as shown in equation 7. \[Precision=\frac{TP}{TP+FP} \tag{7}\] The F1-score is a comprehensive assessment metric that balances precision and recall; it is the harmonic mean of the two and is calculated as shown in equation 8. \[F1\text{-}score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{8}\] During the experiment, the number of layers of the GCN model was changed to observe the changes in each metric, and the results are shown in Fig. 8 (for the convenience of graphing, each result is multiplied by 100). Fig. 8: Results of ablation experiments. From the experimental results, it is clear that the accuracy and F1-score show an overall decreasing trend as the number of network layers increases. It is also easy to observe a larger decrease in recall in the network models with 5 and 6 layers, and a small decrease in accuracy in the network model with 6 layers. According to the structure of neural networks, the more hidden layers there are, in addition to the input and output layers, the more significant the nonlinearity. In the process of model learning, which is the process of adjusting and optimizing the weights and thresholds of each connection, the neurons in a later layer receive abstracted data from the processed neurons in the previous layer, so the higher the number of layers of the network model, the higher its level of abstraction, and it will show better results on some specific tasks. However, for this problem, an excessively deep network is clearly not needed, and the experimental results make clear that the predictions of a 3- or 4-layer network are more informative. In addition, the work in this paper does not incorporate a semantic processing part, so the features of the graph nodes are not well characterized, which is presumably the reason for the low precision and F1-score. The following measures may improve the prediction accuracy of the model. 1. Add a semantic feature processing part. After decompiling to obtain the decompiled code, adding natural language processing can produce better node feature data and make the prediction results more accurate. 2. Further classify the edges of the control flow graph, for example into conditional jump edges and sequential jump edges, to enrich the feature data. Existing research is uneven in the part that generates feature data, and node feature data that can describe node semantics has a relatively large impact on the model's prediction results.
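The four metrics in equations (5)-(8) follow directly from the confusion counts; a minimal sketch:

```python
def classification_metrics(tp: int, fn: int, fp: int, tn: int):
    # Equations (5)-(8): accuracy, recall, precision, and F1-score.
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1
```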
### _Experiment Comparison_ This paper lists the results of other vulnerability detection methods for smart contracts, of which three are based on non-deep learning: the smart contract automatic audit tool Oyente [13], the symbolic-execution-based inspection tool Mythril [43], and the static analysis tool SmartCheck [44]; and two are deep-learning-based methods: LSTM and GRU. The specific comparison results are shown in Table II. Among these methods, the one that achieves the best results is Mythril, with an accuracy rate of 61.08%. The reason is that its detection principle is relatively complex and requires taint analysis and other related techniques, so it can achieve better results. The accuracy of the deep-learning-based vulnerability detection methods is slightly above 50%, but the GCN model used in this paper achieves better results. It obtains promising results using only a basic network model and a basic graph construction strategy, which is sufficient to show the effectiveness of using smart contract bytecode files and a GCN model for vulnerability detection. Optimizing the feature data and the network model on this basis is likely to yield even better results. ## V Conclusion This paper introduces a method of applying graph neural networks to smart contract vulnerability detection, and the experimental results show that vulnerability detection using bytecode is a feasible approach. When constructing the network model, it is important to choose an appropriate network depth and not blindly increase the number of hidden layers. Subsequent research should focus on how to generate the graph structures: whether using smart contract source code or smart contract bytecode, the feature data should better express the invocation relationships between functions, the execution process of the contract, and the semantics of the contract instructions. On this basis, with a suitable graph neural network model, the prediction results can be further improved. ## Acknowledgments The research work of this paper was supported by the National Natural Science Foundation of China (No. 62177022, 61901165, 61501199), Collaborative Innovation Center for Informatization and Balanced Development of K-12 Education by MOE and Hubei Province (No. xtzd2021-005), and Self-determined Research Funds of CCNU from the Colleges' Basic Research and Operation of MOE (No. CCNU22QN013).
2304.11963
Optimal Design of Neural Network Structure for Power System Frequency Security Constraints
Recently, frequency security has been challenged by high uncertainty and low inertia in power systems with high penetration of Renewable Energy Sources (RES). In the context of Unit Commitment (UC) problems, frequency security constraints represented by neural networks have been developed and embedded into the optimization problem to represent complicated frequency dynamics. However, there are two major disadvantages related to this technique: the risk of overconfident prediction and poor computational efficiency. To handle these disadvantages, novel methodologies are proposed to optimally design the neural network structure, including the use of an asymmetric loss function during the training stage and the principled selection of neural network size and topology. The effectiveness of the proposed methodologies is validated by a case study, which reveals improved conservativeness and mitigated computational performance issues.
Zhuoxuan Li, Zhongda Chu, Fei Teng
2023-04-24T09:59:28Z
http://arxiv.org/abs/2304.11963v2
# Optimal Design of Neural Network Structure for Power System Frequency Security Constraints ###### Abstract Recently, frequency security has been challenged by high uncertainty and low inertia in power systems with high penetration of Renewable Energy Sources (RES). In the context of Unit Commitment (UC) problems, frequency security constraints represented by neural networks have been developed and embedded into the optimization problem to represent complicated frequency dynamics. However, there are two major disadvantages related to this technique: the risk of overconfident prediction and poor computational efficiency. To handle these disadvantages, novel methodologies are proposed to optimally design the neural network structure, including the use of an asymmetric loss function during the training stage and the principled selection of neural network size and topology. The effectiveness of the proposed methodologies is validated by a case study, which reveals improved conservativeness and mitigated computational performance issues. frequency security constraints, unit commitment, neural network, mixed-integer linear programming ## I Introduction As achieving net zero is globally on the agenda, power systems are expected to increase the penetration of RES. However, characterised by high uncertainty and low inertia, RES challenge power system frequency security. In the context of the UC problem, frequency security is traditionally considered through static approaches such as setting specific requirements for reserve, inertia, or kinetic energy [1]. Due to their vague representation of frequency dynamics, static approaches usually leave an excessive security margin, leading to significantly uneconomical dispatch plans. With the help of mathematical models such as the multi-machine swing equation, frequency security constraints can be constructed by analytical approaches with a focus on certain frequency security indices, including the frequency nadir and the Rate of Change of Frequency (RoCoF) [2]. For example, [3] gives an expression for the post-fault frequency trajectory, and its methodology is extended in [4], where the damping effect is taken into consideration. In [5], a more detailed expression of the post-fault frequency trajectory is derived, treating frequency dynamics as a closed-loop problem, and nonlinear expressions of the frequency security indices are incorporated into UC through piecewise linearization. Although analytical approaches provide more realistic descriptions of frequency dynamics, their accuracy is limited for the following reasons. Firstly, it is almost impossible for analytical approaches to capture the frequency at different buses, and the frequency at the Center Of Inertia (COI), discussed in the multi-machine swing equation, may not accurately represent the frequency at different buses, especially when the inertia distribution is uneven. Secondly, turbine dynamics and governor controllers contain highly nonlinear and nonconvex components, which are difficult to incorporate into optimization problems analytically. On the other hand, many data-driven techniques have been widely discussed for their applications in power system security analysis [6], and exploiting these techniques to formulate frequency security constraints is a rising research topic. This technique was initially introduced in [7] to identify frequency security conditions during microgrid islanding events.
Later in [8], optimal classification trees, another data-driven approach, are utilised to predict the frequency nadir, RoCoF, and quasi-steady-state frequency, and are also integrated into the UC problem with some manipulations. A philosophy similar to [6] is further extended to large-scale power systems, and the high computational cost associated with the labeling bottleneck is mitigated by an active sampling algorithm [9]. Logistic regression has also been applied to construct frequency security constraints [10] in the context of robust UC. According to the comparison in [9], the neural network has the best performance among many machine learning models in terms of prediction accuracy, and is thus the most promising model to fulfill the potential of data-driven approaches. However, there are few publications discussing how such neural networks should be designed. In fact, the application of neural networks in frequency security formulations is challenged by at least the two factors below. Primarily, the conservativeness is not rigorously analysed, leading to risks of unacceptable constraint violations and related security issues. Besides, the computational cost of solving UC with neural-network-embedded constraints is significantly high. To deal with the challenges above, this paper studies the optimal design of the neural network structure. The main contributions of this paper are twofold: 1) the application of an asymmetric loss function to obtain a more conservative predictor, considering the cost sensitivity of frequency security prediction, and 2) the determination of the neural network size to balance the trade-off between prediction accuracy and computation cost. The rest of this paper is organized as follows. Section II introduces the formulation of the problem, i.e., UC with neural-network-based frequency security constraints. Methodologies with respect to the optimal structural design of neural networks are elaborated in Section III. Section IV shows the effectiveness of the proposed methods through a case study, followed by conclusions drawn in Section V. ## II Problem Statement Given the predicted outputs of RES, UC is usually formulated as a mixed-integer linear programming (MILP) problem whose solution gives the optimal status and dispatch of synchronous generators. Consider a general day-ahead hourly dispatch scenario where the objective function, i.e., the total system operation cost, is: \[\min\sum_{t=1}^{T}\sum_{g=1}^{N_{g}}(\pi_{g}^{F}u_{g,t}+\pi_{g}^{C}P_{g,t}^{G})+\sum_{t=2}^{T}\sum_{g=1}^{N_{g}}\pi_{g}^{U}\nu_{g,t}, \tag{1}\] where \(P_{g,t}^{G}\), \(u_{g,t}\), and \(\nu_{g,t}\) are respectively the active power output and the binary variables denoting the online status and starting-up state of generator \(g\) at time step \(t\); \(\pi_{g}^{C}\), \(\pi_{g}^{F}\), and \(\pi_{g}^{U}\) are the associated costs. Conventional UC constraints related to power flow, power balance, and voltage magnitudes, as well as minimum up and down time, power output, and ramp limits of synchronous generators, are omitted here; [7, 11, 12] can be consulted for details. Data-driven frequency security constraints are transformed from a frequency security predictor, which predicts the value of a certain frequency security index of a dispatch. For ease of demonstration and comparison, the methodologies proposed in [9] are adopted here.
Specifically, the predictor to be built is a neural network regressor that estimates the frequency nadir (minimum over all buses) after the synchronous generator with maximum output power is suddenly disconnected from the network, following the 'N - 1' principle. Note that the inputs of the predictor (features) must be derived from static decision variables of UC rather than transient quantities. Based on multi-machine swing equations, the following variables are selected to formulate the feature vector [9, 13]: \[\mathbf{x}_{t}=[u_{1,t}^{G},\ldots,u_{N_{g},t}^{G},0,\ldots,0,P_{g_{max,t}}^{G},0,\ldots,0], \tag{2}\] where \(\mathbf{x}_{t}\) is the feature vector at time step \(t\) and \(g_{max}\) is the index of the synchronous generator with maximum active power output, i.e., \[g_{max,t}=\underset{g}{argmax}\,P_{g,t}^{G}\quad g=1,\ldots,N_{g},\forall t. \tag{3}\] It is apparent that there are \(2N_{g}\) elements in the feature vector: the first \(N_{g}\) elements are the operating states of all synchronous generators, while the next \(N_{g}\) elements are zero except for the \(N_{g}+g_{max}\)-th element, whose value is the maximum active power output of all synchronous generators, as illustrated by (3). These elements are selected because they dominate the frequency trajectory after the disconnection of the synchronous generator with maximum power output. The multilayer perceptron, the most basic neural network structure, is applied for building the frequency security predictor. As shown in Fig. 1, it is basically a linear model except for the nonlinear activation functions in the neurons of the hidden layers. Fortunately, if ReLU, a special piecewise linear activation function defined as (4), is adopted, the entire network can be linearized and transformed into linear constraints with the manipulations in [14]. In fact, the linear expressions (5a)-(5e) are equivalent to (4), where \(\underline{h}\) and \(\overline{h}\) satisfy \(\underline{h}<Z<\overline{h}\) and \(\underline{h}<0<\overline{h}\), with \(\underline{h}\) and \(\overline{h}\) being the big-M constants. \[z=\text{ReLU}(Z)=\max\{0,Z\}=\begin{cases}0&Z<0\\ Z&Z\geq 0\end{cases} \tag{4}\] \[z\leq Z-\underline{h}(1-a) \tag{5a}\] \[z\geq Z\] (5b) \[z\leq \overline{h}a\] (5c) \[z\geq 0\] (5d) \[a\in \{0,1\} \tag{5e}\] In this way, the ReLU activation function can be transformed into linear constraints for a MILP problem. The definition of \(g_{max,t}\) in (3) is also linearized as follows [9]: \[P_{g_{max,t}}^{G}-P_{g,t}^{G} \leq\Gamma(1-\mu_{g,t}),\quad g=1,\ldots,N_{g},\forall t \tag{6a}\] \[\sum_{g=1}^{N_{g}}\mu_{g,t}=1 \quad\forall t, \tag{6b}\] where \(\mu_{g,t}\) is an auxiliary binary variable and \(\Gamma\) is also a big-M constant, always greater than the maximum output power of all synchronous generators. The other components of the neural network predictor are already linear, and the complete frequency security constraints are listed below.
\[\mu_{g,t},\mu_{g_{max,t}}\in\{0,1\}\quad g,g_{max}=1,\ldots,N_{g},\forall t \tag{7a}\] \[\mathbf{x}_{t}(g)=u_{g,t}\quad g=1,\ldots,N_{g},\forall t \tag{7b}\] \[P_{g_{max,t}}^{G}-P_{g,t}^{G}\leq\Gamma(1-\mu_{g,t})\quad g,g_{max}=1,\ldots,N_{g},\forall t \tag{7c}\] \[\sum_{g=1}^{N_{g}}\mu_{g,t}=1\quad\forall t \tag{7d}\] \[\mathbf{x}_{t}(N_{g}+g)-P_{g,t}^{G}\geq-\Gamma(1-\mu_{g,t})\quad g=1,\ldots,N_{g},\forall t \tag{7e}\] \[\mathbf{x}_{t}(N_{g}+g)-P_{g,t}^{G}\leq\Gamma(1-\mu_{g,t})\quad g=1,\ldots,N_{g},\forall t \tag{7f}\] \[0\leq\mathbf{x}_{t}(N_{g}+g)\leq\Gamma\mu_{g,t}\quad g=1,\ldots,N_{g},\forall t \tag{7g}\] Feature vectors are constrained by (7a)-(7g), where \(\mathbf{x}_{t}(m)\) is the \(m\)-th element of feature vector \(\mathbf{x}_{t}\). \[Z_{1,n,t}=\mathbf{w}_{1,n,t}\cdot\mathbf{x}_{t}+b_{1,n,t}\quad\forall n,t \tag{8a}\] \[Z_{l,n,t}=\mathbf{w}_{l,n,t}\cdot\mathbf{z}_{l-1,t}+b_{l,n,t}\quad\forall n,l,t \tag{8b}\] \[z_{l,n,t}\leq Z_{l,n,t}-\underline{h}_{l,n}(1-a_{l,n,t})\quad\forall n,l,t \tag{8c}\] \[z_{l,n,t}\geq Z_{l,n,t}\quad\forall n,l,t \tag{8d}\] \[z_{l,n,t}\leq\overline{h}_{l,n}a_{l,n,t}\quad\forall n,l,t \tag{8e}\] \[z_{l,n,t}\geq 0\quad\forall n,l,t \tag{8f}\] \[a_{l,n,t}\in\{0,1\}\quad\forall n,l,t \tag{8g}\] Constraints (8a)-(8g) propagate the features through the network, with (8c)-(8g) being the layer-wise version of the big-M ReLU encoding (5a)-(5e). Fig. 1: Structure of Neural Network Frequency Security Predictor ## III Optimal Design of Neural Network Structure ### _Asymmetric Loss Function_ Considering the cost sensitivity of frequency security prediction, the predictor is trained with the following asymmetric loss function: \[L(y,\hat{y})=\begin{cases}C^{+}(\hat{y}-y)^{2}&\hat{y}\geq y\\ C^{-}(\hat{y}-y)^{2}&\hat{y}<y\end{cases} \tag{12}\] It is obvious that the factors \(C^{+}\) and \(C^{-}\) are involved to make a predictor conservative or aggressive. One can let \(C^{+}>C^{-}>0\) so that overestimation of the frequency nadir, namely \(\hat{y}>y\), is punished more during the training process, and the conservativeness can be controlled by adjusting the factors \(C^{+}\) and \(C^{-}\). Note that the improved conservativeness will deteriorate the prediction accuracy, which is discussed later in Section IV. ### _Design of Neural Network Size and Topology_ The size and topology of the neural network determine not only the accuracy of the predictor, but also the computation cost of solving the UC with neural network constraints. According to the formulations in Section II, each neuron is assigned an auxiliary binary variable representing its activation state. Hence, the numbers of integer variables and constraints increase dramatically as the size of the neural network grows. This can lead to poor computational tractability, especially when solving large-scale problems like UC, as acknowledged in [18, 19]. It is necessary to take both accuracy and computational tractability into consideration when designing the neural network size and topology. Unfortunately, such design mainly relies on experiments and empirical observations. The effects of neural network size and topology are studied in two separate experiments: one with fixed topology but varying sizes, and another with fixed size but varying topology. Accuracy and computation performance are recorded and compared in Subsection IV-C. ## IV Case Study ### _Dataset Generation_ A dataset of post-fault frequency trajectories is essential to construct data-driven frequency security constraints. The New England 39-bus system, a benchmark for power system stability studies [20], is modified to represent a power system with high RES penetration. Specifically, four 900MW wind farms consisting of doubly-fed induction generators (DFIGs) are connected at buses 2, 10, 20, and 25, respectively, as displayed in Fig. 2. The parameters of this system can be found in [21]. The dynamic simulation for generating the dataset is performed in MATLAB/Simulink R2021a. Full-order models of the synchronous generators are implemented in the simulations, while the wind farms are modelled as aggregated DFIGs.
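As an implementation note, the asymmetric loss in equation (12), used to train the predictors below, is straightforward to prototype. The following NumPy sketch uses illustrative cost factors and is not the authors' exact training code:

```python
import numpy as np

def asymmetric_l2_loss(y_true, y_pred, c_plus=5.0, c_minus=1.0):
    # Equation (12): overestimation (y_pred >= y_true) is penalized by C+,
    # underestimation by C-; C+ > C- > 0 yields a more conservative predictor.
    err = y_pred - y_true
    weights = np.where(err >= 0.0, c_plus, c_minus)
    return float(np.mean(weights * err ** 2))
```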
Because the contingency is always the disconnection of the synchronous generator with maximum power output, the post-fault frequency trajectories are determined by the steady-state operating conditions before the contingency. Given the network parameters (network topology and line admittances) and operating parameters (load distribution, generator capacities, etc.) of the power system, samples are generated by solving load flow equations whose inputs are controllable and random variables. These parameters are properly designed so that the samples in the dataset are strongly representative. Each sample represents a load flow case, i.e., it contains all the essential information of a steady-state operating mode. The generated samples are set up to simulate post-fault frequency trajectories after the loss of the synchronous generator with maximum active power output. After filtering out unconverged trajectories, every sample is labelled with the minimum frequency nadir measured across all buses. Fig. 3 summarizes the procedure of dataset generation. ### _Solution of UC_ In total, 1676 samples are generated and simulated, and the obtained dataset is used to train the neural network predictor. To validate the performance of different loss functions, neural network frequency security predictors with 256 neurons in a single hidden layer but different loss functions are trained with Tensorflow 2.4.0. Table I compares the performance of these predictors, where MAE denotes the mean absolute error. It can be observed that the L2 loss functions slightly outperform the L1 ones in terms of prediction accuracy. Moreover, the increase in conservativeness leads to decreased accuracy, so a compromise has to be made between conservativeness and accuracy. The frequency security constrained UC is ready to be solved after the neural network predictor is transformed into constraints. In a UC problem modelling 24-hour-ahead dispatch scheduling, the total active load is set to vary between 3GW and 7GW. The frequency security criterion is whether the frequency drops by more than 0.8Hz from the 50Hz nominal frequency. Fig. 4 presents the solutions of UC in the following three cases * no frequency security constraints * symmetric loss \(C^{+}/C^{-}=1\) * asymmetric loss \(C^{+}/C^{-}=5\) In Fig. 4, the x-axis denotes the index of the synchronous generator and the y-axis denotes the hourly time step. Uncommitted synchronous generators are presented as circles with an empty interior, while committed ones are marked as circles with a filled interior. Moreover, the opacity of a filled circle indicates the ratio between the committed active power and the maximum active power of this generator. Compared with case (a), there are many more committed synchronous generators in case (b), significantly increasing the frequency security margin. A similar dispatch plan is witnessed in the result of case (c), yielding an even larger frequency security margin. Considering the setups of the cases listed above, the data-driven neural network frequency security predictor has successfully guided the dispatch planning, and the proposed application of the asymmetric loss function does bring more conservative results. ### _Computation Performance_ The UC problem is solved on a PC with an AMD Ryzen 5 2500U 2.00 GHz CPU and 8GB RAM. Dual simplex is deployed in Gurobi, a common solver for MILP. Dual simplex is a duality-based algorithm: it consecutively searches for lower upper bounds and higher lower bounds throughout the solving process.
Every upper bound is linked with a potential optimal solution, and a lower bound is a validated minimum of the objective function. The solving process finishes when the gap between the lowest upper bound and the highest lower bound, reported as the MIPGap, shrinks to zero. With all neurons except the output neuron placed in a single hidden layer, Table II shows the relationship between accuracy and computation performance for different neural network sizes. A large neural network size improves the accuracy, but the computation performance becomes undesirable when there are 8 or more neurons. Actually, when there are more than 32 neurons in the hidden layers, it takes hours to completely solve the UC problem. The required time for completely solving UC surges with the neural network size, such that only a suboptimal solution, or a not-yet-completely-validated optimal solution, can be found within 1000s. Therefore, rather than using large-size neural networks, small-size neural networks are preferred, especially when the accuracy is good enough. For example, an acceptable criterion can be MAE < 0.05Hz, which needs only 32 neurons. Likewise, Table III presents how topology influences prediction accuracy and computation time, based on a neural network with 32 neurons placed in one or more hidden layers. It is obvious that the single-hidden-layer topology has a smaller MIPGap after 1000s. In fact, decision variables in a multi-layer topology are much more coupled than those in a single-layer topology. Stronger coupling requires more computational effort, and hence the single-hidden-layer topology is the most computationally efficient. Although it may not be the most accurate, the other topologies do not significantly outperform it. Similar phenomena are also observed in neural networks with other sizes, so a single hidden layer is recommended for this particular problem in practice. \begin{table} \begin{tabular}{c c c c c} \hline \hline Loss Function Type & Asymmetric Cost Ratio \(C^{+}/C^{-}\) & Proportion of Conservative Prediction & MAE & \(R^{2}\) \\ \hline L1 & 1 (symmetric) & 48.98\% & 0.0337Hz & 0.9588 \\ L1 & 5 & 65.31\% & 0.0351Hz & 0.9553 \\ L2 & 1 (symmetric) & 48.35\% & 0.0343Hz & 0.9651 \\ L2 & 5 & 69.16\% & 0.0436Hz & 0.9599 \\ \hline \hline \end{tabular} \end{table} TABLE I: Performance of neural network frequency security predictors trained with different loss functions Fig. 4: Results of UC in three different cases Fig. 3: Procedure of Dataset Generation Fig. 2: Diagram of the Modified New England 39-bus Power System ## V Conclusions This paper discusses the structural design of neural networks used as frequency security constraints. The prediction of frequency security is treated as a cost-sensitive problem, and the high computational cost of solving UC with elaborate neural network constraints is identified. Accordingly, methodologies are proposed to tackle these challenges when designing the neural network. An asymmetric loss function is applied during the training of the neural network to improve the conservativeness of the predictor and the transformed frequency security constraints. Meanwhile, properly sizing the neural network can mitigate the poor computational performance by reducing the number of integer variables and constraints. The effectiveness of the proposed methodologies is validated by the case study, which reveals that the asymmetric loss function can indeed yield more conservative dispatch.
When maximum accuracy is pursued, the computational performance of solving UC is poor. However, reducing the size of the neural network and adopting a single-hidden-layer topology are shown to be practical methods to decrease the computational cost while maintaining the accuracy at a satisfactory level.
2305.13508
DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation
Formal certification of Neural Networks (NNs) is crucial for ensuring their safety, fairness, and robustness. Unfortunately, on the one hand, sound and complete certification algorithms of ReLU-based NNs do not scale to large-scale NNs. On the other hand, incomplete certification algorithms are easier to compute, but they result in loose bounds that deteriorate with the depth of the NN, which diminishes their effectiveness. In this paper, we ask the following question: can we replace the ReLU activation function with one that opens the door to incomplete certification algorithms that are easy to compute but can produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class of NNs with activation functions based on Bernstein polynomials instead of the commonly used ReLU activation. Bernstein polynomials are smooth and differentiable functions with desirable properties such as the so-called range enclosure and subdivision properties. We design a novel algorithm, called Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs. Our approach leverages the properties of Bernstein polynomials to improve the tractability of neural network certification tasks while maintaining the accuracy of the trained networks. We conduct comprehensive experiments in adversarial robustness and reachability analysis settings to assess the effectiveness of the proposed Bernstein polynomial activation in enhancing the certification process. Our proposed framework achieves high certified accuracy for adversarially-trained NNs, which is often a challenging task for certifiers of ReLU-based NNs. Moreover, using Bern-IBP bounds for certified training results in NNs with state-of-the-art certified accuracy compared to ReLU networks. This work establishes Bernstein polynomial activation as a promising alternative for improving NN certification tasks across various applications.
Haitham Khedr, Yasser Shoukry
2023-05-22T21:52:57Z
http://arxiv.org/abs/2305.13508v1
DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation ###### Abstract Formal certification of Neural Networks (NNs) is crucial for ensuring their safety, fairness, and robustness. Unfortunately, on the one hand, sound and complete certification algorithms of ReLU-based NNs do not scale to large-scale NNs. On the other hand, incomplete certification algorithms--based on propagating input domain bounds to bound the outputs of the NN--are easier to compute, but they result in loose bounds that deteriorate with the depth of the NN, which diminishes their effectiveness. In this paper, we ask the following question: can we replace the ReLU activation function with one that opens the door to incomplete certification algorithms that are easy to compute but can produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class of NNs with activation functions based on Bernstein polynomials instead of the commonly used ReLU activation. Bernstein polynomials are smooth and differentiable functions with desirable properties such as the so-called range enclosure and subdivision properties. We design a novel Interval Bound Propagation (IBP) algorithm, called Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs. Our approach leverages the properties of Bernstein polynomials to improve the tractability of neural network certification tasks while maintaining the accuracy of the trained networks. We conduct comprehensive experiments in adversarial robustness and reachability analysis settings to assess the effectiveness of the proposed Bernstein polynomial activation in enhancing the certification process. Our proposed framework achieves high certified accuracy for adversarially-trained NNs, which is often a challenging task for certifiers of ReLU-based NNs. Moreover, using Bern-IBP bounds for certified training results in NNs with state-of-the-art certified accuracy compared to ReLU networks. This work establishes Bernstein polynomial activation as a promising alternative for improving neural network certification tasks across various NN applications. The code for DeepBern-Nets is publicly available1. Footnote 1: [https://github.com/rcpsl/DeepBern-Nets](https://github.com/rcpsl/DeepBern-Nets) ## 1 Introduction Deep neural networks (NNs) have revolutionized numerous fields with their remarkable performance on various tasks, ranging from computer vision and natural language processing to healthcare and robotics. As these networks become integral components of critical systems, ensuring their safety, security, fairness, and robustness is essential. It is unsurprising, then, that there is growing interest in the field of certified machine learning, which has resulted in NNs with enhanced levels of robustness to adversarial inputs [1; 2; 3; 4], fairness [5; 6; 7; 8], and correctness [9]. While certifying the robustness, fairness, and correctness of NNs with respect to formal properties has been shown to be NP-hard [10], state-of-the-art certifiers rely on computing upper/lower bounds on the output of the NN and its intermediate layers [11; 12; 13; 14; 15]. Accurate bounds can significantly reduce the complexity and computational effort required during the certification process, facilitating more efficient and dependable evaluations of the network's behavior in diverse and challenging scenarios.
Moreover, computing such bounds has opened the door for a new set of "certified training" algorithms [16; 17; 18] where these bounds are used as a regularizer that penalizes the worst-case violation of robustness or fairness, which leads to training NNs with favorable properties. While computing such lower/upper bounds is crucial, current techniques for computing lower/upper bounds on the NN outputs are either computationally efficient but result in loose lower/upper bounds, or compute tight bounds but are computationally expensive. In this paper, we are interested in algorithms that can be both computationally efficient and lead to tight bounds. This work follows a Design-for-Certifiability approach, where we ask the question: can we replace the ReLU activation function with one that allows us to compute tight upper/lower bounds efficiently? Introducing such novel activation functions designed with certifiability in mind makes it possible to create NNs that are easier to analyze and certify during their training. Our contributions in this paper can be summarized as follows: 1. We introduce DeepBern-Nets, a NN architecture with a new activation function based on Bernstein polynomials. Our primary motivation is to shift some of the computational effort from the certification phase to the training phase. By employing this approach, we can train NNs with known output (and intermediate) bounds for a predetermined input domain, which can accelerate the certification process. 2. We present Bern-IBP, an Interval Bound Propagation (IBP) algorithm that computes tight bounds on DeepBern-Nets, leading to an efficient certifier. 3. We show that Bern-IBP can certify the adversarial robustness of adversarially-trained DeepBern-Nets on the MNIST and CIFAR-10 datasets, even with large architectures with millions of parameters. This is unlike state-of-the-art certifiers for ReLU networks, which often fail to certify robustness for adversarially-trained ReLU NNs. 4. We show that employing Bern-IBP during the training of DeepBern-Nets yields high certified robustness on the MNIST and CIFAR-10 datasets, with robustness levels that are comparable to--or in many cases surpassing--the performance of the most robust ReLU-based NNs reported in the SOK benchmark. We believe that our framework, DeepBern-Nets and Bern-IBP, enables more reliable guarantees on NN behavior and contributes to the ongoing efforts to create safer and more secure NN-based systems, which is crucial for the broader deployment of deep learning in real-world applications. ## 2 DeepBern-Nets: Deep Bernstein Polynomial Networks ### 2.1 Bernstein polynomials preliminaries Bernstein polynomials form a basis for the space of polynomials on a closed interval [19]. These polynomials have been widely used in various fields, such as computer-aided geometric design [19], approximation theory [20], and numerical analysis [21], due to their unique properties and intuitive representation of functions. A general polynomial of degree \(n\) in Bernstein form on the interval \([l,u]\) can be represented as: \[P_{n}^{[l,u]}(x)=\sum_{k=0}^{n}c_{k}b_{n,k}^{[l,u]}(x),\qquad x\in[l,u] \tag{1}\] where \(c_{k}\in\mathbb{R}\) are the coefficients associated with the Bernstein basis \(b_{n,k}^{[l,u]}(x)\), defined as: \[b_{n,k}^{[l,u]}(x)=\frac{\binom{n}{k}}{(u-l)^{n}}(x-l)^{k}(u-x)^{n-k}, \tag{2}\] with \(\binom{n}{k}\) denoting the binomial coefficient.
The Bernstein coefficients \(c_{k}\) determine the shape and properties of the polynomial \(P_{n}^{[l,u]}(x)\) on the interval \([l,u]\). It is important to note that, unlike polynomials represented in power basis form, the representation of a polynomial in Bernstein form depends on the domain of interest \([l,u]\), as shown in equation 1. ### 2.2 Neural Networks with Bernstein activation functions We propose using Bernstein polynomials as non-linear activation functions \(\sigma\) in feed-forward NNs. We call such NNs DeepBern-Nets. Like feed-forward NNs, DeepBern-Nets consist of multiple layers, each consisting of linear weights followed by non-linear activation functions. Unlike conventional activation functions (e.g., ReLU, sigmoid, tanh), Bernstein-based activation functions are parametrized with learnable Bernstein coefficients \(\mathbf{c}=c_{0},\ldots,c_{n}\), i.e., \[\sigma(x;l,u,\mathbf{c})=\sum_{k=0}^{n}c_{k}b_{n,k}^{[l,u]}(x),\qquad x\in[l,u], \tag{3}\] where \(x\) is the input to the neuron activation, and the polynomial degree \(n\) is an additional hyperparameter of the Bernstein activation that can be chosen differently for each neuron. Figure 1 shows a simplified computational graph of the Bernstein activation and how it is used to replace conventional activation functions. Training of DeepBern-Nets.Since Bernstein polynomials are defined on a specific domain (equation 2), we need to determine the lower and upper bounds (\(\mathbf{l}^{(k)}\) and \(\mathbf{u}^{(k)}\)) of the inputs to the Bernstein activation neurons in layer \(k\) during the training of the network. To that end, we assume that the input domain \(\mathcal{D}\) is bounded, with the lower and upper bounds (denoted as \(\mathbf{l}^{(0)}\) and \(\mathbf{u}^{(0)}\), respectively) known during training. We emphasize that our assumption that \(\mathcal{D}\) is bounded and known is not conservative, as the input to the NN can always be normalized to \([0,1]\), for example. Using the bounds on the input domain \(\mathbf{l}^{(0)}\) and \(\mathbf{u}^{(0)}\) and the learnable parameters of the NNs (i.e., the weights of the linear layers and the Bernstein coefficients \(\mathbf{c}\) for each neuron), we will update the bounds \(\mathbf{l}^{(k)}\) and \(\mathbf{u}^{(k)}\) with each step of training by propagating \(\mathbf{l}^{(0)}\) and \(\mathbf{u}^{(0)}\) through all the layers in the network. Unlike conventional non-linear activation functions, where symbolic bound propagation relies on linear relaxation techniques [22; 23], the Bernstein polynomial enclosure property allows us to bound the output of an \(n\)-th order Bernstein activation in \(\mathcal{O}(n)\) operations (Algorithm 1-line 12). We start by reviewing the enclosure property of Bernstein polynomials as follows. **Property 1** (Enclosure of Range [24]).: The enclosure property of Bernstein polynomials states that for a given polynomial \(P_{n}^{[l,u]}(x)\) of degree \(n\) in Bernstein form on an interval \([l,u]\), the polynomial lies within the convex hull of its Bernstein coefficients. In other words, the Bernstein polynomial is bounded by the minimum and maximum values of its coefficients \(c_{k}\), regardless of the input \(x\). \[\min_{0\leq k\leq n}c_{k}\leq P_{n}^{[l,u]}(x)\leq\max_{0\leq k\leq n}c_{k},\qquad\forall x\in[l,u]. \tag{4}\] Algorithm 1 outlines how to use the enclosure property to propagate the bounds from one layer to another for a single training step in an L-layer DeepBern-Net. Figure 1: (Left) shows the structure of a DeepBern-Net with two hidden layers. DeepBern-Nets are similar to feed-forward NNs except that the activation function is a Bernstein polynomial. (Right) shows a simplified computational graph of a degree \(n\) Bernstein activation. The Bernstein basis is evaluated at the input \(x\) using \(l\) and \(u\) computed during training, and the output is then computed as a linear combination of the basis functions weighted by the learnable Bernstein coefficients \(c_{k}\).
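As a quick illustration of Property 1, the following NumPy sketch evaluates a polynomial in Bernstein form (equations 1-2) and checks that its values stay within the interval spanned by its coefficients; the random coefficients are illustrative:

```python
import numpy as np
from math import comb

def bernstein_eval(x, c, l=0.0, u=1.0):
    # P(x) = sum_k c_k * C(n,k) * (x-l)^k * (u-x)^(n-k) / (u-l)^n  (eq. 1-2)
    n = len(c) - 1
    basis = np.array([comb(n, k) * (x - l) ** k * (u - x) ** (n - k)
                      / (u - l) ** n for k in range(n + 1)])
    return float(np.dot(c, basis))

rng = np.random.default_rng(0)
c = rng.uniform(-2.0, 2.0, size=9)  # a degree-8 activation's coefficients
vals = [bernstein_eval(x, c) for x in np.linspace(0.0, 1.0, 1001)]
assert c.min() <= min(vals) and max(vals) <= c.max()  # enclosure (eq. 4)
```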
In contrast to normal training, we calculate the worst-case bounds for the inputs to all Bernstein layers by propagating the bounds from the previous layers. Such bound propagation can be done for linear layers using interval arithmetic [25]--referred to in Algorithm 1-line 16 as Interval Bound Propagation (IBP)--or using Property 1 for Bernstein layers (Algorithm 1-line 12). We store the resulting bounds for each Bernstein activation function. Then, we perform the regular forward step. The parameters are then updated using vanilla backpropagation, just like in conventional NNs. During inference, we directly use the stored layer-wise bounds \(\mathbf{l}^{(k)}\) and \(\mathbf{u}^{(k)}\) (computed during training) to propagate any input through the network. In Appendix C.3, we show that computing the bounds \(\mathbf{l}^{(k)}\) and \(\mathbf{u}^{(k)}\) during training adds between \(0.2\times\) and \(5\times\) overhead to the training, depending on the order \(n\) of the Bernstein activation function and the size of the network.

Figure 1: (Left) shows the structure of a DeepBern-Net with two hidden layers. DeepBern-Nets are similar to feed-forward NNs except that the activation function is a Bernstein polynomial. (Right) shows a simplified computational graph of a degree \(n\) Bernstein activation. The Bernstein basis is evaluated at the input \(x\) using \(l\) and \(u\) computed during training, and the output is then computed as a linear combination of the basis functions weighted by the learnable Bernstein coefficients \(c_{k}\).

**Stable training of DeepBern-Nets.** Using polynomials as activation functions in deep NNs has attracted several researchers' attention in recent years [26, 27]. A major drawback of using polynomials of arbitrary order is their unstable behavior during training due to exploding gradients, which becomes more prominent as the order increases [27]. In particular, for a general \(n\)th order polynomial in power series \(f_{n}(x)=w_{0}+w_{1}x+\ldots+w_{n}x^{n}\), its derivative is \(df_{n}(x)/dx=w_{1}+\ldots+nw_{n}x^{n-1}\). Hence, training a deep NN with multiple polynomial activation functions suffers from exploding gradients, as the gradient scales exponentially with the order \(n\) for \(x>1\). Luckily, and thanks to the unique properties of Bernstein polynomials, DeepBern-Nets do not suffer from this limitation, as captured in the next result, whose proof is given in Appendix A.1.

**Proposition 2.1**.: Consider the Bernstein activation function \(\sigma(x;l,u,\mathbf{c})\) of arbitrary order \(n\). The following holds:

1. \(\left|\frac{d}{dx}\sigma(x;l,u,\mathbf{c})\right|\leq 2n\max_{k\in\{0,\ldots,n\}}|c_{k}|\),
2. \(\left|\frac{d}{dc_{i}}\sigma(x;l,u,\mathbf{c})\right|\leq 1\) for all \(i\in\{0,\ldots,n\}\).
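As a quick numerical sanity check of the first bound in Proposition 2.1 (here on the unit interval \([0,1]\), matching the normalized input domain discussed above), one can compare finite-difference gradients against \(2n\max_{k}|c_{k}|\). The snippet below is our illustration, not the paper's proof:

```python
import numpy as np
from math import comb

def bern_act(x, c, l=0.0, u=1.0):
    # sigma(x; l, u, c) from equation (3)
    n = len(c) - 1
    return sum(c[k] * comb(n, k) * (x - l)**k * (u - x)**(n - k) / (u - l)**n
               for k in range(n + 1))

rng = np.random.default_rng(0)
c = rng.normal(size=7)                       # a random order n = 6 activation
n, h = len(c) - 1, 1e-6
xs = np.linspace(h, 1 - h, 1000)
grads = [(bern_act(x + h, c) - bern_act(x - h, c)) / (2 * h) for x in xs]
# |d sigma / dx| <= 2 n max_k |c_k| (Proposition 2.1, item 1)
assert max(abs(g) for g in grads) <= 2 * n * max(abs(ck) for ck in c)
```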
```
 1: Given: training batch (X, t) and input bounds [l(0), u(0)]
 2: Initialize all parameters
 3: Set the learning rate alpha
 4: # --- Forward propagation ---
 5: Set y(0) = X
 6: Set B(0) = [l(0), u(0)]
 7: for i = 1, ..., L do
 8:     if layer i is a Bernstein activation then
 9:         l(i), u(i) <- B(i-1)                  # store input bounds of the Bernstein layer
10:         for each neuron z in layer i do
11:             let c_z(i) be the Bernstein coefficients of neuron z in layer i
12:             B_z(i) <- [min_j c_zj(i), max_j c_zj(i)]
13:         end for
14:         B(i) <- [B_0(i), B_1(i), ..., B_m(i)]  # m is the number of neurons in layer i
15:     else
16:         B(i) <- IBP(B(i-1))                   # interval bound propagation for linear layers
17:     end if
18:     y(i) <- forward(y(i-1))                   # regular forward step
19: end for
20: # --- Backpropagation ---
21: Compute the loss function L(y(L), t)
22: Compute the gradients with respect to all model parameters (including Bernstein coefficients)
23: for each parameter theta do                   # weights, biases, and Bernstein coefficients c_k
24:     theta <- theta - alpha * grad_theta(L)
25: end for
```
**Algorithm 1** Training step of an L-layer DeepBern-Net \(\mathcal{NN}\)

Proposition 2.1 ensures that the gradients of the proposed Bernstein-based activation function depend only on the values of the learnable parameters \(\mathbf{c}=(c_{0},\ldots,c_{n})\). Hence, the gradients do not explode for \(x>1\). This feature is not enjoyed by the polynomial activation functions in [27], and it leads to more stable training when Bernstein polynomials are used as activation functions. Moreover, one can control these gradients by adding a regularizer to the objective function that penalizes high values of \(c_{k}\), as is common for other learnable parameters, i.e., the weights of the linear layers.
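The two bound-propagation rules used in Algorithm 1 (line 16 for affine layers, line 12 for Bernstein layers) are simple enough to sketch directly. The following is our NumPy paraphrase, with shapes and names chosen for illustration:

```python
import numpy as np

def linear_ibp(W, b, lo, hi):
    # Interval arithmetic for an affine layer y = Wx + b (Algorithm 1, line 16).
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def bernstein_enclosure(coeffs):
    # Property 1 (Algorithm 1, line 12): each neuron's output lies between its
    # smallest and largest Bernstein coefficients, computed in O(n) per neuron.
    # `coeffs` has one row of n+1 coefficients per neuron.
    return coeffs.min(axis=1), coeffs.max(axis=1)
```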
## 3 Bern-IBP: Certification using Bernstein Interval Bound Propagation

### 3.1 Certification of global properties using Bern-IBP

We consider the certification of global properties of NNs. Global properties must hold over the entire input domain \(\mathcal{D}\) of the network. For simplicity of presentation, we will assume that the global property we want to prove takes the following form:

\[\forall\mathbf{y^{(0)}}\in\mathcal{D}\Longrightarrow y^{(L)}=\mathcal{NN}(\mathbf{y}^{(0)})>0 \tag{5}\]

where \(y^{(L)}\) is a scalar output and \(\mathcal{NN}\) is the NN of interest. Examples of such global properties include the stability of NN-controlled systems [28] as well as global individual fairness [8]. In this paper, we focus on the incomplete certification of such properties. In particular, we certify properties of the form (5) by checking the lower/upper bounds of the NN. To that end, we define the lower \(\mathcal{L}\) and upper \(\mathcal{U}\) bounds of the NN within the domain \(\mathcal{D}\) as any real numbers that satisfy:

\[\mathcal{L}\left(\mathcal{NN}(\mathbf{y}^{(0)}),\mathcal{D}\right)\leq \min_{\mathbf{y}^{(0)}\in\mathcal{D}}\mathcal{NN}(\mathbf{y}^{(0)}),\qquad \mathcal{U}\left(\mathcal{NN}(\mathbf{y}^{(0)}),\mathcal{D}\right)\geq \max_{\mathbf{y}^{(0)}\in\mathcal{D}}\mathcal{NN}(\mathbf{y}^{(0)}) \tag{6}\]

Incomplete certification of (5) is equivalent to checking whether \(\mathcal{L}\left(\mathcal{NN}(\mathbf{y}^{(0)}),\mathcal{D}\right)>0\). Thanks to the Enclosure of Range property (Property 1) of DeepBern-Nets, one can check the condition \(\mathcal{L}\left(\mathcal{NN}(\mathbf{y}^{(0)}),\mathcal{D}\right)>0\) in constant time, i.e., \(\mathcal{O}(1)\), by simply checking the minimum Bernstein coefficient of the output layer.

### 3.2 Certification of local properties using Bern-IBP

Local properties of NNs are those that need to hold only for subsets \(S\) of the input domain \(\mathcal{D}\), i.e.,

\[\forall\mathbf{y^{(0)}}\in S\subset\mathcal{D}\Longrightarrow y^{(L)}=\mathcal{NN}(\mathbf{y}^{(0)})>0 \tag{7}\]

Examples of local properties include adversarial robustness and the safety of NN-controlled vehicles [29; 30; 31]. Similar to global properties, we are interested in incomplete certification by checking whether \(\mathcal{L}\left(\mathcal{NN}(\mathbf{y}^{(0)}),S\right)>0\). The output bounds stored in the Bernstein activation functions are the worst-case bounds for the entire input domain \(\mathcal{D}\). However, to certify local properties over \(S\subset\mathcal{D}\), we need to refine these output bounds on the given sub-region \(S\). To that end, for a Bernstein activation layer \(k\) with input bounds [\(\mathbf{l}^{(k)}\), \(\mathbf{u}^{(k)}\)] (computed and stored during training), we can obtain tighter output bounds thanks to the following subdivision property of Bernstein polynomials.

**Property 2** (Subdivision [24]).: Given a Bernstein polynomial \(P_{n}^{[l,u]}(x)\) of degree \(n\) on the interval \([l,u]\), the coefficients of the same polynomial on the subintervals \([l,\alpha]\) and \([\alpha,u]\) with \(\alpha\in[l,u]\) can be computed as follows. First, compute the intermediate coefficients \(c_{j}^{k}\) for \(k=0,\ldots,n\) and \(j=k,\ldots,n\):

\[c_{j}^{k}=\left\{\begin{array}{ll}c_{j}&\text{if }k=0\\ (1-\tau)c_{j-1}^{k-1}+\tau c_{j}^{k-1}&\text{if }k>0\end{array}\right.,\qquad c_{i}^{\prime}=c_{i}^{i},\quad c_{i}^{\prime\prime}=c_{n}^{n-i},\quad i=0,\ldots,n,\]

where \(\tau=\frac{\alpha-l}{u-l}\). Next, the polynomials defined on each of the subintervals \([l,\alpha]\) and \([\alpha,u]\) are:

\[P_{n}^{[l,\alpha]}(x)=\sum_{k=0}^{n}c_{k}^{\prime}b_{n,k}^{[l,\alpha]}(x),\qquad P_{n}^{[\alpha,u]}(x)=\sum_{k=0}^{n}c_{k}^{\prime\prime}b_{n,k}^{[\alpha,u]}(x).\]

Indeed, we can apply the subdivision property twice to compute the coefficients of the polynomial \(P_{n}^{[\alpha,\beta]}\) on an arbitrary subinterval \([\alpha,\beta]\subseteq[l,u]\). Computing the coefficients on the subintervals allows us to tightly bound the polynomial using Property 1.
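Property 2 is de Casteljau's subdivision scheme; a compact sketch (ours, using the paper's notation with \(\tau=(\alpha-l)/(u-l)\)) is:

```python
def subdivide(c, tau):
    # Split a Bernstein polynomial on [l, u] at alpha, tau = (alpha - l)/(u - l).
    # Returns the coefficients c' on [l, alpha] and c'' on [alpha, u].
    left, right = [c[0]], [c[-1]]
    row = list(c)
    while len(row) > 1:
        row = [(1 - tau) * a + tau * b for a, b in zip(row, row[1:])]
        left.append(row[0])
        right.append(row[-1])
    return left, right[::-1]
```

Applying `subdivide` twice and then taking the minimum and maximum of the resulting coefficients (Property 1) yields the refined local bounds described next.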
Therefore, given a DeepBern-Net trained on \(\mathcal{D}=[\boldsymbol{l}^{(0)},\,\boldsymbol{u}^{(0)}]\), we can compute tighter bounds on a subregion \(S=[\boldsymbol{\hat{l}}^{(0)},\,\boldsymbol{\hat{u}}^{(0)}]\) by applying the subdivision property (Property 2) to compute the Bernstein coefficients on the sub-region \(S\), and then using the enclosure property (Property 1) to compute tight bounds on the output of the activation, equal to the minimum and maximum of the computed Bernstein coefficients. We do this on a layer-by-layer basis until we reach the output of the NN. Implementation details of this approach are given in Appendix B.

## 4 Experiments

**Implementation:** Our framework has been developed in Python and is designed to facilitate the training of DeepBern-Nets, the certification of local properties such as adversarial robustness, and certified training. We use PyTorch [32] for all neural network training tasks. To conduct our experiments, we utilized a single GeForce RTX 2080 Ti GPU in conjunction with a 24-core Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz. Only 8 cores were utilized for our experiments.

### 4.1 Experiment 1: Certification of Adversarial Robustness

The first experiment assesses the ability to compute tight bounds on the NN output and its implications for certifying NN properties. To that end, we use the application of adversarial robustness, where we aim to certify that a NN model is not susceptible to adversarial examples within a defined perturbation set. The results in [33; 34] show that state-of-the-art IBP algorithms fail to certify the robustness of NNs trained with Projected Gradient Descent (PGD), even when those networks are in fact robust, due to excessive errors in the computed bounds, which forces designers to use computationally expensive sound and complete algorithms. Thanks to the properties of DeepBern-Nets, the bounds computed by Bern-IBP are tight enough to certify the robustness of NNs without using computationally expensive sound and complete tools. To that end, we trained several NNs on the MNIST [35] and CIFAR-10 [36] datasets using PGD. We trained both Fully Connected Neural Networks (FCNNs) and Convolutional Neural Networks (CNNs) on these datasets with Bernstein polynomials of orders \(2,3,4,5\), and \(6\). For detailed information regarding the model architectures, please refer to Appendix C.2. Further information about the training procedure can be found in Appendix C.1.

#### 4.1.1 Formalizing adversarial robustness as a local property

Given a NN model \(\mathcal{NN}:[0,1]^{d}\rightarrow\mathbb{R}^{o}\), a concrete input \(\boldsymbol{x_{n}}\), a target class \(t\), and a perturbation parameter \(\epsilon\), the adversarial robustness problem asks that the NN output the target class \(t\) for all the inputs in the set \(\{\boldsymbol{x}\mid\|\boldsymbol{x}-\boldsymbol{x_{n}}\|_{\infty}\leq\epsilon\}\). In other words, a NN is robust whenever:

\[\forall\boldsymbol{x}\in S(\boldsymbol{x_{n}},\epsilon)=\{\boldsymbol{x}\mid\|\boldsymbol{x}-\boldsymbol{x_{n}}\|_{\infty}\leq\epsilon\}\quad\implies\quad\mathcal{NN}(\boldsymbol{x})_{t}>\mathcal{NN}(\boldsymbol{x})_{i},\ i\neq t\]

where \(\mathcal{NN}(\boldsymbol{x})_{t}\) is the NN output for the target class and \(\mathcal{NN}(\boldsymbol{x})_{i}\) is the NN output for any class \(i\) other than \(t\).
To certify the robustness of a NN, one can compute a lower bound on the adversarial robustness \(\mathbb{L}_{\text{robust}}\) over all classes \(i\neq t\) as:

\[\mathbb{L}_{\text{robust}}(\boldsymbol{x_{n}},\epsilon) =\min_{i\neq t}\left(\mathcal{L}\big{(}\mathcal{NN}(\boldsymbol{x})_{t},S(\boldsymbol{x_{n}},\epsilon)\big{)}-\mathcal{U}\big{(}\mathcal{NN}(\boldsymbol{x})_{i},S(\boldsymbol{x_{n}},\epsilon)\big{)}\right) \tag{8}\]
\[\leq\min_{i\neq t}\left(\min_{\boldsymbol{x}\in S(\boldsymbol{x_{n}},\epsilon)}\mathcal{NN}(\boldsymbol{x})_{t}-\mathcal{NN}(\boldsymbol{x})_{i}\right) \tag{9}\]

Indeed, the NN is robust whenever \(\mathbb{L}_{\text{robust}}>0\). Nevertheless, the tightness of the bounds \(\mathcal{L}(\mathcal{NN}(\boldsymbol{x})_{t},S(\boldsymbol{x_{n}},\epsilon))\) and \(\mathcal{U}(\mathcal{NN}(\boldsymbol{x})_{i},S(\boldsymbol{x_{n}},\epsilon))\) plays a significant role in the ability to certify the NN's robustness: the tighter these bounds, the higher the ability to certify robustness.

#### 4.1.2 Experiment 1.1: Tightness of output bounds - Bern-IBP vs IBP

For each trained neural network, we compute the lower bound on robustness \(\mathbb{L}_{\text{robust}}(\boldsymbol{x_{n}},\epsilon)\) using Bern-IBP and using state-of-the-art Interval Bound Propagation (IBP) that does not take into account the properties of DeepBern-Nets. In particular, for this experiment, we used auto_LiRPA [11], a tool that is part of \(\alpha\beta\)-CROWN [11], the winner of the 2022 Verification of Neural Networks (VNN) competition [34]. Figure 2 shows the difference between the bound \(\mathbb{L}_{\text{robust}}(\mathbf{x_{n}},\epsilon)\) computed by Bern-IBP and the one computed by IBP on a semi-log scale. The raw data for the adversarial robustness bound \(\mathbb{L}_{\text{robust}}(\mathbf{x_{n}},\epsilon)\) for both Bern-IBP and IBP is given in Appendix C.4. The results presented in Figure 2 clearly demonstrate that Bern-IBP yields significantly tighter bounds in comparison to IBP. Figure 2 also shows that for all values of \(\epsilon\), the bounds computed using IBP become exponentially looser as the order of the Bernstein activations increases, unlike the bounds computed with Bern-IBP, which remain precise even for higher-order Bernstein activations or larger values of \(\epsilon\). The raw data in Appendix C.4 provide a clearer view of the superiority of computing \(\mathbb{L}_{\text{robust}}(\mathbf{x_{n}},\epsilon)\) using Bern-IBP compared to IBP.

#### 4.1.3 Experiment 1.2: Certification of Adversarial Robustness using Bern-IBP

Next, we show that the superior precision of the bounds calculated using Bern-IBP leads to efficient certification of adversarial robustness. Here, we define the certified accuracy of the NN as the percentage of data points (in the test dataset) for which an adversarial input cannot change the class (the output of the NN). Table 1 contrasts the certified accuracy for the adversarially-trained (using 100-step PGD) DeepBern-Nets of orders 2, 4, and 6, using both the IBP and Bern-IBP methods and varying values of \(\epsilon\). As the table shows, IBP fails to certify the robustness of any of the NNs. On the other hand, Bern-IBP achieves high certified accuracy for all the NNs across the considered values of \(\epsilon\). Finally, we use the methodology reported in [11] to upper bound the certified accuracy using a 100-step PGD attack.
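Given per-class lower/upper output bounds over \(S(\boldsymbol{x_{n}},\epsilon)\) (computed, e.g., with Bern-IBP), equation (8) reduces to a one-line check. The helper below is our illustration, not the paper's implementation:

```python
def robustness_lower_bound(lower, upper, t):
    # Equation (8): min over i != t of L(NN(x)_t) - U(NN(x)_i), where
    # `lower` and `upper` are per-class output bounds over S(x_n, eps).
    return min(lower[t] - upper[i] for i in range(len(lower)) if i != t)

# The network is certifiably robust at x_n whenever the bound is positive:
# certified = robustness_lower_bound(lo, hi, target_class) > 0
```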
It is essential to mention that IBP's inability to certify the robustness of NNs is not unique to DeepBern-Nets. In particular, as shown in [33, 34], most certifiers struggle to certify the robustness of ReLU NNs when trained with PGD. This suggests the power of DeepBern-Nets, which can be efficiently certified--in a few seconds, even for NNs with millions of parameters, as shown in Table 1--using incomplete certifiers, thanks to the ability of Bern-IBP to compute tight bounds.

\begin{table}
\begin{tabular}{c c c c|c c|c c|c}
\hline \hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multirow{2}{*}{Test acc. (\%)} & \multirow{2}{*}{\(\epsilon\)} & \multicolumn{2}{c|}{IBP} & \multicolumn{2}{c|}{Bern-IBP} & U.B. (PGD) \\
 & & & & Time (s) & Certified Acc. (\%) & Time (s) & Certified Acc. (\%) & Acc. (\%) \\
\hline
\multirow{6}{*}{MNIST} & CNNa\_4 & \multirow{3}{*}{97.229} & 0.01 & 3.45 & 0 & 1.43 & 88.69 & 95.97 \\
 & (190,426) & & 0.03 & 3.41 & 0 & 1.42 & 72.12 & 92.53 \\
 & & & 0.1 & 3.26 & 0 & 1.39 & 65.22 & 75.27 \\
 & CNNb\_2 & \multirow{3}{*}{97.14} & 0.01 & 4.38 & 0 & 2.07 & 80.21 & 95.42 \\
 & (905,882) & & 0.03 & 4.58 & 0 & 2.11 & 56.49 & 90.57 \\
 & & & 0.1 & 4.61 & 0 & 1.97 & 72.35 & 78.6 \\
\hline
\multirow{4}{*}{CIFAR-10} & CNNa\_6 & \multirow{2}{*}{46.77} & 1/255 & 3.29 & 0 & 1.82 & 27.74 & 33.53 \\
 & (258,626) & & 2/255 & 3.25 & 0 & 1.83 & 33.49 & 35.81 \\
\cline{2-9} & CNNb\_4 & & 1/255 & 5.17 & 0 & 4.45 & 28.55 & 42.86 \\
 & (1,235,994) & & 2/255 & 5.14 & 0 & 4.33 & 14.7 & 36.73 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: A comparison of certified accuracy and verification time for neural networks with Bernstein polynomial activations using both the IBP and Bern-IBP methods and varying values of \(\epsilon\). The table also presents the upper bound on certified accuracy calculated using a 100-step PGD attack. The results highlight the superior performance of Bern-IBP in certifying robustness properties compared to IBP.

Figure 2: A visual representation of the tightness of bounds computed using Bern-IBP compared to IBP. The figure shows the \(\log\) difference between \(\mathbb{L}_{\text{robust}}\) computed using Bern-IBP and IBP for NNs with varying orders and different values of \(\epsilon\). The figure demonstrates the enhanced precision and scalability of the Bern-IBP method in computing tighter bounds, even for higher-order Bernstein activations and larger values of \(\epsilon\), as compared to the naive IBP method.

### 4.2 Experiment 2: Certified training using Bern-IBP

In this experiment, we demonstrate that the tight bounds calculated by Bern-IBP can be utilized for certified training, achieving state-of-the-art results. Although a direct comparison with methods from the certified training literature is not feasible due to the use of Bernstein polynomial activations instead of ReLU activations, we provide a comparison with state-of-the-art certified accuracy results from the SOK benchmark [33] to study how effectively Bern-IBP can be utilized for certified training. We trained neural networks with the same architectures as those in the benchmark to maintain a similar number of parameters, with the polynomial order serving as an additional hyperparameter.
The training objective adheres to the certified training literature [37], incorporating the bound on the robustness loss into the objective as follows:

\[\min_{\theta}\mathop{\mathbb{E}}_{(\mathbf{x},y)\in(X,Y)}\bigg{[}(1-\lambda)\mathcal{L}_{\text{CE}}(\mathcal{NN}_{\theta}(\mathbf{x}),y;\theta)+\lambda\mathcal{L}_{\text{RCE}}(S(\mathbf{x},\epsilon),y;\theta)\bigg{]}, \tag{10}\]

where \(\mathbf{x}\) is a data point, \(y\) is the ground-truth label, \(\lambda\in[0,1]\) is a weight that controls the certified training regularization, \(\mathcal{L}_{\text{CE}}\) is the cross-entropy loss, \(\theta\) denotes the NN parameters, and \(\mathcal{L}_{\text{RCE}}\) is computed by evaluating \(\mathcal{L}_{\text{CE}}\) on the upper bound of the logit differences obtained with a bounding method [37]. For DeepBern-Nets, \(\mathcal{L}_{\text{RCE}}\) is computed using Bern-IBP during training, while the networks in the SOK benchmark are trained using CROWN-IBP [37].
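A schematic PyTorch version of objective (10) is shown below. The construction of the pseudo-logits `logit_diff_upper` from the Bern-IBP bounds follows [37]; the helper itself is our simplification rather than the paper's code:

```python
import torch.nn.functional as F

def certified_training_loss(clean_logits, logit_diff_upper, y, lam):
    # Equation (10): a convex mix of the clean cross-entropy loss and the
    # robust cross-entropy loss evaluated on the bound of the logit
    # differences over S(x, eps).
    loss_ce = F.cross_entropy(clean_logits, y)
    loss_rce = F.cross_entropy(logit_diff_upper, y)
    return (1.0 - lam) * loss_ce + lam * loss_rce
```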
Table 2 illustrates that employing Bern-IBP bounds for certified training yields state-of-the-art certified accuracy (certified with Bern-IBP) on these datasets, comparable to--or in many cases surpassing--the performance of ReLU networks. The primary advantage of using Bern-IBP lies in its ability to compute highly precise bounds with a computationally cheap method, unlike the more sophisticated bounding methods for ReLU networks, such as \(\alpha\)-CROWN. For more details about the exact architectures of the NNs, please refer to Appendix C.2.

### 4.3 Experiment 3: Tight reachability analysis of NN-controlled Quadrotor using Bern-IBP

In this experiment, we study the application-level impact of using Bernstein polynomial activations in comparison to ReLU activations with respect to the tightness of reachable sets in the context of safety-critical applications. Specifically, we consider a 6D linear dynamical system \(\dot{x}=Ax+Bu\) representing a Quadrotor (used in [38; 39; 40]), controlled by a nonlinear NN controller with \(u=\mathcal{NN}(x)\). To ensure a fair comparison, both sets of networks are trained on the same datasets, using the same architectures and training procedures. The only difference between the two sets of networks is the activation function used (ReLU vs. Bernstein polynomial). After training, we perform reachability analysis with horizon \(T=6\) on each network using the respective bounding methods: CROWN and \(\alpha\)-CROWN for the ReLU networks and the proposed Bern-IBP for the Bernstein polynomial networks.

\begin{table}
\begin{tabular}{c|c c|c c|c c|c c}
\hline \hline
\multirow{3}{*}{Model} & \multicolumn{4}{c|}{MNIST Certified acc. (\%)} & \multicolumn{4}{c}{CIFAR-10 Certified acc. (\%)} \\
 & \multicolumn{2}{c|}{\(\epsilon=0.1\)} & \multicolumn{2}{c|}{\(\epsilon=0.3\)} & \multicolumn{2}{c|}{\(\epsilon=2/255\)} & \multicolumn{2}{c}{\(\epsilon=8/255\)} \\
 & DeepBern-Net & SOK & DeepBern-Net & SOK & DeepBern-Net & SOK & DeepBern-Net & SOK \\
\hline
FCNNa & **72** & 68 & **31** & 25 & **38** & 33 & **28** & 27 \\
FCNNb & **86** & 85 & **57** & 54 & **39** & 37 & **26** & 25 \\
FCNNc & **80** & 80 & **51** & 22 & **36** & 32 & **31** & 30 \\
CNNa & **95** & 95 & 82 & **88** & 45 & **46** & 31 & **34** \\
CNNb & **95** & 94 & 77 & **85** & **49** & 49 & **37** & 35 \\
CNNc & 87 & **89** & 72 & **87** & 38 & **51** & 32 & **38** \\
\hline \hline
\end{tabular}
\end{table}

Table 2: A comparison of certified accuracy for NNs with Bernstein polynomial activations versus ReLU NNs as in the SOK benchmark [33]. The certified accuracy is computed using Bern-IBP for NNs with polynomial activations, and using the method yielding the highest certified accuracy as reported in SOK for ReLU NNs. The table highlights the effectiveness of Bern-IBP in achieving competitive certification while utilizing a very computationally cheap method for tight bound computation.

We compute the volume of the reachable sets after each step for each network. The results are visualized in Figure 3, comparing the error in the volume of the reachable sets for the ReLU and Bernstein polynomial networks. The error is computed with respect to the true volume of the reachable set for each network, which is estimated by heavy sampling. As shown in Figure 3, using Bern-IBP on the NN with Bernstein polynomial activations can lead to much tighter reachable sets compared to SOTA bounding methods for ReLU networks. This experiment provides insights into the potential benefits of using Bernstein polynomial activations for improving the tightness of reachability bounds, which can have significant implications for neural network certification in safety-critical systems.

## 5 Related work

**Neural network verification.** NN verification is an active field of research that focuses on developing techniques to verify the correctness and robustness of neural networks. Various methods have been proposed for NN verification to provide rigorous guarantees on the behavior of NNs and detect potential vulnerabilities such as adversarial examples and unfairness. These methods use techniques such as abstract interpretation [13], Satisfiability Modulo Theory (SMT) [41], Reachability Analysis [14; 42], and Mixed-Integer Linear Programming (MILP) [43; 44; 45; 46]. Many tools also rely on optimization and linear relaxation techniques [11; 12; 15] to speed up the verification. Another line of work [47; 48] uses higher-order relaxations, such as Bernstein polynomials, to certify NNs. However, frameworks for NN verification often result in loose bounds during the relaxation process or are computationally expensive, particularly for large-scale networks.

**Polynomial activations.** NNs with polynomial activations have been studied in [27]. Theoretical work has established their expressiveness [49], and their universal approximation property has been shown to hold under certain conditions [50]. However, to the best of our knowledge, using Bernstein polynomials in deep NNs, and their impact on NN certification, has not been explored yet.

**Polynomial neural networks.** A recent work [51] proposed a new class of approximators called \(\Pi\)-nets, which is based on polynomial expansion. Empirical evidence has shown that \(\Pi\)-nets are highly expressive and capable of producing state-of-the-art results in a variety of tasks, including image, graph, and audio processing, even without the use of non-linear activation functions. When combined with activation functions, they have been demonstrated to achieve state-of-the-art performance in challenging tasks such as image generation, face verification, and 3D mesh representation learning. A framework for certifying such networks using \(\alpha\)-convexification was introduced in [52].

## 6 Discussion and limitations

**Societal impact.** The societal impact of utilizing Bernstein polynomial activations in neural networks lies in their potential to enhance the reliability and interpretability of AI systems, enabling improved safety, fairness, and transparency in various real-world applications.
Figure 3: (Left) The trajectory of the Quadrotor for the ReLU and Bernstein polynomial networks. (Right) The error in the reachable-set volume \(e=(\hat{V}-V)/V\) for each of the networks after each step, where \(\hat{V}\) is the estimated volume using the respective bounding method and \(V\) is the true volume of the reachable set computed using heavy sampling.

**Limitations.** While Bernstein polynomials offer advantages in the context of certification, they also pose some limitations. One limitation is the increased computational complexity during training compared to ReLU networks.
2306.06327
Any-dimensional equivariant neural networks
Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs with a fixed dimension. The fitted function is then defined on inputs of the same dimension. However, in many settings, the unknown mapping takes inputs in any dimension; examples include graph parameters defined on graphs of any size and physics quantities defined on an arbitrary number of particles. We leverage a newly-discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is user-friendly, requiring only the network architecture and the groups for equivariance, and can be combined with any training procedure. We provide a simple open-source implementation of our methods and offer preliminary numerical experiments.
Eitan Levin, Mateo Díaz
2023-06-10T00:55:38Z
http://arxiv.org/abs/2306.06327v2
# Any-dimensional equivariant neural networks ###### Abstract Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs with a _fixed dimension_. The fitted function is then defined on inputs of the same dimension. However, in many settings, the unknown mapping takes inputs in _any dimension_; examples include graph parameters defined on graphs of any size and physics quantities defined on an arbitrary number of particles. We leverage a newly-discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is user-friendly, requiring only the network architecture and the groups for equivariance, and can be combined with any training procedure. We provide a simple open-source implementation of our methods and offer preliminary numerical experiments. ## 1 Introduction Researchers are often interested in learning mappings that are defined for inputs of different sizes. We call such objects "_free_" as they are dimension-agnostic. Free objects are pervasive in a range of scientific and engineering fields. For example, in physics, quantities like electromagnetic fields are defined for any number of particles. In signal processing, problems such as deconvolution are well-defined for signals of any length, while image and text classification tasks are meaningful regardless of the input size. In mathematics, norms of vectors and matrices are defined for any dimension; sorting algorithms handle arrays of any length, and graph parameters, such as the max-cut value, are defined for graphs of any size. In contrast, most existing supervised learning methods aim to learn mappings by fitting a parametrized function \(\widehat{f}\colon\mathbb{R}^{d}\to\mathbb{R}^{k}\) to a set of input-output pairs \(\mathcal{S}=\{(X_{1},y_{1}),\ldots,(X_{n},y_{n})\}\subseteq\mathbb{R}^{d} \times\mathbb{R}^{k}\) with fixed dimensions \(d\) and \(k\). Naturally, the function \(\widehat{f}\) is only defined on inputs of dimension \(d\), and _a priori_ cannot be applied to inputs in other dimensions. Practitioners use ad-hoc heuristics, such as downsampling, to resize the dimension of new inputs and match that of the learned map [15]. The lack of systematic methodology for applying learned functions to inputs of different sizes motivates the main question of this work: _How can we learn mappings that accept inputs in any dimension?_ We answer this question for mappings that are invariant or equivariant under the action of groups. Such mappings are ubiquitous since many application domains exhibit symmetry, e.g., physical quantities are equivariant under translations and rotations because they only depend on the relative position of particles [36], while graph parameters are equivariant under permutations because they only depend on the underlying topology rather than on its labelling [3]. We propose to learn such mappings using free equivariant neural networks. Formally, we consider a sequence of neural networks and a sequence of groups, such that a network at any level in the sequence is equivariant with respect to the group at the same level; see Figure 1. We train an equivariant network with data \(\mathcal{S}\) in a fixed dimension and leverage symmetry to extend it to other dimensions. **Core contributions.** The question posed above presents three key challenges. 
First, how do we encode infinite sequences of neural networks, one in each dimension, using only finitely-many parameters? Doing so is necessary if we want to learn from a finite amount of data. Second, how do we ensure that the networks we learn from data in a fixed dimension generalize well to higher dimensions? Third, is there a user-friendly procedure for learning these networks? We proceed to tackle these challenges.

_Free equivariant neural networks._ Equivariance is the key ingredient in tackling the first challenge. The authors of [25] showed that the dimension of permutation-equivariant linear layers between tensors is independent of the size of the tensors, and gave free bases for such linear layers. For example, the space of permutation-equivariant linear layers for graph adjacency matrices is 15-dimensional for any \(n\), and its basis contains the identity map, the transpose map, and diagonal extraction, among others. The authors of that paper use this fact to derive finite parameterizations of linear layers

\[W=\alpha_{1}A_{1}^{(n)}+\cdots+\alpha_{15}A_{15}^{(n)} \tag{1}\]

where \(\left\{A_{1}^{(n)},\ldots,A_{15}^{(n)}\right\}\) is a basis of free equivariant maps for graphs on \(n\) nodes. This parameterization allows them to instantiate linear layers in any dimension using only the fifteen parameters \(\alpha_{i}\), which can be learned from data in a fixed dimension. Our **first contribution** is showing that this is not a coincidence but rather stems from a general, recently-identified phenomenon known as _representation stability_. We use this phenomenon to show that the dimension of equivariant linear layers stabilizes for a number of group actions, including those induced by (signed) permutations, the orthogonal groups \(\mathrm{O}(n)\), rotations \(\mathrm{SO}(n)\), and the Lorentz groups \(\mathrm{O}(1,n)\). We then leverage this observation to derive finite parametrizations of infinite sequences of equivariant neural networks.

_Generalization across dimensions._ The authors of [25] observed that neural networks obtained using (1) do not always generalize well to dimensions different from the one used for training. Intuitively, this is due to overparametrization: there are many ways to represent the same function in a fixed dimension, but not all of them extend correctly to other dimensions. Our **second contribution** is to introduce a compatibility condition relating the mappings in different dimensions, which has a regularizing effect that often leads to better generalization. Formally, it amounts to the commutativity of the diagrams in Figure 1. We further explain how to impose this condition on the network architecture.

_Computational recipe._ The authors of [25] _manually_ found the free basis elements in (1). However, for more complicated groups and spaces of linear layers, manually finding a free basis becomes prohibitive. In a fixed dimension, one can circumvent this issue using an algorithm proposed in [11] to compute a basis. For our **third contribution**, we extend this basis to any other dimension by solving a sparse linear system, enabling us to obtain free bases computationally.
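As a toy version of this kind of parameterization, the two-dimensional space of \(\mathrm{S}_{n}\)-equivariant linear maps \(\mathbb{R}^{n}\to\mathbb{R}^{n}\) (spanned by \(I_{n}\) and \(\mathbb{1}_{n}\mathbb{1}_{n}^{\top}\); see Section 2) can be instantiated at any \(n\) from two learned coefficients. The snippet below is our illustration, not part of the paper's code:

```python
import numpy as np

def equivariant_layer(alpha, n):
    # Free parameterization in the spirit of equation (1): an S_n-equivariant
    # map R^n -> R^n is alpha_0 * I_n + alpha_1 * 11^T, for any dimension n.
    return alpha[0] * np.eye(n) + alpha[1] * np.ones((n, n))

alpha = [0.7, -0.1]                         # learned once, in a fixed dimension
W5, W8 = equivariant_layer(alpha, 5), equivariant_layer(alpha, 8)
```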
**Notation.** Given two finite-dimensional vector spaces \(U\) and \(V\), we use \(\mathcal{L}(V,U)\) to denote the set of linear mappings between them. For a group \(G\) acting on both \(U\) and \(V\), we denote invariant elements as \(U^{G}=\left\{u\in U\mid g\cdot u=u\right\}\), and equivariant linear maps as \(\mathcal{L}(V,U)^{G}=\left\{A\in\mathcal{L}(V,U)\mid gAg^{-1}=A\right\}\). The map \(\mathcal{P}_{W}\colon U\to U\) denotes the projection onto a subspace \(W\subseteq U\). The symbols \(\mathbb{1}_{n}\) and \(I_{n}\) represent the all-ones vector and the identity in dimension \(n\), respectively. The symbol \(\mathbb{R}[G]\) denotes the set of finite linear combinations of elements in \(G\). To avoid cluttering notation, we use bold font symbols whenever possible to denote the \(n\)th element in a sequence, e.g., the symbol \(\mathbf{G}\) denotes \(G^{(n)}\). We add a "+" superscript to denote the \((n+1)\)th element; for instance, \(G^{(n+1)}\) is denoted by \(\mathbf{G}^{+}\).

**Outline.** The remainder of this section focuses on related work. Section 2 formally defines free neural networks. Section 3 introduces a compatibility condition that ensures good generalization across different dimensions. In Section 4, we describe our computational recipe to learn free neural networks from data in a fixed dimension and extend them to other dimensions. Section 5 provides numerical experiments. We close the paper in Section 6 with conclusions, limitations, and future work. We defer several technical details to the appendix.

### Related work

**Equivariant learning.** The benefits of equivariant architectures first became apparent with the success of convolutional neural networks (CNNs) in computer vision [22]. Since then, equivariance has been applied to a range of applications. Examples include DeepSets [41] and graph neural networks [25, 40], which use permutation equivariance to process sets and graphs; AlphaFold 2 [16] and ARES [33], which use SE(3)-equivariance for protein and RNA structure prediction and assessment; steerable [8] and spherical [20] CNNs, which use rotation-equivariance to classify images; and physics-informed neural networks, which are equivariant under the symmetries of the corresponding physical systems [17]. Many of these architectures have been shown to implement a generalized notion of convolution over the groups at hand [7, 19]. Under additional assumptions, equivariant architectures have been derived using invariant theory to explicitly parametrize polynomial equivariant maps [36]. We refer the reader to [2, 24] for an introduction to equivariant deep learning.

**Dimension-free learning.** Certain architectures processing data of a particular type are defined for inputs of different sizes. Many architectures processing graph-based data update the features at each vertex by applying the same function to the features of the vertex's neighbors, and hence can be applied to graphs of any size [13, 35, 40]. Convolutional neural networks (CNNs) processing signals and images convolve their inputs with filters of constant size, and hence can also be applied to inputs of different sizes. Nevertheless, it has been observed that naively applying CNNs to inputs of a size different from that used during training leads to artifacts, and several downsampling techniques have been proposed to resize inputs to a CNN [15]. Networks processing natural language embed words as vectors of the same length but are defined for arbitrarily long sequences of such vectors. Recurrent neural networks process such sequences one-by-one, using cycles in their architectures to sequentially combine a new input vector with a function of the previous inputs [26].
Recursive neural networks apply the same weights recursively to pairs of input vectors to combine them into one, until the entire sequence is reduced to a single vector [32]. Notably, all these architectures must process their input in particular ways motivated by the specific application at hand, whereas we only need to assume that the network processes its input equivariantly.

**Representation stability.** Representation stability considers nested sequences of groups and their representations, and implies that such sequences of representations often stabilize. Specifically, there is a labelling of the irreducibles of these groups such that the decompositions of the representations in the sequence contain the same irreducibles with the same multiplicities. This phenomenon was formalized in [5] and further studied in [6, 12, 28, 29, 31, 39]. It has been applied to study free convex sets and algebraic varieties [1, 4, 9, 23, 34], though to our knowledge, this paper is the first to apply it to equivariant deep learning. We refer the reader to [10, 30, 38] for introductions to this area.

## 2 Free neural networks

In this section, we introduce the concept of free neural networks, i.e., networks that can be instantiated in every dimension. To set the stage, let us recall the classical notion of a neural network (NN). A NN is a mapping \(f=f_{L}\circ\ldots\circ f_{1}\) where \(f_{i}\colon V_{i}\to V_{i+1}\) is a composition \(f_{i}(x)=\sigma_{i}(W_{i}x+b_{i})\) of an affine map \(x\mapsto W_{i}x+b_{i}\) and an activation map \(\sigma_{i}\colon U_{i}\to V_{i+1}\). This yields a family of mappings parametrized by the weights \(W_{i}\in\mathcal{L}(V_{i},U_{i})\) and biases \(b_{i}\in U_{i}\). In turn, these parameters are chosen to minimize a prescribed loss function.

In several settings, we want \(f\) to respect symmetries in its inputs. Formally, we require \(f\) to be equivariant with respect to the action of a group \(G\), i.e., \(f\circ g=g\circ f\) for all \(g\in G\). A natural way to obtain equivariance is by ensuring that each building block of \(f\) is equivariant, i.e., \(W_{i}\in\mathcal{L}(V_{i},U_{i})^{G}\), \(b_{i}\in U_{i}^{G}\), and \(\sigma_{i}\) is \(G\)-equivariant. NNs satisfying these properties are called _equivariant_. Equivariant NNs can be trained by computing a basis for the space of weights and biases and optimizing over the coefficients in this basis [11].

In this work, we seek equivariant NNs that extend to inputs in _any dimension_. To this end, we consider sequences \(\{\mathbf{U}_{i}\}\) and \(\{\mathbf{V}_{i}\}\) of nested\({}^{1}\) vector spaces \(\mathbf{U}_{i}\subseteq\mathbf{U}_{i}^{+}\) and \(\mathbf{V}_{i}\subseteq\mathbf{V}_{i}^{+}\), with actions of a nested sequence of groups \(\{\mathbf{G}\}\); see Figure 1. For example, we have inclusions \(\mathbb{R}^{n}\subseteq\mathbb{R}^{n+1}\) by zero-padding, on which the group of permutations on \(n\) letters \(\mathbf{G}=\mathrm{S}_{n}\) acts by permuting coordinates. These define a sequence of NNs whose weights \(\mathbf{W}_{i}\colon\mathbf{V}_{i}\to\mathbf{U}_{i}\) and biases \(\mathbf{b}_{i}\in\mathbf{U}_{i}\) increase in size. The activation functions \(\mathbf{\sigma}\) are often defined for every dimension, as the next section shows. Thus, we focus here on extending the linear layers.

Footnote 1: Formally, we have embeddings \(\mathbf{V}\hookrightarrow\mathbf{V}^{+}\) and \(\mathbf{U}\hookrightarrow\mathbf{U}^{+}\).
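The zero-padding example admits a direct numerical check that the embedding intertwines the group actions, extending \(g\in\mathrm{S}_{n}\) to \(\mathrm{S}_{n+1}\) by fixing the new coordinate. The snippet is our illustration:

```python
import numpy as np

def embed(x):
    # The zero-padding isometry phi: R^n -> R^{n+1}.
    return np.concatenate([x, [0.0]])

x = np.array([1.0, 2.0, 3.0])
g = np.array([2, 0, 1])                    # a permutation of {0, 1, 2}
g_plus = np.array([2, 0, 1, 3])            # its extension fixing the new slot
# Equivariance of the embedding: phi(g . x) = g_plus . phi(x)
assert np.allclose(embed(x[g]), embed(x)[g_plus])
```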
**Free equivariant networks.** Because the dimensions of the space of linear layers \(\mathcal{L}(\mathbf{U}_{i},\mathbf{V}_{i})\) and the vector spaces \(\mathbf{U}_{i}\) usually grow, sequences of _general_ NNs are not finitely-parameterizable and cannot be learned from a finite amount of data. The situation radically simplifies when considering equivariant networks, since the dimensions of \(\mathcal{L}(\mathbf{U}_{i},\mathbf{V}_{i})^{\mathbf{G}}\) and \(\mathbf{U}_{i}^{\mathbf{G}}\) often stabilize. This was previously observed in the context of simple graph NNs by [25], and follows from a general phenomenon known as representation stability [5]. To illustrate it with a concrete example, consider the nested sequence \(\mathbf{U}=\mathbf{V}=\mathbb{R}^{n}\) with the action of \(\mathbf{G}=\mathrm{S}_{n}\) as above. Then \(\mathbf{V}^{\mathbf{G}}\) is one-dimensional and spanned by \(\mathbbm{1}_{n}\), while \(\mathcal{L}(\mathbf{U},\mathbf{V})^{\mathbf{G}}\) is two-dimensional and spanned by \(I_{n},\mathbbm{1}_{n}\mathbbm{1}_{n}^{\top}\), which are basis elements defined for every \(n\). Similarly, [25] obtained explicit free bases for spaces of invariant tensors. Interestingly, all the above basis elements project onto each other; e.g., the orthogonal projection of \(\mathbbm{1}_{n}\) onto \(\mathbb{R}^{n-1}\) is \(\mathbbm{1}_{n-1}\), and similarly \(I_{n},\mathbbm{1}_{n}\mathbbm{1}_{n}^{\top}\) project onto \(I_{n-1},\mathbbm{1}_{n-1}\mathbbm{1}_{n-1}^{\top}\).

Figure 1: Equivariant free neural networks. We use bold font to denote the \(n\)th element in a sequence; see Notation in Section 1 for details.

Motivated by this observation, we say that a sequence of equivariant NNs is _free_ if the weights \(\mathbf{W}_{i}\colon\mathbf{V}_{i}\to\mathbf{U}_{i}\) and biases \(\mathbf{b}_{i}\in\mathbf{U}_{i}\) satisfy

\[\mathbf{W}_{i}=\mathcal{P}_{\mathbf{U}_{i}}\left.\mathbf{W}_{i}^{+}\right|_{\mathbf{V}_{i}},\qquad\text{and}\qquad\mathbf{b}_{i}=\mathcal{P}_{\mathbf{U}_{i}}\mathbf{b}_{i}^{+}.\]
(FreeNN)

For many sequences \(\{\mathbf{V}_{i}\},\{\mathbf{U}_{i}\}\), equation (FreeNN) uniquely determines \(\mathbf{W}_{i}^{+},\mathbf{b}_{i}^{+}\) from \(\mathbf{W}_{i},\mathbf{b}_{i}\), allowing us to uniquely extend a network to accept larger inputs.

**Representation stability.** To understand for which sequences (FreeNN) allows us to extend, we introduce some key definitions from the representation stability literature [5, 6].

**Definition 2.1** (Consistent sequences).: Fix a sequence of compact groups \(\mathcal{G}=\{\mathbf{G}\}_{n\in\mathbb{N}}\) such that \(\mathbf{G}\subseteq\mathbf{G}^{+}\). The family \(\mathcal{V}=\{(\mathbf{V},\mathbf{\varphi})\}_{n\in\mathbb{N}}\) is a _consistent sequence_ of \(\mathcal{G}\)-representations if the following hold true for all \(n\):

1. (**Representations**) The set \(\mathbf{V}\) is an orthogonal \(\mathbf{G}\)-representation;
2. (**Equivariant isometries**) The map \(\mathbf{\varphi}\colon\mathbf{V}\hookrightarrow\mathbf{V}^{+}\) is a \(\mathbf{G}\)-equivariant isometry.

We will identify \(\mathbf{V}\) with its image \(\mathbf{\varphi}(\mathbf{V})\subseteq\mathbf{V}^{+}\), and omit the inclusions \(\mathbf{\varphi}\) unless needed. Importantly, consistent sequences can be combined to form more complex sequences.
In particular, if \(\mathcal{V}=\{(\mathbf{V},\mathbf{\varphi})\}\) and \(\mathcal{U}=\{(\mathbf{U},\mathbf{\psi})\}\) are consistent sequences of \(\{\mathbf{G}\}\)-representations, then so are their sum and tensor product:

\[\mathcal{V}\oplus\mathcal{U}=\{(\mathbf{V}\oplus\mathbf{U},\mathbf{\varphi}\oplus\mathbf{\psi})\}\quad\text{ and }\quad\mathcal{V}\otimes\mathcal{U}=\{(\mathbf{V}\otimes\mathbf{U},\mathbf{\varphi}\otimes\mathbf{\psi})\}.\]

This follows directly from Definition 2.1. We use \(\mathcal{V}^{\oplus k}\) (resp. \(\mathcal{V}^{\otimes k}\)) to denote the direct sum (resp. tensor product) of \(\mathcal{V}\) taken \(k\) times. As a direct by-product of this observation, we obtain that the sequence of linear maps \(\mathcal{L}(\mathcal{V},\mathcal{U})=\{(\mathcal{L}(\mathbf{V},\mathbf{U}),\mathbf{\varphi}\otimes\mathbf{\psi})\}\) is consistent, since it is isomorphic to \(\mathcal{V}\otimes\mathcal{U}\). The following parameter controls the complexity of consistent sequences and ensures that their spaces of invariants are eventually isomorphic.

**Definition 2.2** (Generation degree).: A consistent sequence \(\{\mathbf{V}\}\) is _generated in degree \(d\)_ if \(\mathbb{R}[\mathbf{G}]V^{(d)}=\mathbf{V}\) for all \(n\geq d\). The generation degree is the smallest such \(d\in\mathbb{N}\). The sequence is _finitely-generated_ if the generation degree is finite.

In words, a consistent sequence \(\{\mathbf{V}\}\) is generated in degree \(d\) if, for all \(n\geq d\), \(\mathbf{V}\) equals the span of linear combinations of elements of \(\mathbf{G}\) applied to \(V^{(d)}\). For instance, if \(\mathbf{V}=\mathbb{R}^{n}\) with the action of permutations \(\mathbf{G}=\mathrm{S}_{n}\), then its generation degree is one, as any \(v\in\mathbf{V}\) can be obtained from \(e_{1}=(1,0,\ldots,0)\in V^{(1)}\subseteq\mathbf{V}\) via \(v=\sum v_{i}\,g_{i}\cdot e_{1}\), where \(g_{i}\) swaps the first and \(i\)th components.

The next proposition gives an isomorphism between spaces of invariants in a finitely-generated consistent sequence. This result was first established in the representation stability literature, albeit in a different form. We include a proof for completeness.

**Proposition 2.3** (Isomorphism of invariants).: _If \(\{\mathbf{V}\}\) is generated in degree \(d\), then the orthogonal projections \(\mathcal{P}_{\mathcal{V}}\colon(\mathbf{V}^{+})^{\mathbf{G}^{+}}\to\mathbf{V}^{\mathbf{G}}\) are injective for all \(n\geq d\), and isomorphisms for all large \(n\)._

Proof.: First, the projection \(\mathcal{P}_{\mathbf{V}}\) is \(\mathbf{G}\)-equivariant because \(\mathbf{G}\) acts orthogonally. Second, it maps \(\mathbf{G}^{+}\)-invariants in \(\mathbf{V}^{+}\) to \(\mathbf{G}\)-invariants in \(\mathbf{V}\) because \(\mathbf{G}\subseteq\mathbf{G}^{+}\). Third, we prove injectivity for \(n\geq d\). Suppose \(\mathcal{P}_{\mathbf{V}}v=0\) for some \(v\in(\mathbf{V}^{+})^{\mathbf{G}^{+}}\). For any \(u\in\mathbf{V}^{+}\), write \(u=\sum_{i}g_{i}u_{i}\) where \(u_{i}\in\mathbf{V}\) and \(g_{i}\in\mathbf{G}^{+}\). We then have \(\langle v,u\rangle=\langle v,\sum_{i}u_{i}\rangle=0\) because \(v\) is invariant and \(\sum_{i}u_{i}\in\mathbf{V}\). Since \(u\in\mathbf{V}^{+}\) was arbitrary, we conclude that \(v=0\).
The injectivity of \(\mathcal{P}_{\mathbf{V}}\) shows that \(\dim(\mathbf{V})^{\mathbf{G}}\geq\dim(\mathbf{V}^{+})^{\mathbf{G}^{+}}\) for all \(n\geq d\); hence the sequence of dimensions \(\dim(\mathbf{V})^{\mathbf{G}}\) eventually stabilizes, at which point the projections become isomorphisms.

Note that (FreeNN) precisely requires \(\mathbf{W}^{+}\) and \(\mathbf{b}^{+}\) to project to \(\mathbf{W},\mathbf{b}\) inside the consistent sequences \(\mathcal{L}(\mathcal{V}_{i},\mathcal{U}_{i})\) and \(\mathcal{U}_{i}\), respectively. Thus, Proposition 2.3 implies that for all large \(n\), we can parameterize infinite sequences of equivariant linear layers with a finite set of parameters \(\alpha_{1},\ldots,\alpha_{\ell}\), just as in (1). This result enables us to train free NNs as we describe in Section 4.

**Examples.** We collect a few crucial examples and include additional ones in the appendix.

_Scalar sequence._ The sequence \(\mathcal{S}=\{\mathbb{R}\}\) together with \(\mathbf{\varphi}(x)=x\) yields a consistent sequence for any sequence of groups acting trivially, i.e., \(g\cdot x=x\). It is generated in degree one. By setting the last layer \(\mathbf{V}_{L+1}=\mathbb{R}\) in the free NN architecture of Figure 1, we obtain a sequence of invariant functions \(\mathbf{f}\).

_Permutation sequences._ Let \(\mathcal{R}=\{\mathbb{R}^{n}\}\) be the consistent sequence with zero-padding embeddings \(\mathbf{\varphi}(x)=(x^{\top},0)^{\top}\) and the action of the symmetric group \(\mathbf{G}=\mathrm{S}_{n}\) by permuting coordinates. Consider \(\mathcal{V}_{k}=\mathcal{U}_{k-1}=\mathcal{R}^{\otimes k}\), which is generated in degree \(k\). The dimensions of the spaces of weights and biases for low-order tensors are eventually

\[\dim\mathcal{L}(\mathbf{V}_{1},\mathbf{U}_{1})^{\mathbf{G}}=4,\ \dim\mathbf{U}_{1}^{\mathbf{G}}=2,\ \ \dim\mathcal{L}(\mathbf{V}_{2},\mathbf{U}_{2})^{\mathbf{G}}=52,\ \ \text{and}\ \ \dim\mathbf{U}_{2}^{\mathbf{G}}=5.\]

**When does stability kick in?** Understanding the exact level at which the projections become isomorphisms is important, yet an exact characterization is rather technical. Stabilization occurs at the so-called **presentation degree** of the sequence, which might be larger than the generation degree in general. However, the two agree for many relevant examples, such as the sequence \(\mathcal{R}^{\otimes k}\) above. We include a formal definition and a discussion in Appendix B.

## 3 Generalization across dimensions and compatibility

Free NNs often do not generalize correctly to other dimensions. In Section 5, we provide examples where a free NN yields excellent test error in its trained dimension while exhibiting errors \(10^{10}\) times worse in other dimensions. This discrepancy arises due to overparametrization, as there are numerous ways to encode the same function in a fixed dimension, and yet not every encoding extends seamlessly. In this section, we propose a regularization strategy that often leads to better generalization across dimensions. Specifically, inspired by a similar condition for free convex sets [23], we introduce the following compatibility condition.

**Definition 3.1** (Compatible networks).: A sequence of maps \(\{\mathbf{f}\colon\mathbf{V}\to\mathbf{U}\}\) is (intersection-)_compatible_ if \(\mathbf{f}^{+}|_{\mathbf{V}}=\mathbf{f}\).
A sequence of equivariant networks (Figure 1) is compatible if

\[\mathbf{W}_{i}^{+}|_{\mathbf{V}_{i}}=\mathbf{W}_{i},\qquad\mathbf{b}_{i}^{+}=\mathbf{b}_{i},\qquad\text{and}\qquad\mathbf{\sigma}_{i}^{+}|_{\mathbf{U}_{i}}=\mathbf{\sigma}_{i}.\]
(CompNN)

Intuitively, this condition ensures that applying a function in a fixed dimension and then extending to a higher dimension is equivalent to first extending and then applying the function. Layer-wise compatibility (CompNN) guarantees that the sequence of NNs \(\{\mathbf{f}\colon\mathbf{V}_{1}\to\mathbf{V}_{L+1}\}\) given by Figure 1 is compatible, or equivalently, that the diagram in that figure commutes. The condition (CompNN) is strictly stronger than (FreeNN): compatible NNs are free, but free NNs are not compatible in general.

**Compatibility in the wild.** Compatibility is a natural condition satisfied by sequences of equivariant maps \(\{\mathbf{f}\}\) arising in many problems of interest. Here we list some examples.

_Graph parameters._ Graph invariants are \(\mathrm{S}_{n}\)-equivariant because they only depend on the underlying graph structure rather than on its labeling. Several graph invariants are also compatible with zero-padding, which is equivalent to adding an isolated vertex. For example, the max-cut value and the numbers of triangles and cycles do not change if we add an isolated vertex; hence they are compatible.

_Matrix mappings._ Linear algebra operations are often compatible. For instance, the experiments in [25] aimed to learn the following matrix mappings: \(X\mapsto\frac{1}{2}(X+X^{\top})\), \(X\mapsto\operatorname{diag}(\operatorname{diag}(X))\), \(X\mapsto\operatorname{tr}(X)\), and \(X\mapsto\operatorname{argmax}_{\|v\|=1}\|Xv\|\). All of these are compatible sequences with the zero-padding embedding described in Section 2, and all are \(\operatorname{S}_{n}\)-equivariant.

_Orthogonal invariance._ The papers [11, 36] consider the task of learning the function

\[\mathbf{f}(x_{1},x_{2})=\sin(\|x_{1}\|)-\frac{\|x_{2}\|^{3}}{2}+\frac{x_{1}^{\top}x_{2}}{\|x_{1}\|\|x_{2}\|}, \tag{2}\]

defined on \((\mathbb{R}^{n})^{\oplus 2}\) with \(n=5\), from evaluation data. This function is \(\operatorname{O}(n)\)-invariant if we let rotations \(g\) act by \((g\cdot x_{1},g\cdot x_{2})\). Embedding \((\mathbb{R}^{n})^{\oplus 2}\) into \((\mathbb{R}^{n+1})^{\oplus 2}\) by zero-padding each vector, we see that \(\{\mathbf{f}\}\) is a compatible sequence of invariant functions.

_Orthogonal equivariance._ The \(\operatorname{O}(3)\)-equivariant task in [11, 36] consists of taking as input \(n=5\) particles in space, given by their masses and position vectors \((m_{i},x_{i})_{i=1}^{n}\in(\mathbb{R}\oplus\mathbb{R}^{3})^{\oplus n}\), and outputting their moment of inertia matrix \(\mathbf{f}(m_{i},x_{i})=\sum_{i=1}^{n}m_{i}(x_{i}^{\top}x_{i}I_{3}-x_{i}x_{i}^{\top})\). Embedding \(\mathbf{V}=(\mathbb{R}\oplus\mathbb{R}^{3})^{\oplus n}\) into \(\mathbf{V}^{+}\) by zero-padding and letting \(\mathbf{U}=\mathbb{S}^{3}\), we see that \(\{\mathbf{f}\}\) is a compatible sequence of maps. Furthermore, let \(\mathbf{G}=\operatorname{S}_{n}\times\operatorname{O}(3)\) act on \(\mathbf{V}\) by \((\pi,g)\cdot(m_{i},x_{i})_{i=1}^{n}=(m_{\pi^{-1}(i)},gx_{\pi^{-1}(i)})_{i=1}^{n}\), and on \(\mathbf{U}\) by \((\pi,g)\cdot X=gXg^{-1}\) for \((\pi,g)\in\mathbf{G}\) (so permutations act trivially). Then each \(\mathbf{f}\) is \(\mathbf{G}\)-equivariant. There are many more examples of compatible mappings in the literature [23].
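Compatibility is easy to test numerically for these examples. For instance, for the trace under zero-padding (adding an isolated row and column of zeros), a minimal check of \(\mathbf{f}^{+}|_{\mathbf{V}}=\mathbf{f}\) reads as follows; the snippet is ours:

```python
import numpy as np

def zero_pad(X, m):
    # Embed an n x n matrix into R^{m x m} by zero-padding.
    Y = np.zeros((m, m))
    Y[: X.shape[0], : X.shape[1]] = X
    return Y

X = np.random.default_rng(1).normal(size=(4, 4))
# Compatibility f^+|_V = f for the trace: tr(pad(X)) equals tr(X).
assert np.isclose(np.trace(zero_pad(X, 7)), np.trace(X))
```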
Therefore, we proceed to study compatible NNs. We derive conditions ensuring the compatibility of the linear layers and show that several standard activation functions are compatible.

**Compatible linear layers.** Since we only have data at some finite level \(n_{0}\), we ask when fixed-dimensional weights \(W_{i}^{(n_{0})}\) extend to a sequence satisfying (CompNN). The set of such \(W_{i}^{(n_{0})}\) forms a linear space, and the next theorem characterizes a subspace of it, enabling us to find a basis for that subspace in Section 4. We defer its proof to Appendix C.

**Assumption 1**.: The sequences \(\mathcal{V}=\{\mathbf{V}\},\ \mathcal{U}=\{\mathbf{U}\}\) are obtained from direct sums and tensor products of the same sequence \(\mathcal{V}_{0}\). The sequence \(\mathcal{V}\) is generated in degree \(d_{\mathrm{g}}\) and presented in degree \(d_{\mathrm{p}}\). The mapping \(W^{(n_{0})}\in\mathcal{L}(V^{(n_{0})},U^{(n_{0})})^{G^{(n_{0})}}\) at level \(n_{0}\geq d_{\mathrm{p}}\) satisfies

\[W^{(n_{0})}(V^{(m)})\subseteq U^{(m)}\text{ for }m\leq d_{\mathrm{g}}.\]

**Theorem 3.2**.: _Suppose that the sequences \(\mathcal{V}=\{\mathbf{V}\},\ \mathcal{U}=\{\mathbf{U}\}\) and linear map \(W^{(n_{0})}\in\mathcal{L}(V^{(n_{0})},U^{(n_{0})})^{G^{(n_{0})}}\) satisfy Assumption 1. Then there is a unique extension \(\{\mathbf{W}\}\) of \(W^{(n_{0})}\) to a sequence of equivariant linear maps satisfying (CompNN)._

Recall that we define the presentation degree in the appendix. In words, Theorem 3.2 says that if \(W^{(n_{0})}\) is equivariant and its restrictions to lower-dimensional subspaces satisfy (CompNN), then it uniquely extends to higher-dimensional weights satisfying (CompNN). In Section 4, we use this result to find a basis for such \(W^{(n_{0})}\), and use it to train compatible equivariant NNs.

**Compatible activation functions.** The majority of equivariant nonlinearities proposed in the literature are compatible, as the following examples show.

_Entrywise activations and permutations._ Let \(\sigma\colon\mathbb{R}\to\mathbb{R}\) be any nonlinear function. Define \(\mathbf{\sigma}\colon(\mathbb{R}^{n})^{\otimes k}\to(\mathbb{R}^{n})^{\otimes k}\) by applying \(\sigma\) to each entry of a tensor. Then each \(\mathbf{\sigma}\) is \(\mathbf{G}=\mathrm{S}_{n}\)-equivariant and \(\{\mathbf{\sigma}\}\) is a compatible sequence.

_Bilinear layers._ The authors of [11] map \((\mathbf{V}\otimes\mathbf{U})\oplus\mathbf{V}\to\mathbf{U}\) by sending \((v\otimes u,v^{\prime})\mapsto\langle v,v^{\prime}\rangle u\), which is equivariant for _any_ group action on \(\mathbf{V},\mathbf{U}\). This sequence of maps is also compatible.

_Gated nonlinearities._ The authors of [37] map \(\mathbf{V}\oplus\mathbb{R}\to\mathbf{V}\) by \((v,\alpha)\mapsto v\sigma(\alpha)\) where \(\sigma\colon\mathbb{R}\to\mathbb{R}\) is a nonlinearity. These are equivariant maps for any group acting trivially on the \(\mathbb{R}\) component, and they form a compatible sequence.

## 4 A computational recipe for learning free neural networks

In this section, we describe an algorithm to train free and compatible NNs. We do so in two stages. First, we train our NN at a large enough level \(n_{0}\) by finding bases for the weights and biases at that level and optimizing over the coefficients in this basis. Second, for any higher level \(n>n_{0}\), we extend the trained NN at level \(n_{0}\) by solving linear systems for the higher-dimensional weights and biases; for any lower level \(n<n_{0}\), we project the NN using (FreeNN).
Throughout, we fix consistent sequences \(\mathcal{V}_{i},\mathcal{U}_{i}\) and sequences of compatible equivariant nonlinearities \(\{\mathbf{\sigma}_{i}\}\). We summarize our procedure in Algorithm 1. In Appendix D, we show that our procedure provably generates free and compatible NNs. **Finding bases for free NNs.** To train free networks (FreeNN), we fix \(n_{0}\)_exceeding the presentation degrees_ of \(\mathcal{V}_{i}\otimes\mathcal{U}_{i}\) and \(\mathcal{U}_{i}\) for all \(i=1,\dots,L\), and find a basis for equivariant weights and invariant biases at level \(n_{0}\) using the algorithm of [11]. Specifically, Theorem 1 in [11] states that \(b\in(U_{i}^{(n_{0})})^{G^{(n_{0})}}\) if and only if \(b\) satisfies \[B\,b=0\quad\left(\text{for all }B\in\mathcal{B}_{i}^{(n_{0})}\right)\qquad\text{ and}\qquad(D-I)\,b=0\quad\left(\text{for all }D\in\mathcal{D}_{i}^{(n_{0})}\right) \tag{3}\] where \(\mathcal{B}_{i}^{(n_{0})}\) is a basis for the Lie algebra of \(G^{(n_{0})}\) and \(\mathcal{D}_{i}^{(n_{0})}\) is a finite set of discrete generators for \(G^{(n_{0})}\). Here \(B\) and \(D\) are represented as matrices acting on \(U_{i}^{(n_{0})}\). Since equivariant linear maps \(W\colon V_{i}^{(n_{0})}\to U_{i}^{(n_{0})}\) are just the invariants in \(\mathcal{L}(V_{i}^{(n_{0})},U_{i}^{(n_{0})})\), they constitute the kernel of the analogously defined set of equations. Thus, finding a basis for equivariant weights and invariant biases reduces to finding a basis for the kernel of a matrix (3), which is typically large and sparse. This can be tackled either using a Krylov-subspace method [11] or a sparse LU decomposition [14, 21]. **Finding bases for compatible NNs.** If we instead want to find a basis for layers satisfying (CompNN), we fix \(n_{0}\)_exceeding the presentation degrees_ of \(\mathcal{V}_{i}\) and let \(d_{i}\) be its generation degree. For most representations, (CompNN) implies zero bias, so we focus on the weights. We find a basis for weights \(W_{i}^{(n_{0})}\) satisfying the hypotheses of Theorem 3.2 by noting that Assumption 1 holds if and only if \(W_{i}^{(n_{0})}\) is \(G^{(n_{0})}\)-equivariant and satisfies \[\left[\mathcal{P}_{V_{i}^{(m)}}\otimes\left(I-\mathcal{P}_{U_{i}^{(m)}}\right) \right]\,\operatorname{vec}\left(W_{i}^{(n_{0})}\right)=0\quad(\text{for all }m\leq d_{i}),\] where \(\operatorname{vec}\) denotes the vectorization of a matrix by stacking its columns. This characterization allows us to find a basis for weights by finding a basis for the kernel of a sparse matrix. **Extending to arbitrary dimensions.** For any \(n>n_{0}\), we extend our trained network at level \(n_{0}\) by finding the unique equivariant weights and biases at level \(n\) projecting onto the trained ones as in (FreeNN). Formally, we find the unique \(W_{i}^{(n)}\) and \(b_{i}^{(n)}\) satisfying (3) with \(n_{0}\) replaced by \(n\) and \[\left[\mathcal{P}_{V_{i}^{(n_{0})}}\otimes\mathcal{P}_{U_{i}^{(n_{0})}}\right]\, \operatorname{vec}\left(W_{i}^{(n)}\right)=\operatorname{vec}\left(W_{i}^{(n_{0 })}\right),\quad\text{and}\qquad\mathcal{P}_{U_{i}^{(n_{0})}}\,b_{i}^{(n)}=b_{i }^{(n_{0})}.\] (ExtSyst) This amounts to solving a linear system, which is again typically sparse. It can be solved via iterative methods, e.g., stochastic gradient descent or LSQR [27]. For any \(n<n_{0}\), we set \(W_{i}^{(n)}\) and \(b_{i}^{(n)}\) via (FreeNN), which is equivalent to (ExtSyst) with \(n\) and \(n_{0}\) swapped. 
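As a concrete illustration of the basis computation behind (3) for \(G=\mathrm{S}_{n}\) (a finite group, so only the discrete-generator constraints appear), the toy sketch below stacks the constraints for two standard generators and reads off bases from null spaces; the helper names are ours, and a real implementation would exploit sparsity as discussed above.

```python
import numpy as np
from scipy.linalg import null_space

def perm_matrix(p, n):
    """Matrix of a permutation acting on R^n by permuting coordinates."""
    P = np.zeros((n, n))
    P[np.arange(n), p] = 1.0
    return P

n = 4
# Two generators of S_n: a transposition and the n-cycle.
gens = [perm_matrix([1, 0, 2, 3], n), perm_matrix([1, 2, 3, 0], n)]

# Invariant biases b in R^n: stack (D - I) b = 0 over the generators.
C_bias = np.vstack([D - np.eye(n) for D in gens])
B_bias = null_space(C_bias)
print(B_bias.shape)  # (4, 1): only multiples of the all-ones vector.

# Equivariant weights W: R^n -> R^n. Equivariance D W = W D reads
# (I kron D - D^T kron I) vec(W) = 0 for each generator D.
C_w = np.vstack([np.kron(np.eye(n), D) - np.kron(D.T, np.eye(n)) for D in gens])
B_w = null_space(C_w)
print(B_w.shape)  # (16, 2): spanned by the identity and the all-ones matrix.
```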
## 5 Numerical Experiments In this section, we use Algorithm 1 to learn some of the compatible mappings from Section 3. Our implementation is based on that of [11], and is available at [https://github.com/mateodd25/free-nets](https://github.com/mateodd25/free-nets). For each experiment, we aim to learn a sequence of mappings \(\{\boldsymbol{f}\colon\boldsymbol{V}_{1}\to\boldsymbol{V}_{L+1}\}\). To do so, we fix a level \(n_{0}\) and randomly generate data \(\{(X_{i},f^{(n_{0})}(X_{i}))\}\subseteq V_{1}^{(n_{0})}\times V_{L+1}^{(n_{0 })}\). To fit the data, we consider the architectures described in Appendix E and compute bases for weights and biases as we describe in Section 4. We then optimize over the coefficients in those bases using ADAM [18] starting from random initialization. Finally, we extend the trained network at level \(n_{0}\) to several levels \(n\). We evaluate our extended network on random test data generated at each level from the ground-truth sequence of maps. We consider five examples from Section 3: the trace \(f(X)=\operatorname{tr}(X)\), diagonal extraction \(f(X)=\operatorname{diag}(\operatorname{diag}(X))\), symmetric projection \(f(X)=(X+X^{\top})/2\), the top right-singular vector \(f(X)=\operatorname{argmax}_{\|v\|=1}\|Xv\|\), and the function defined in (2). The first and last examples are invariant functions; the rest are equivariant mappings. For the first four examples, we set \(\boldsymbol{G}=\operatorname{S}_{n}\), and for the last one, we set \(\boldsymbol{G}=\operatorname{O}(n).\) We train with 3000 data points generated randomly and evaluate the test error at each dimension on 1000 fresh samples. We use the mean squared error (MSE) as our loss function for all the experiments except for learning the top singular vector. For that experiment, we use the squared sine loss proposed by [25], i.e., \(\ell(\widehat{y},y)=1-\langle\widehat{y},y\rangle^{2}/\|\widehat{y}\|^{2}\|y \|^{2}\). The resulting errors in each dimension for both free and compatible NNs are shown in Figure 2. The test errors at the trained dimension are competitive with those obtained in [11, 25, 36] for learning the same mappings. Due to memory limitations, we set the training level to \(n_{0}=3\) for the orthogonal invariance task, whereas Algorithm 1 would require \(n_{0}=6\) to uniquely extend free networks, since the presentation degree of the linear maps between the hidden layers is equal to six -- see Theorem D.1 in the appendix. This highlights an advantage of imposing compatibility -- it allows us to uniquely extend a trained network from lower-dimensional data. Moreover, we see that imposing our compatibility condition yields substantially lower errors across dimensions. Remarkably, the test error for free NNs increases by many orders of magnitude for simple functions, a phenomenon that was previously noted by [25]. This further underscores the importance of compatibility when generalizing to other dimensions. ## 6 Conclusions, limitations, and future work We leveraged representation stability to prove that a broad family of equivariant NNs extends to higher dimensions, which enables us to train an infinite sequence of NNs using a finite amount of data. Extending networks to higher dimensions often results in substantial test errors. To improve generalization, we introduced a compatibility condition relating networks across dimensions. We characterized compatibility and developed an algorithm to train free and compatible NNs. 
Finally, we applied our method to several numerical examples from the literature. In these examples, free NNs trained without imposing compatibility generalize poorly to higher dimensions, even when learning simple linear functions. In contrast, compatible NNs generalize significantly better. We hope to address several limitations of our approach in future work. First, while our compatibility condition improves generalization, it can be restrictive in certain settings. For example, it often yields zero bias. It would be interesting to learn the relation between maps in different dimensions directly from the data rather than assume it is known in advance. Second, for examples like the leading eigenvector, the error increases substantially when extending to higher dimensions, even if we impose compatibility. It would be interesting to improve generalization to higher dimensions in these examples by imposing additional compatibility conditions. Third, extending a trained network to higher dimensions in our examples involved solving large and sparse systems, which we currently do not fully exploit. Incorporating sparsity in the toolbox of [11] will enable training larger networks. Figure 2: Test errors across dimensions for free and compatible networks. Each experiment is run three times; the lighter bands show the max and min runs, while the bold line shows the average. For diagonal extraction and symmetric projection, we measure the average MSE per entry. Vertical gray lines mark the dimension used for learning. ## Acknowledgments We thank Venkat Chandrasekaran for his constant support and encouragement during the development of this work. Both authors were funded by Venkat Chandrasekaran through AFOSR grant FA9550-20-1-0320. MD was partially funded by Joel Tropp through ONR BRC Award N00014-18-1-2363, and NSF FRG Award 1952777.
2308.01682
Evaluating Link Prediction Explanations for Graph Neural Networks
Graph Machine Learning (GML) has numerous applications, such as node/graph classification and link prediction, in real-world domains. Providing human-understandable explanations for GML models is a challenging yet fundamental task to foster their adoption, but validating explanations for link prediction models has received little attention. In this paper, we provide quantitative metrics to assess the quality of link prediction explanations, with or without ground-truth. State-of-the-art explainability methods for Graph Neural Networks are evaluated using these metrics. We discuss how underlying assumptions and technical details specific to the link prediction task, such as the choice of distance between node embeddings, can influence the quality of the explanations.
Claudio Borile, Alan Perotti, André Panisson
2023-08-03T10:48:37Z
http://arxiv.org/abs/2308.01682v1
# Evaluating Link Prediction Explanations for Graph Neural Networks ###### Abstract Graph Machine Learning (GML) has numerous applications, such as node/graph classification and link prediction, in real-world domains. Providing human-understandable explanations for GML models is a challenging yet fundamental task to foster their adoption, but validating explanations for link prediction models has received little attention. In this paper, we provide quantitative metrics to assess the quality of link prediction explanations, with or without ground-truth. State-of-the-art explainability methods for Graph Neural Networks are evaluated using these metrics. We discuss how underlying assumptions and technical details specific to the link prediction task, such as the choice of distance between node embeddings, can influence the quality of the explanations. ## 1 Introduction Intelligent systems in the real world often use machine learning (ML) algorithms to process various types of data. However, graph data present a unique challenge due to their complexity. Graphs are powerful data representations that can naturally describe many real-world scenarios where the focus is on the connections among numerous entities, such as social networks, knowledge graphs, drug-protein interactions, traffic and communication networks, and more [9]. Unlike text, audio, and images, graphs are embedded in an irregular domain, which makes some essential operations of existing ML algorithms inapplicable [17]. GML applications seek to make predictions, or discover new patterns, using graph-structured data as feature information: for example, one might wish to classify the role of a protein in a biological interaction graph, predict the role of a person in a collaboration network, or recommend new friends in a social network. Unfortunately, the majority of GML models are black boxes, owing to their fully subsymbolic internal knowledge representation, which makes it hard for humans to understand the reasoning behind the model's decision process. This widely recognized fundamental flaw has multiple negative implications: \((i)\) difficulty of adoption by domain experts [19], \((ii)\) non-compliance with regulation (e.g., GDPR) [14], \((iii)\) inability to detect learned spurious correlations [34], and \((iv)\) risk of deploying biased models [25]. The eXplainable Artificial Intelligence (XAI) research field tackles the problem of making modern ML models more human-understandable. The goal of XAI techniques is to extract from the trained ML model comprehensible information about its decision process. Explainability is typically performed _a posteriori_ - it is a process that takes place after the ML model has been trained, and possibly even deployed. Despite a growing number of techniques for explaining GML models, most of them target node and graph classification tasks [47]. Link Prediction (LP) is a paradigmatic problem, yet it has been relatively overlooked from the explainability perspective, especially since it has often been ascribed to knowledge graphs. There are many ML techniques to tackle the LP problem, but the most popular approaches are based on an encoder/decoder architecture that learns node embeddings. In this case, LP explanations are based on the interaction of pairs of node representations. 
It is still not clear how different graph ML architectures affect the explainer's behavior, and in the particular case of link prediction we have observed how explanations can be susceptible to technical choices in the implementation of both the encoding and the decoding stages. Regarding the validation of explanations and explainers, few works have considered the study and evaluation of GML explainers for LP [20]. Furthermore, despite growing interest regarding the validation of explanations, there is currently no consensus on the adoption of any standard protocol or set of metrics. Given a formal definition for the problem of explaining link predictions, our Research Questions are therefore the following: * **RQ1** How can we validate LP explainers and measure the quality of their explanations? * **RQ2** What hidden characteristics of LP models can be revealed by the explainers? What can we learn about the different LP architectures, given the explanations to their decisions? In this paper, we propose a theoretical framing and a set of experiments for the attribution of GML models on the LP task, considering two types of Graph Neural Networks: Variational Graph Auto-Encoders (VGAE) [21] and Graph Isomorphism Networks (GIN) [42]. We first perform a validation of the explanation methods on synthetic datasets such as Stochastic Block Models and Watts-Strogatz graphs, where we can define the ground truth for the explanations and thus compute the confusion matrices and report sensitivity (TPR) and specificity (TNR) for the attribution results. For real-world datasets with no ground-truth (CORA, PubMed and DDI) [31, 39], we exploit an adaptation of the insertion/deletion curves, a technique originally designed to validate computer vision models [26], which allows us to quantitatively compare the produced explanations against a random baseline by inserting/removing features and/or edges based on their importance with respect to the considered attribution method. ## 2 Related Work ### State-of-the-art Explainers for GML models Considering the blooming research in the field of XAI for GNNs, and the increasing quantity of new methods that are proposed, we refer to the taxonomy identified in Yuan et al. [47] to pinpoint the basic foundational principles underlying the different methods, and we choose a few well-known models as representatives for broader classes of methods and use them in the remainder of the paper. Namely, we consider attribution methods based on perturbation methods, gradient-based approaches, and decomposition - plus a hybrid one. A more detailed description of these classes and the selected explainers is given in Section 3. We note that all these methods were originally discussed only in the context of node/graph classification. Perturbation-based explainers study the output variations of a ML model with respect to different input perturbations. Intuitively, when important input information is retained, the predictions should be similar to the original predictions. Existing methods for computer vision learn to generate a mask to select important input pixels to explain deep image models. Brought to GML, perturbation-based explainers learn masks that assign an importance to edges and/or features of the graph [24, 29, 48]. Arguably, the most widely-known perturbation-based explainer for GNNs is _GNNExplainer_[45]. Gradient/feature-based methods decompose and approximate the input importance by considering the gradients or hidden feature map values [35, 33, 52, 30, 32]. 
While other techniques just need to query the ML black-box at will (_model-agnostic_ methods), explainers of this class require access to the internal weights of the ML model, and are therefore labelled as _model-aware_ or _model-dependent_. Another popular way to explain ML models is decomposition methods, which measure the importance of input features by decomposing the final output of the model layer-by-layer according to layer-specific rules, up to the input layer. The results are regarded as the importance scores of the corresponding input features. ### Link Prediction Link prediction is a key problem for network-structured data, with the goal of inferring missing relationships between entities or predicting their future appearance. Like node and graph classification, LP models can exploit both node features and the structure of the network; typically, the model output is an estimated probability for a non-existing link. Due to the wide range of real-world domains that can be modelled with graph-based data, LP can be applied to solve a large number of tasks. In social networks, LP can be used to infer social interactions or to suggest possible friends to the users [11]. In the field of network biology and network medicine, LP can be leveraged to predict drug-drug, drug-disease, and protein-protein interactions and thus advance the speed of drug discovery [1]. As a ML task, LP has been widely studied [22], and there exists a wide range of link prediction techniques. These approaches span from information-theoretic to clustering-based and learning-based; deep learning models represent the most recent techniques [50]. The idea of enriching link prediction models with semantically meaningful auxiliary information has seldom been explored, and mostly for simpler models, such as recommender systems [7], or with hand-crafted feature extraction [12]. These approaches do not pair with the complex nature of deep GML models, where the feature extraction phase is part of the learning process, and models learn non-interpretable embeddings for each node. Finally, even though there are many LP approaches more advanced than VGAE and GIN, these two architectures are the basis of many popular LP approaches [4] and should be sufficient for the evaluation of the selected explanation techniques. Regarding methods explicitly proposed to explain/interpret GNN-based LP models, Wang et al. [37] follow the intuition of focusing on the embeddings of pairs of nodes. Their explanations correspond to the attention scores of the aggregation step for the context interactions, and therefore they only give a first useful indication of important edges (and not features) for the prediction; this preliminary information should be paired with a downstream explainer, as the authors point out. For Xie et al. [40], an explanation is a subgraph of the original graph that focuses on important edges, but ignores node features, which are an important aspect in the decision process of a GNN. While the overall settings have differences, our work and their approach share the idea of considering embedding representations to produce graph explanations. The task of explaining LP black-box models has been considered in the context of Knowledge Graphs [27, 16], but KGs consider labeled relations that must be taken into account and contribute actively to the explanation. When considering unlabeled edges, a different approach for explaining the LP task is required. 
Regarding the LP frameworks that incorporate features such as distance encoding and hyperbolic encoding [51, 10, 43, 54], we believe that there should be a community-wide discussion about how such features can be incorporated in the proposed explanations. In our view, while these frameworks are very powerful for capturing features that are important for the LP task, none of the current attribution methods is able to assign an explanation to such features. Closely related to our work, recent attention has been paid to the topic of systematically evaluating the produced explanations [28, 13, 3, 2], but exclusively for node/graph classification tasks. Here we fill the gap for the LP task. ## 3 Explaining Link Predictions Given a graph \(G=(V,E)\) with set of nodes \(V\) and set of edges \(E\subseteq V\times V\), and a node-feature matrix \(X\in\mathbb{R}^{|V|\times F}\), link prediction estimates the probability that an unseen edge between two nodes \(i,j\in V\), \((i,j)\notin E\), is missing (e.g., when reconstructing a graph or predicting a future edge). Formally, a link prediction model is a function \(\phi_{G,X}:V\times V\mapsto[0,1]\) that, given \(G\) and \(X\), maps every pair of nodes in \(V\) to a probability \(p\in[0,1]\). A common approach for LP tasks is to learn a node representation in a vector space (_encoder_\(\operatorname{Enc}_{G,X}:V\mapsto\mathbb{R}^{d}\)), and then estimate the edge probability from pairwise distances in this latent space (_decoder_\(\operatorname{Dec}:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto[0,1]\)). Most encoders are currently based on a message-passing mechanism that learns a latent representation of the nodes via an iterative aggregate-update mechanism. At each iteration, each node in the graph receives messages from neighboring nodes (their current embeddings). The messages are then aggregated through a permutation-invariant function, and the node embedding is subsequently updated using a non-linear learnable function, such as a multi-layer perceptron, of the current node embedding and the aggregated messages [53, 8]. Decoders for link prediction usually compute a similarity function between node embeddings, such as the inner product between two node embeddings, followed by a normalization function, such as a sigmoid function, to obtain the probability of a link between the two nodes. ### Attribution methods for link prediction An LP explainer implements a function that, given an edge \((i,j)\) and a model to explain, maps the edges in \(E\) and the node features in \(X\) to their respective explanation scores. The higher the explanation score, the more important the edge (or the feature) is for the model to estimate the probability of \((i,j)\). For this work, we have selected representative LP explainers basing our choice on _(i)_ their belonging to different classes of the taxonomy described by Yuan et al. [47], to have a representative set of explainers, and _(ii)_ their adoption and availability of code. Namely, we consider attribution methods based on perturbation (_GNNExplainer_[23]), gradient-based approaches (_Integrated Gradients_ (IG) [36]), decomposition (_Deconvolution_[49]) - plus a hybrid one (_Layer-wise relevance propagation_ (LRP) [5]). _GNNExplainer_[23] searches for a subgraph \(G_{S}\) and a subset of features \(X_{S}\) of the original dataset \(G,X\) that maximizes the mutual information between the outputs of \(G,X\) and \(G_{S},X_{S}\). 
Since LP outputs a probability, the goal is reduced to finding a \(G_{S}\) that maximizes the probability of the model output while enforcing sparseness in \(G_{S}\). Therefore, explanation scores are defined as a mask on edges and node features. GNNExplainer provides explanations for LP with no change to its optimization goal, but the model's encoder and decoder must be plugged in so that the edge and feature masks can be properly estimated. In our setting, when a model predicts a link \((i,j)\), GNNExplainer learns a single mask over all links and features that are in the computation graphs of \(i\) and \(j\). _Integrated Gradients_ (IG) [36] is an axiomatic attribution method that aims to explain a model's predictions in terms of its features. It is designed to satisfy two axioms, sensitivity and implementation invariance, by analysing the gradients of the model with respect to its input features. In the case of link prediction, IG assigns positive and negative explanation scores to each link and each node feature, depending on how sensitive the model's prediction is as these inputs change. _Deconvolution_[49], first introduced for the explanation of convolutional neural networks in image classification, is a saliency method that uses a deconvolution operation to perform a backward propagation of the original model. It highlights which features or edges are activated the most, and the attribution output consists of positive and negative scores for edges and node features. _Layer-wise relevance propagation_ (LRP) [5] is based on a backward propagation mechanism applied sequentially to all layers of the model. For a target neuron, its score is represented as a linear approximation of neuron scores from the previous layer. Here, the model output score represents the initial relevance, which is decomposed into values for each neuron of the underlying layers, based on predefined rules. In this paper we use the \(\varepsilon\)-stabilized rule as in [6]. To illustrate how the above attribution methods work in practice for LP, we start with a white-box message-passing model for link prediction on a toy example given by the graph shown in Figure 1 (left) with 5 nodes and 3 edges, \(V=(a,b,x,y,z)\), \(E=\{(a,x),(a,y),(a,z)\}\), and a feature matrix \(X\) with two node features defined as \[X=\begin{bmatrix}0.5&0.5\\ 1&0\\ 1&0\\ 0&1\\ 0.5&0.5\end{bmatrix}. \tag{1}\] We define the embeddings \(e_{i}\in\mathbb{R}^{d},i\in V\) of the graph nodes as \[e_{ia}=\frac{1}{|\partial i|}\sum_{j\in\partial i}X_{ja}+X_{ia},\ a=1,\ldots,d, \tag{2}\] where \(\partial i\equiv\{j\in V:i\neq j,(i,j)\in E\}\) indicates the set of first neighbors of node \(i\). The probability of an edge between two nodes (the decoder) is the cosine similarity between the node embeddings. We ask to explain the prediction for the link \((a,b)\). Here the edge \((a,x)\) has positive score because it "pulls" the embedding of \(a\) closer to \(b\), while the edge \((a,y)\) has negative score because it "pushes" the embedding of \(a\) away from \(b\). The edge \((a,z)\) is neutral. Figure 1 shows the edge explanations provided by GNNExplainer (center) and IG (right). IG is able to reflect the ground truth as it provides both positive and negative scores, while GNNExplainer considers positive masks only, thus returning a partial result. The explanations from Deconvolution and LRP are similar to the one produced with IG. 
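To make the toy example fully concrete, the following NumPy sketch implements the white-box model above (the Eq. (2) encoder and the cosine-similarity decoder) and recovers the signs just discussed via a simple leave-one-edge-out perturbation; this perturbation is our own illustration of the intuition, not one of the four explainers, and the variable names are ours.

```python
import numpy as np

X = {"a": np.array([0.5, 0.5]), "b": np.array([1.0, 0.0]),
     "x": np.array([1.0, 0.0]), "y": np.array([0.0, 1.0]),
     "z": np.array([0.5, 0.5])}
edges = [("a", "x"), ("a", "y"), ("a", "z")]

def embed(node, edge_list):
    """Eq. (2): mean of neighbor features plus the node's own features."""
    nbrs = [v for u, v in edge_list if u == node] + [u for u, v in edge_list if v == node]
    agg = np.mean([X[j] for j in nbrs], axis=0) if nbrs else 0.0
    return agg + X[node]

def score(i, j, edge_list):
    """Decoder: cosine similarity between the two node embeddings."""
    ei, ej = embed(i, edge_list), embed(j, edge_list)
    return ei @ ej / (np.linalg.norm(ei) * np.linalg.norm(ej))

base = score("a", "b", edges)
for e in edges:
    remaining = [f for f in edges if f != e]
    # Positive importance = the score for (a, b) drops when the edge is removed.
    print(e, "importance:", round(base - score("a", "b", remaining), 3))
# ('a', 'x') importance:  0.193  -> pulls a toward b (positive)
# ('a', 'y') importance: -0.15   -> pushes a away from b (negative)
# ('a', 'z') importance:  0.0    -> neutral
```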
### Validating Explanations The validation of explanations is a generally overlooked topic in XAI, and LP tasks are no exception. Here, we suggest two different approaches, to deal respectively with cases where ground truth is available and cases where it is not. When ground truth is available, we use metrics from information retrieval. The ground truth is defined as a binary mask over \(E\), where \((i,j)\) is true if the edge is important to the model prediction (and false otherwise), and a binary mask over the features in \(X\) that follows the same logic. The explanation scores are binarized so that we can calculate a confusion matrix, either by fixing a standard threshold (e.g., 0.5 for the positive-valued masks of GNNExplainer and 0 for the other explainers considered here) or by selecting the optimal threshold based on the ROC curve of true positive rate versus false positive rate obtained by varying the threshold. True positives are considered when a high explanation score is assigned to an edge (or a feature) that is important according to the ground truth. False positives are considered when high explanation scores are assigned to non-important edges (or features). True negatives (and accordingly, false negatives) are considered when low explanation scores are assigned to unimportant (or important) edges or features. Finally, metrics such as precision, recall, specificity and sensitivity are calculated for each explainability technique. Here we focus on specificity and sensitivity, i.e., the true negative and true positive rates. When ground truth explanations are not available, we resort to a validation method borrowed from explainability for computer vision, first proposed by Petsiuk et al. [26], which we adapt for graph explanations. To the best of our knowledge this is the first time this validation method is used in this context. This method consists in progressively removing/inserting features and/or edges based on their importance with respect to the attribution method considered. The feature and edge attributions are sorted by decreasing score, and in the _deletion_ case they are gradually removed. In the _insertion_ case, they are gradually inserted in decreasing order of score, starting with no features/edges. Intuitively, if the explainer's output is correct, removing or adding the most important features will cause the greatest change in the model output. The area under the curve of the fraction of features inserted/removed versus the output of the model provides a quantitative evaluation of the explanation. To quantitatively compare different attribution methods, we define the following _area score_: referring to Figure 2, for the insertion case, consider the area \(A_{+}\) comprised between the explainer curve \(\gamma_{e}\) and the random curve \(\gamma_{r}\) when \(\gamma_{e}>\gamma_{r}\), and the area above \(\gamma_{r}\), \(U\). The ratio \(\frac{A_{+}}{U}\in[0,1]\) describes the portion of the plot where the explainer performs better than the random baseline. Consider then the area \(A_{-}\) comprised between the explainer curve \(\gamma_{e}\) and the random curve \(\gamma_{r}\) when \(\gamma_{e}<\gamma_{r}\), and the area below \(\gamma_{r}\), \(L\). The ratio \(\frac{A_{-}}{L}\in[0,1]\) corresponds to the portion of the plot where the explainer performs worse than the random baseline. We define the final score as \[s_{ins}\equiv\frac{A_{+}}{U}-\frac{A_{-}}{L}\in[-1,1]. \tag{3}\]
Similarly, for the deletion case the score is given by \[s_{del}\equiv\frac{A_{-}}{L}-\frac{A_{+}}{U}\in[-1,1]. \tag{4}\] Figure 1: Toy graph (left) and explanations (mask and attribution) for the link \((a,b)\) in the toy graph for GNNExplainer (center) and Integrated Gradients (right) attribution methods. The edge color in the right panel indicates positive (orange) and negative (blue) importance. The explanations produced by Deconvolution and LRP are similar to Integrated Gradients. The area score is a summary metric for the insertion and deletion procedures, and reflects the ability of the explainer to assign higher scores to the most influential features/edges for the considered prediction. Ideally, a perfect explainer should give high scores to very few edges and/or features that carry almost all the information necessary for the prediction. In this case, inserting these features would be sufficient to recover the output of the model when all the features/edges are present, and deleting them would cause a great drop in the output of the model. In this case the area score would be equal to or close to 1. In the case of random explanation scores, removing/inserting the features/edges with the highest score would not have, on average, a strong impact on the output of the model. In this case the area score would be 0. A negative value of the area score indicates a performance worse than the random baseline. Note that the absolute values of the area score for the insertion and deletion procedures are not directly comparable, since the normalization is different. This score is particularly useful for comparing the performance of different explainers with respect to the random baseline under the same procedure. The deletion curve is closely related to the fidelity and sparsity metrics [47], but the area score has the advantage of providing a single metric that coherently summarizes the two for easier readability. The insertion curve complements the deletion curve: instead of considering the distance between the original model output and the output obtained by iteratively removing the most important features by explanation score, it considers the distance between the original model output and the output obtained by starting with all null features and iteratively adding the most important features by explanation score. ## 4 Experiments In this section we report the results of evaluating LP explanations in two distinct scenarios - one with ground-truth explanations, and another without. In the first scenario, we use synthetic data, where graph datasets are generated along with their respective ground truth explanations for the created edges. This approach allows us to assess the explanations in a controlled setting, where we know the true explanations. In the second scenario, we turn to empirical data from three different datasets. Here, without the availability of ground-truth explanations, we assess the quality of explanations produced by the explanation methods through the area score defined in Section 3.2. This provides a means to measure the performance of explanation methods in real-world, less controlled conditions. Figure 2: Illustration of the area score for the feature insertion (left) and feature deletion (right) procedures. \(\gamma_{e}\) is the insertion (deletion) curve when features are sorted according to their explanation scores, while \(\gamma_{r}\) is the random insertion (deletion) curve. The area score for the insertion procedure is given by \(\frac{A_{+}}{U}-\frac{A_{-}}{L}\). The area score for the deletion procedure is \(\frac{A_{-}}{L}-\frac{A_{+}}{U}\). 
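The following is a minimal Python sketch of the deletion procedure and the area score of Eqs. (3)-(4); it assumes, as in the plots, that the curves are model probabilities in \([0,1]\), so that \(U\) and \(L\) are the areas between the random curve and the top and bottom of the plot. The toy model and the function names are ours.

```python
import numpy as np

def deletion_curve(model, x, scores):
    """Model output as features are zeroed out in decreasing attribution order."""
    order = np.argsort(-scores)
    xs, outputs = x.astype(float).copy(), [model(x)]
    for idx in order:
        xs[idx] = 0.0
        outputs.append(model(xs))
    return np.asarray(outputs)

def area_score(gamma_e, gamma_r, mode="deletion"):
    """Eqs. (3)-(4), assuming curves live in [0, 1]."""
    t = np.linspace(0.0, 1.0, len(gamma_e))
    diff = gamma_e - gamma_r
    a_plus = np.trapz(np.clip(diff, 0.0, None), t)    # where explainer > random
    a_minus = np.trapz(np.clip(-diff, 0.0, None), t)  # where explainer < random
    U = np.trapz(1.0 - gamma_r, t)                    # area above the random curve
    L = np.trapz(gamma_r, t)                          # area below the random curve
    s = a_plus / U - a_minus / L
    return -s if mode == "deletion" else s

# Toy usage: a "model" whose output is the (clipped) sum of the first two features.
model = lambda v: float(np.clip(v[0] + v[1], 0.0, 1.0))
x = np.array([0.4, 0.5, 0.0, 0.0])
good = deletion_curve(model, x, scores=np.array([1.0, 0.9, 0.1, 0.0]))
rand = np.mean([deletion_curve(model, x, np.random.rand(4)) for _ in range(200)], axis=0)
print(area_score(good, rand, mode="deletion"))  # positive: better than random
```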
Our experiments consist of four steps: (i) dataset preparation, (ii) model training, (iii) attribution, and (iv) attribution evaluation. Edges are split into training and test sets, with the same proportion of positive and negative edges, and attributions are performed on the test set. To ensure reproducibility and fair comparison, we test all explainers with the same trained model and train-test sets for each dataset. Multiple realizations of the attribution process with different random seeds account for the stochastic nature of ML training. For each dataset, we consider 2 encoders (VGAE, GIN), 2 decoders (Inner product, Cosine distance), and 4 explainers (GNNExplainer, Integrated Gradients, Deconvolution, and LRP). In Fig. 3 we show an example of the explanations given by each of the explainers considered for a GIN network predicting a missing edge on the Watts-Strogatz dataset (see Section 4.1). ### Synthetic Data We consider two generative models, namely the Stochastic Block Model (SBM) [18] and the Watts-Strogatz model (WS) [38], as examples of graphs where we can reconstruct the ground truth attributions for the link prediction task. In these experiments, whether an edge is present or not is clearly defined by the generative model. The small proportion of random edges introduced by the two stochastic generative models is not used to evaluate the explainers. Figure 3: Explanations for GIN trained on Watts-Strogatz graph data for the presence of an edge between the two nodes in red. The two nodes should be connected based on the triangle closure (green nodes). IG and Deconvolution give good explanations, while GNNExplainer considers most of the computational graph important, and LRP fails to identify the important edges. We assume that a model trained on a sufficient number of data points is able to reflect the logic of the generative model; therefore, an explainer should reflect this aspect in its attributions. For the SBM, a link should be present if two nodes belong to the same block, while for WS a link should be predicted if two nodes belong to a triangle closure. The node features in both cases are simply the one-hot encodings of the node ids, i.e. \(X\) corresponds to the identity matrix. This is a common choice for node features in the absence of meaningful ones [46]. For both models, the experiments are designed as follows: we generate a graph \(G=(V,E)\) of given size \(|V|\), and we train a GML model for the LP task on a training fold of \(G\). Then, the explainer is asked to explain each edge in the test set (except for the random edges). We compare the attributions to the ground truth, computing a confusion matrix of the results. We get a score for each predicted edge, obtaining the error distribution for the explainer. For the sake of readability, we summarize the results with two metrics, namely specificity and sensitivity. Specificity measures the proportion of edges that receive low importance from the explainer and are in fact not important for the considered prediction, over the total number of unimportant edges; similarly, sensitivity is the proportion of truly important edges that receive high importance from the explainer. Sensitivity and specificity together completely describe the quality of the attribution, but should not be considered separately. 
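A short sketch of the ground-truth evaluation just described: binarize the explanation scores at a threshold and compare them against a boolean ground-truth mask over edges (or features). The example scores below are made up for illustration.

```python
import numpy as np

def sensitivity_specificity(scores, ground_truth, threshold=0.0):
    """Binarize attribution scores and compare against a boolean ground-truth
    mask. Returns (sensitivity, specificity) = (TPR, TNR)."""
    pred = np.asarray(scores) > threshold
    gt = np.asarray(ground_truth, dtype=bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return tp / (tp + fn), tn / (tn + fp)

# Example: 6 candidate edges; the first two are truly important.
gt = [True, True, False, False, False, False]
signed_scores = [0.8, -0.1, -0.3, 0.05, -0.7, -0.2]  # signed scores, threshold 0
sens, spec = sensitivity_specificity(signed_scores, gt, threshold=0.0)
print(sens, spec)  # 0.5 0.75: one important edge missed, one spurious edge flagged
```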
Figure 4 shows the sensitivity and specificity distributions for the four explainers tested on a GIN model trained on SBM (top) and WS (bottom) graphs. In the SBM case (top), GNNExplainer demonstrates better specificity than the other explainers but suffers from poor sensitivity, due to the numerous truly important edges within an SBM block: since it tends to produce sparse masks, it often misses many true positives. This issue is particularly evident in SBM, where explanations involve numerous nodes, while in the WS graph GNNExplainer's performance is in line with the other explainers due to the sparse explanation. The similarity measure used in the decoder significantly impacts the explanation quality. Common measures like cosine similarity pose challenges for current explainability techniques. Issues arise when nodes become more similar as information is masked, leading to degenerate solutions like empty subgraphs and the masking of all features. Consequently, for explainers that search for a subgraph that maximizes the model output, such as GNNExplainer, no edges or features are deemed important when using cosine similarity between node embeddings. Figure 4: Sensitivity and specificity distributions on the Stochastic Block Model (top) and Watts-Strogatz (bottom) graphs for the four considered attribution methods: GNNExplainer (GE), Integrated Gradients (IG), Deconvolution (DC) and Layer-wise Relevance Propagation (LRP). To highlight the impact of the decoder in producing explanations, we show in Figure 5 the sensitivity and specificity distributions for the IG explainer (the results are similar for all explainers) when applied to two different GNN models that differ only in the decoder: the first uses an inner product of the node embeddings followed by a sigmoid, and the second uses a cosine similarity decoder. The explanation quality drops drastically if the model uses the cosine distance. This metric, due to normalization, is prone to producing explanation scores that are close to zero. ### Empirical Data In this section we focus on the validation of explainability methods when ground truth explanations are not available. In order to do so, we consider three empirical datasets: Cora and PubMed [31], plus a drug-drug interaction (DDI) network obtained from DrugBank [39]. The graph \(G\) and the node features \(X\) are constructed according to Yang et al. [44]: the bag-of-words representation is converted to node feature vectors and the graph is based on the citation links. The Cora dataset has 2,708 scientific publications classified into seven classes, connected through 5,429 links. The PubMed dataset has 19,717 publications classified into three classes, connected through 44,338 links. Although originally introduced for the node classification task, these two datasets are common benchmarks for the evaluation of current state-of-the-art GML models for the LP task, allowing a precise comparison of the performance of the models that we are explaining. The DDI dataset has 1,514 nodes representing drugs approved by the U.S. Food and Drug Administration, and 48,514 edges representing interactions between drugs. The dataset does not provide node features; as features we use node embedding vectors of fixed dimension 128 computed using Node2Vec [15]. For this dataset, link prediction is a critical task, aiming to anticipate potential drug-drug interactions that have yet to be observed. 
Predicting these interactions can mitigate their adverse effects and health risks, thereby promoting patient safety through preventive healthcare measures. We train two state-of-the-art types of GNN encoders that are suitable for the link prediction task, namely the VGAE [21] and GIN [41]. For both encoder architectures, we use the inner product of node embeddings followed by a sigmoid as the decoder. Training the GNNs on CORA, we reach an AUC test score of 0.952 (\(accuracy=0.744\)) for VGAE and an AUC test score of 0.904 (\(accuracy=0.781\)) for GIN. Training on PubMed, we reach an AUC test score of 0.923 (\(accuracy=0.724\)) for VGAE and an AUC test score of 0.893 (\(accuracy=0.742\)) for GIN. Lastly, on DDI we reach an AUC test score of 0.881 (\(accuracy=0.667\)) for VGAE and an AUC test score of 0.920 (\(accuracy=0.751\)) for GIN. Once the model is trained, we consider the edges in the test set and look at the explanation scores for node features and edges resulting from the attribution methods. These scores define insertion and deletion curves as described in Section 3.2. In Figure 6 we show an example of insertion and deletion curves for node feature and edge attributions obtained for a single edge predicted by a GIN model trained on CORA. The x axis refers to the ratio of edges/features that have been inserted/removed by attribution importance, and the y axis shows the variation in the model output, for the selected class, in the presence/absence of these features or edges. Each curve represents a single explainer, plus a curve (in purple) that represents the random insertion/deletion baseline. The random baseline is computed by adding/removing features or edges at random, without taking into consideration any attribution score. Many realizations of the random curve are then averaged in order to obtain a robust baseline. Figure 5: Difference in the performance (specificity and sensitivity) of the IG attribution method for two GCN models with the same encoder but different decoders, one based on the scalar product and the other based on cosine similarity. We then compute the area score defined in Section 3.2 for all the considered attribution methods and all the edges in the test set. In Figure 7 we show the distribution of scores obtained from the CORA dataset with a GIN model. In Table 1 we report the results for all tested explainability methods for the three datasets using the GIN architecture as the encoder. The case of edge deletion/insertion is particularly interesting when comparing the two different GNN architectures. Even if they perform comparably on the task of link prediction, the area score for the attribution on the VGAE model drops drastically for all attribution methods, suggesting that most of the signal of the data is taken from the node features alone, while the GIN model shows a very different scenario, where the edges, and thus the network structure, are also important for the model. In Figure 8 we show the distribution of gain in the area score when inserting/deleting edges ordered by the IG mask versus random deletion, both for the VGAE and GIN models. We can see that while the VGAE model has almost no gain with respect to the random baseline, the GIN model is consistently better. 
\begin{table} \begin{tabular}{|c|c||c|c||c|c||c|c|} \hline & & \multicolumn{2}{c||}{Cora} & \multicolumn{2}{c||}{PubMed} & \multicolumn{2}{c||}{DDI} \\ & & edge & feature & edge & feature & edge & feature \\ \hline \multirow{4}{*}{Insertion scores} & GE & \(0.13\pm 0.22\) & \(0.86\pm 0.32\) & \(0.28\pm 0.30\) & \(0.68\pm 0.32\) & \(0.93\pm 0.38\) & \(0.71\pm 0.31\) \\ & IG & \(\mathbf{0.72\pm 0.27}\) & \(\mathbf{0.96\pm 0.22}\) & \(\mathbf{0.89\pm 0.25}\) & \(\mathbf{0.92\pm 0.24}\) & \(\mathbf{0.96\pm 0.29}\) & \(\mathbf{0.88\pm 0.24}\) \\ & DC & \(0.67\pm 0.31\) & \(0.70\pm 0.27\) & \(0.77\pm 0.29\) & \(0.65\pm 0.30\) & \(0.92\pm 0.27\) & \(-0.12\pm 0.10\) \\ & LRP & \(0.02\pm 0.32\) & \(0.01\pm 0.40\) & \(0.03\pm 0.40\) & \(0.13\pm 0.41\) & \(-0.1\pm 0.35\) & \(-0.04\pm 0.28\) \\ \hline \hline \multirow{4}{*}{Deletion scores} & GE & \(0.08\pm 0.11\) & \(0.11\pm 0.20\) & \(0.03\pm 0.10\) & \(0.05\pm 0.20\) & \(0.22\pm 0.12\) & \(0.10\pm 0.14\) \\ & IG & \(\mathbf{0.23\pm 0.11}\) & \(\mathbf{0.31\pm 0.18}\) & \(\mathbf{0.26\pm 0.15}\) & \(\mathbf{0.26\pm 0.22}\) & \(\mathbf{0.43\pm 0.15}\) & \(\mathbf{0.19\pm 0.21}\) \\ \cline{1-1} & DC & \(0.23\pm 0.11\) & \(0.05\pm 0.18\) & \(0.24\pm 0.15\) & \(0.12\pm 0.11\) & \(0.38\pm 0.13\) & \(0.01\pm 0.15\) \\ \cline{1-1} & LRP & \(0.01\pm 0.25\) & \(0.01\pm 0.44\) & \(-0.11\pm 0.38\) & \(-0.36\pm 0.39\) & \(0.23\pm 0.31\) & \(0.03\pm 0.33\) \\ \hline \end{tabular} \end{table} Table 1: Area scores (median \(\pm\) std) for insertion (top) and deletion (bottom) for the selected explainers: GNNExplainer (GE), Integrated Gradients (IG), Deconvolution (DC) and Layer-wise Relevance Propagation (LRP). Area scores are calculated by taking into consideration the explanation scores produced by the explainers for LP models with the GIN architecture as the encoder, trained with the three datasets: Cora, PubMed and DDI. Figure 7: Score distribution for feature and edge insertion and deletion. For each edge in the test set, we produce the respective insertion and deletion curves for features and edges, and calculate the area score according to the procedure illustrated in Figure 2. The violin plots show the distribution of scores for each explainability method: GNNExplainer (GE), Integrated Gradients (IG), Deconvolution (DC) and Layer-wise Relevance Propagation (LRP). ## 5 Discussion of Findings In the previous section we devised different approaches for a quantitative comparison of explanation methods applied to the link prediction task.1 Synthetic data offers the advantage of having a ground truth available and complete control over its construction, but methods for a quantitative evaluation of real-world data, where no information is available _a priori_, are also necessary. For the latter we introduced the _area score_, a single-valued metric based on the insertion and deletion curves introduced in [26] that quantifies the gain in performance with respect to the random baseline when node features and/or edges are inserted/removed according to the attribution scores. Footnote 1: The complete source code is available at [https://github.com/cborile/eval_lp_xai](https://github.com/cborile/eval_lp_xai) IG performs better in all cases, and this is coherent with previous results on GCNs for node and graph classification tasks [28, 13]. Deconvolution is a good alternative. We note that GNNExplainer, despite its acceptable performance, needs to be trained, and its output is strongly dependent on the choice of its hyperparameters. 
This makes it difficult to use GNNExplainer as a plug-and-play method for the attribution of GNN models. It has, however, the advantage of being model-agnostic, contrary to the other methods. Applied to the DDI dataset, the utility of the area score and the insertion and deletion curves is particularly clear, since the drug-drug interaction graph is much denser than the other examples. When looking for the reason behind a link prediction output obtained through a black-box GML model, there are normally too many neighboring edges contributing to the model output, even for 1- or 2-layer Graph Neural Networks, i.e., GNNs that consider only 1- or 2-hop neighborhoods in their computation graphs. A good area score on the edges means that most of the neighboring edges can be discarded when explaining the model output, making it easier for experts to interpret which drugs explain the interaction between a new candidate drug and existing ones. Finally, we showed that technical details of the GNN black-box models can result in very different attributions for the same learned task, and even make some explanation methods completely inapplicable. Some of these details, like the choice of the distance function in the decoder stage, are inherent to the link prediction task and must be taken carefully into account when explainability is important. Also, different graph neural network architectures can result in drastic changes in the explanations, as some architectures weight the network structure more heavily, while others extract more signal from the node features. Figure 8: Area score difference between VGAE and GIN models for edge insertion (left) and deletion (right) on the CORA dataset. The VGAE model has almost no gain with respect to the random baseline, while the GIN model is consistently better. This is suggestive of a different exploitation of topological features, that is, edges, in the graph by different encoder architectures when learning the link prediction task. ## 6 Conclusions and Future Work We introduced quantitative metrics for evaluating GML model explanations in LP tasks, using a synthetic dataset testbed with known ground truth and adapted insertion/deletion curves for empirical datasets. This provided metrics for validating attribution methods when ground truth is unavailable. We tested representative XAI methods on GML models with different architectures and datasets, and our metrics enabled comparison of LP explanations with each other and with random baselines. The thorough comparison of explanations we performed revealed hidden pitfalls and unexpected behaviors. For example, we identified cases where two models with similar performance produce drastically different explanations, and how seemingly minor choices, like embedding similarity in decoders, significantly impact explanations. The integration of feature and edge explanation scores, often overlooked in GML XAI, is a promising area for future research. We strongly advocate for comparative validation of XAI techniques, enabling informed selection of explainers, and we believe that the development of validation metrics and benchmarks is the first step towards establishing quantitative, comparative validation protocols for XAI techniques. This, in turn, would enable an informed choice of both GML models and explainers, and a critical acceptance of the produced explanations. 
Besides its technical challenges, explainable LP is a task that might positively impact several real-world scenarios, spanning social networks, biological networks, and financial transaction networks. Each of these application domains displays unique characteristics and behaviors, at both the pragmatic and semantic levels, and might therefore require the careful selection of an explainer in order to trust the final explanation. A pipeline that seamlessly integrates a GML model with an explainer, combining results on both model performance and explanation accuracy with the area score, might help mitigate the well-known black-box problems: difficulty of adoption by domain experts and of debugging by developers, legal risk of non-compliance with regulation, and moral risk of inadvertently deploying biased models.
2310.00926
Integration of Graph Neural Network and Neural-ODEs for Tumor Dynamic Prediction
In anti-cancer drug development, a major scientific challenge is disentangling the complex relationships between high-dimensional genomics data from patient tumor samples, the corresponding tumor's organ of origin, the drug targets associated with given treatments and the resulting treatment response. Furthermore, to realize the aspirations of precision medicine in identifying and adjusting treatments for patients depending on the therapeutic response, there is a need for building tumor dynamic models that can integrate both longitudinal tumor size as well as multimodal, high-content data. In this work, we take a step towards enhancing personalized tumor dynamic predictions by proposing a heterogeneous graph encoder that utilizes a bipartite Graph Convolutional Neural network (GCN) combined with Neural Ordinary Differential Equations (Neural-ODEs). We applied the methodology to a large collection of patient-derived xenograft (PDX) data, spanning a wide variety of treatments (as well as their combinations) on tumors that originated from a number of different organs. We first show that the methodology is able to discover a tumor dynamic model that significantly improves upon an empirical model which is in current use. Additionally, we show that the graph encoder is able to effectively utilize multimodal data to enhance tumor predictions. Our findings indicate that the methodology holds significant promise and offers potential applications in pre-clinical settings.
Omid Bazgir, Zichen Wang, Ji Won Park, Marc Hafner, James Lu
2023-10-02T06:39:08Z
http://arxiv.org/abs/2310.00926v2
# Integration of Graph Neural Network and Neural-ODEs for Tumor Dynamic Prediction ###### Abstract In anti-cancer drug development, a major scientific challenge is disentangling the complex relationships between high-dimensional genomics data from patient tumor samples, the corresponding tumor's organ of origin, the drug targets associated with given treatments and the resulting treatment response. Furthermore, to realize the aspirations of precision medicine in identifying and adjusting treatments for patients depending on the therapeutic response, there is a need for building tumor dynamic models that can integrate both longitudinal tumor size as well as multimodal, high-content data. In this work, we take a step towards enhancing personalized tumor dynamic predictions by proposing a heterogeneous graph encoder that utilizes a bipartite Graph Convolutional Neural network (GCN) combined with Neural Ordinary Differential Equations (Neural-ODEs). We applied the methodology to a large collection of patient-derived xenograft (PDX) data, spanning a wide variety of treatments (as well as their combinations) on tumors that originated from a number of different organs. We first show that the methodology is able to discover a tumor dynamic model that significantly improves upon an empirical model which is in current use. Additionally, we show that the graph encoder is able to effectively utilize multimodal data to enhance tumor predictions. Our findings indicate that the methodology holds significant promise and offers potential applications in pre-clinical settings. ## 1 Introduction In the development of novel anti-cancer therapies, patient-derived xenografts (PDX) have become an important platform for addressing key questions, such as evaluating the treatment response to therapeutic agents and the combinations thereof, identifying the relevant biomarkers of response and the mechanisms of resistance development Byrne et al. (2017). Furthermore, as PDX models are obtained by surgically removing patients' tumors and implanting them in mice, co-clinical avatar trials Byrne et al. (2017) can be performed, whereby treatment of the patients occurs simultaneously with the treatment of the corresponding (pre-clinical) PDX models generated from the same patients. Such avatar studies could enable real-time clinical decision making, identifying the best treatment(s) that the patients could respond to, and help deliver some of the promises of precision medicine. Given the myriad applications of PDX models, an important computational task is the prediction of their dynamic response from the baseline -omics data and/or early tumor size data. This is a particularly challenging task due to the need to meld high-dimensional -omics data measured at baseline (e.g., RNA-seq pre-treatment) with the low-dimensional but serially assessed tumor size measurements under treatment. While empirical Zwep et al. (2021) and spline-based Forrest et al. (2020) tumor dynamic models have been proposed, there has been little progress in melding such dynamic models with high-dimensional -omics data. In Zwep et al. (2021), a machine learning (ML) approach has been proposed that uses a model based on the least absolute shrinkage and selection operator (LASSO) to predict tumor dynamic parameters from copy-number variations (CNVs) of genes from a large PDX data set consisting of various treatments Gao et al. (2015). In Ma et al. (2021), a few-shot learning (multi-layer perceptron) approach, and in Peng et al. 
(2022), a heterogeneous GCN approach, have been proposed to learn drug responses from in-vitro (cell-line) data and predict in-vivo (PDX) outcomes within two response categories. While the predictions made by the models in Ma et al. (2021) and Peng et al. (2022) show promising correlations with true responses, they do not necessarily capture the tumor dynamics, which are crucial for clinical decision-making. While these approaches show promise, improving model predictivity and incorporating omics data remain important topics for further research. Over the past few years, Neural-Ordinary Differential Equations (NODEs) Chen et al. (2018) have emerged as a promising deep learning (DL) methodology for making predictions from irregularly sampled temporal data. Recently, a methodology based on NODE has been developed for tumor dynamic modeling in the clinical trial setting, and the embeddings generated from patients' tumor data have been demonstrated to be effective for predicting their Overall Survival (OS) Laurie and Lu (2023). While the Tumor Dynamic NODE (TDNODE) has set the mathematical foundations for modeling longitudinal tumor data Laurie and Lu (2023), a methodology for the incorporation of high-dimensional, multimodal data into such models has yet to be developed. In this work, we propose a novel way to combine the previously developed tumor volume encoder with a heterogeneous graph encoder (see Fig. 1). The latter takes as inputs multimodal data consisting of drugs, disease and RNA-seq, by incorporating graphs that encapsulate drug-gene associations, disease-gene associations and gene-gene interactions, respectively. In application to the PDX data of Gao et al. (2015), we show that by leveraging the multimodal data, including RNA-seq, in conjunction with early tumor response data, the proposed methodology can significantly improve the predictive performance for future tumor response. Finally, we summarize the findings and discuss the potential future applications of the proposed approach in enabling precision medicine in oncology. Figure 1: **Model architecture overview.****A)** Integration of GCNs and Neural-ODEs for tumor dynamic prediction. **B) Heterogeneous Graph Encoder:** two bipartite graph attention convolution NNs extract disease-gene and drug-gene associations, and a graph convolution NN extracts gene-gene interactions; these are integrated into an embedding space at the baseline level. ## 2 Method Within our multi-modal framework, we constructed a multi-relational network using three large datasets covering interactions between drugs, genes, and diseases. We used RNA-seq data as node features within tissue-specific knowledge graphs and further integrated it with the drug targets of treatments as a heterogeneous graph Zhang et al. (2019). This integration allowed us to learn an embedding that captures the complex relationships between genes, tumors, and treatments. The following 3 graphs were utilized to represent both biological and pharmacological knowledge: * **gene-gene graph:** tissue-specific functional gene networks obtained from TissueNexus Lin et al. (2022), which provides gene-gene associations based on gene expression across 49 human tissues. * **drug-gene graph:** obtained from DGIdb Cotto et al. (2018), which consolidates information on drug targets and interacting genes from 30 disparate sources through expert curation and text mining. * **disease-gene graph:** obtained from DisGeNET Pinero et al.
(2016), one of the largest collections of genes and variants associated with human diseases. For each PDX model, the pre-treatment embeddings using the above multi-modal graphs are used in conjunction with the early, observed tumor measurements to predict the future tumor dynamic profiles. The following subsections cover these two respective aspects. ### Heterogeneous Graph Encoder We formulate the PDX representation learning task as one of graph embedding, by fusing information from a heterogeneous network that incorporates drug, disease, and gene relationships. We apply a multi-layer GCN Welling and Kipf (2016) in the RNA domain to model the gene interactions (_gene-gene graph_). Additionally, we use bipartite graph attention convolutions Wang et al. (2020); Nassar (2018) for message passing from drugs to target genes (_drug-gene graph_), as well as for projecting gene embeddings to the disease domain (_disease-gene graph_). The mathematical formulation of our proposed framework is described as follows. **Bipartite Graphs Attention Convolution.** A conventional GCN assumes that all nodes belong to the same category. However, in our scenario there are heterogeneous attributes across various node types, such as genes, diseases, and drug targets. This limitation becomes evident when node attributes span different domains. Consequently, we adopt a bipartite graph as a natural representation for modeling inter-domain interactions among distinct node types. We adapt the GCN to operate within a bipartite graph, where node feature aggregation occurs exclusively over inter-domain edges. Specifically, let us denote graphs as \(\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)\), where \(\mathcal{V}\) represents the set of vertices, given by \(\left\{v_{i}\right\}_{i=1}^{N}\), and \(\mathcal{E}\) is the set of edges. We consider a bipartite graph \(\mathcal{BG}(\mathcal{U},\mathcal{V},\mathcal{E})\) defined as a graph \(\mathcal{G}(\mathcal{U}\cup\mathcal{V},\mathcal{E})\), where \(\mathcal{U}\) and \(\mathcal{V}\) represent two sets of vertices (nodes) corresponding to the two respective domains. Here, \(u_{i}\) and \(v_{j}\) denote the \(i\)-th and \(j\)-th node in \(\mathcal{U}\) and \(\mathcal{V}\), respectively, where \(i=1,2,\ldots,M\) and \(j=1,2,\ldots,N\). All edges within the bipartite graph exclusively connect nodes from \(\mathcal{U}\) and \(\mathcal{V}\) (i.e., \(\mathcal{E}=\{(u,v)|u\in\mathcal{U},v\in\mathcal{V}\}\)). The features of the two sets of nodes are denoted by \(X_{u}\) and \(X_{v}\), where \(X_{u}\in\mathbb{R}^{M\times P}\) is a feature matrix with \(\vec{x}_{u_{i}}\in\mathbb{R}^{P}\) representing the feature vector of node \(u_{i}\), and \(X_{v}\in\mathbb{R}^{N\times Q}\) is defined similarly. For the message passing \(\mathrm{MP}_{v\to u}\) from domain \(\mathcal{V}\) to \(\mathcal{U}\), we define a general bipartite graph convolution (_bg_) as: \[bg_{\mathcal{E}}(u_{i})=\rho\left(\mathrm{agg}\left(\{W_{u_{i},v_{j}}\vec{x}_{ v_{j}}|v_{j}\in\mathcal{N}_{u_{i}}^{\mathcal{E}}\}\right)\right), \tag{1}\] where \(\mathcal{N}_{u_{i}}^{\mathcal{E}}\) represents the neighborhood of node \(u_{i}\) connected by \(\mathcal{E}\) in \(\mathcal{BG}(\mathcal{U},\mathcal{V},\mathcal{E})\) (i.e., \(\mathcal{N}_{u_{i}}^{\mathcal{E}}\subset\mathcal{V}\)).
\(W_{u_{i},v_{j}}\in\mathbb{R}^{M\times N}\) is a feature weighting kernel transforming \(N\)-dimensional features to \(M\)-dimensional features, \(\mathrm{agg}\) is a permutation-invariant aggregation operation, and the \(\rho\) operator can be a non-linear activation function. In our work, we used element-wise mean-pooling and ReLU Nair and Hinton (2010) for \(\mathrm{agg}\) and \(\rho\), respectively. Our bipartite graph convolution layers utilize the graph attention network Velickovic et al. (2017) as the backbone on the node features, resulting in the bipartite graph attention convolution layer (\(bga\)). Since the attention mechanism considers features of two sets of nodes, we specifically define a learnable matrix \(W^{u}\in\mathbb{R}^{P\times S}\) (resp. \(W^{v}\in\mathbb{R}^{Q\times S}\)) for \(X_{u}\) (resp. \(X_{v}\)). The \(bga\) can be formulated as: \[bga_{\mathcal{E}}(u_{i})=\mathrm{ReLU}\left(\sum_{v_{j}\in\mathcal{N}^{ \mathcal{E}}_{u_{i}}}\alpha_{u_{i},v_{j}}W^{v}\vec{x}_{v_{j}}\right), \tag{2}\] The attention mechanism is a single-layer feedforward neural network, parameterized by a weight vector \(\vec{a}\) and applying the LeakyReLU non-linearity. The attention weight coefficients can be expressed as: \[\alpha_{u_{i},v_{j}}=\frac{\exp\left(\rho(\vec{a}^{T}[W^{u}\vec{x}_{u_{i}}\|W^ {v}\vec{x}_{v_{j}}])\right)}{\sum_{v_{k}\in\mathcal{N}^{\mathcal{E}}_{u_{i}}} \exp\left(\rho(\vec{a}^{T}[W^{u}\vec{x}_{u_{i}}\|W^{v}\vec{x}_{v_{k}}])\right)}, \tag{3}\] where \(T\) and \(\|\) represent the matrix transposition and concatenation operations, respectively (a minimal sketch of this layer is given below). **Information Fusion.** We represent the heterogeneous network as an undirected graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\) with the following three sets of nodes: drugs (\(\mathcal{V}^{A}\)), diseases (\(\mathcal{V}^{B}\)), and genes (\(\mathcal{V}^{C}\)). The initial features of these three sets of nodes are denoted as \(X_{\mathcal{V}^{A}}\), \(X_{\mathcal{V}^{B}}\), and \(X_{\mathcal{V}^{C}}\), respectively. The edges consist of two inter-domain sets representing drug-gene associations (\(\mathcal{E}^{AC}\)) and disease-gene associations (\(\mathcal{E}^{BC}\)), as well as an intra-domain set representing gene network connections (\(\mathcal{E}^{CC}\)). First, we apply multi-layer GCNs to the RNA domain to model the gene interactions. Importantly, the module parameters and gene node features are initialized by a pre-trained variational graph auto-encoder (VGAE). The VGAE was trained using the RNA-seq data from all the PDX models in this study, whereby each PDX model was represented as a graph with genes as the nodes. The tumor-specific graphs were obtained from TissueNexus Lin et al. (2022). In the second step, we apply a single bipartite graph attention convolution layer to propagate the message from drugs to target genes. Conceptually, this step can be viewed as projecting information from the macro level (e.g., the domain of drugs) to the micro level (e.g., the domain of genes). To formulate the message-passing step, we represent the hidden embedding of node \(v^{c}_{i}\) as \(h^{(k)}_{v^{c}_{i}}\), where \(k\) is the computational step and, when \(k=0\), \(h^{(0)}_{v^{c}_{i}}=x_{v^{c}_{i}}\).
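For concreteness, the following is a minimal sketch of the bipartite graph attention convolution in equations (2)-(3), assuming plain PyTorch; the class name, tensor layouts and edge encoding are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the bga layer (eqs. 2-3); illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteGraphAttention(nn.Module):
    """Message passing from domain V (feature dim Q) to domain U (feature dim P)."""
    def __init__(self, p_dim, q_dim, s_dim):
        super().__init__()
        self.w_u = nn.Linear(p_dim, s_dim, bias=False)  # W^u in eq. (3)
        self.w_v = nn.Linear(q_dim, s_dim, bias=False)  # W^v in eqs. (2)-(3)
        self.a = nn.Parameter(torch.randn(2 * s_dim))   # attention vector a

    def forward(self, x_u, x_v, edges):
        # edges: (2, E) long tensor; edges[0] indexes U, edges[1] indexes V.
        hu, hv = self.w_u(x_u), self.w_v(x_v)
        src_u, src_v = edges[0], edges[1]
        # Unnormalised attention logits per inter-domain edge, eq. (3).
        logits = F.leaky_relu(torch.cat([hu[src_u], hv[src_v]], dim=-1) @ self.a)
        logits = logits - logits.max()                  # numerical stability
        num = torch.exp(logits)
        # Softmax denominator accumulated per neighbourhood of each u_i.
        denom = torch.zeros(x_u.size(0)).index_add_(0, src_u, num)
        alpha = num / denom[src_u].clamp_min(1e-12)
        # Weighted aggregation of transformed neighbour features, eq. (2).
        out = torch.zeros(x_u.size(0), hv.size(1)).index_add_(
            0, src_u, alpha.unsqueeze(-1) * hv[src_v])
        return F.relu(out)
```

Here `edges` lists each inter-domain edge once, and the per-neighbourhood softmax of equation (3) is computed with scatter-style `index_add_` accumulation.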
The \(h^{(k)}_{v^{c}_{i}}\) is computed as follows: \[\mathrm{MP}^{(k)}_{\mathcal{V}^{A}\rightarrow\mathcal{V}^{C}}:h^{(k)}_{v^{c}_ {i}}=bga_{\mathcal{E}^{AC}}(v^{c}_{i})+h^{(k-1)}_{v^{c}_{i}}, \tag{4}\] Similarly, in the last step, we utilize the non-linear graph information captured by the gene nodes to update the hidden embeddings of the disease nodes. In particular, we apply another bipartite graph attention convolution layer to project gene embeddings to the disease domain. Therefore, the third step can be viewed as an attentional pooling of the disease-gene subgraph. The updated feature representations of the disease nodes are computed as follows: \[\mathrm{MP}^{(k)}_{\mathcal{V}^{C}\rightarrow\mathcal{V}^{B}}:h^{(k)}_{v^{b}_ {i}}=bga_{\mathcal{E}^{CB}}(v^{b}_{i})+h^{(k-1)}_{v^{b}_{i}}. \tag{5}\] We concatenate the updated drug and disease embeddings into a unified representation for PDX experiments (tumor-treatment combinations), denoted \(\beta_{1}\), which can be used as feature inputs for downstream tasks. ### Tumor Volume Prediction We formulate a tumor volume dynamic model using a Neural-ODE that utilizes both the embedding generated from the heterogeneous graph encoder and the embedding generated by feeding early tumor volume data into an encoder. The model aims to utilize early tumor volume data together with the baseline embedding (encapsulating drug, disease and RNA-seq data) in order to make personalized predictions. **Tumor Volume Encoder.** In a similar manner to Laurie and Lu (2023), we implemented a tumor volume encoder to inform the Neural-ODE in order to make predictions individualized to a specific PDX model. This recurrent neural network (RNN) based encoder maps a short window of early observed tumor volumes (of arbitrary length) into an embedding that we denote as \(\beta_{2}\). **Neural-ODE.** In the clinical context, an approach to model tumor dynamic data using Neural-ODEs has been developed, demonstrating the ability of such a formalism to discover the right dynamical model from longitudinal patient tumor size data across treatment arms Laurie and Lu (2023). In this work, we generalized the methodology to meld longitudinal measurements with the multi-modal data into a single deep learning framework and demonstrated it in the setting of dynamic modeling of PDX data. We consider a dynamical system of the following form: \[\frac{dy(t)}{dt}=f_{\theta}(y(t),\beta),\quad t\in[0,T] \tag{6}\] where \(0\) and \(T\) denote the start time of PDX experimentation and the end of the prediction time, respectively, \(f_{\theta}\) is a neural network parameterized by a set of weights \(\theta\) to be learned across all PDX data, and \(\beta=[\beta_{1}\|\beta_{2}]\) is the PDX embedding obtained by concatenating the outputs of the heterogeneous graph and tumor volume encoders, respectively. Thus, a dynamical law represented by \(f_{\theta}\) is learned across all PDX data, with the concatenated embedding \(\beta\) serving to provide the initial condition for the Neural-ODE specific to the PDX model of interest. After simulating eqn. 6 to obtain the time evolution of the state \(y(t)\), we then reconstruct the tumor volume data using a two-layer MLP reducer. The Neural-ODE approach has the benefits of handling variable-length sequences, accommodating varying observation intervals, and incorporating positional encoding for capturing temporal context. To enhance model simplicity and generalization, we keep the ODE system dimension no larger than that of the existing tumor growth inhibition (TGI) model of Zwep et al.
(2021). The TGI model captured the longitudinal tumor volume measurements, per PDX, with two empirical ODEs through the estimation of three parameters: growth rate, treatment efficacy, and time-dependent resistance development (kr). **Loss Function.** As a measure of the overall fit of the model prediction to the observed tumor volume values, we incorporated the summation of the mean squared error (MSE) and the mean absolute percentage error (MAPE) De Myttenaere et al. (2016). The MSE is used to maintain the model accuracy, and the MAPE is used to balance the capture of tumor volume growth or shrinkage trends in the training set. ## 3 PDX dataset The primary dataset utilized in this study was obtained from a large-scale pre-clinical study conducted in PDX mice models, as detailed in Gao et al. (2015). This dataset encompassed more than 1000 PDX models, each characterized by their baseline mRNA expression levels prior to treatment. In total, the dataset covered 62 distinct treatments across six different diseases, with tumor volume measurements every 2-3 days. For our analysis, we included data from 191 unique tumors and 59 different treatments, resulting in a comprehensive dataset of 3470 PDX experiments (combinations of tumor and treatment) spanning 5 tumor types. This selection was based on the availability of RNA-seq data. ## 4 Results To assess the effectiveness of the heterogeneous graph encoder in describing changes in tumor volume, we trained the encoder using mRECIST Therasse et al. (2000); Gao et al. (2015) response labels. Specifically, we consolidated the response categories (CR, PR, and SD) into a single response category, with PD as the second category (Table A.1 in the Appendix). To predict these response categories, we used a binary classifier based on a three-layer MLP neural network that takes the heterogeneous graph encoder embedding as the input. The dataset was split by PDX models, and standard 5-fold cross-validation experiments were performed. We evaluated the prediction performance using three metrics: balanced accuracy, area under the receiver operating characteristic (AUROC), and F1-score. Our model's performance was compared against several competing approaches, including a non-graph-based deep learning method and traditional machine learning methods. To investigate the impact of the pre-training strategy, we also implemented a variant of our model with a randomly initialized gene graph. **GCNs outperform baseline methods in modeling RNA-seq for treatment response prediction.** As shown in Table 1, GCN-based methods demonstrated superior performance in the responder classification task across all evaluation metrics. These models outperformed non-graph-based approaches by more than 9% in F1 score and 5% in AUROC. Additionally, we observed that our model achieved a noticeable improvement of approximately 4% in F1 score and 3% in AUROC when utilizing gene graph pre-training. This result highlights the effectiveness of graph reconstruction pre-training in enhancing gene latent representations. ### Tumor Volume Trace Fitting We aimed to evaluate the capability of the Neural-ODE in capturing the longitudinal tumor volume data trend in comparison with the state-of-the-art TGI model for PDX proposed by Zwep et al. (2021). We utilized all available tumor volume data for each PDX experiment in this analysis. Our approach involved encoding the longitudinal tumor volume data into a latent space using an RNN-based encoder (a minimal sketch of this pipeline follows).
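The following is a minimal sketch of the encoder-ODE pipeline and the MSE+MAPE training objective described above, assuming PyTorch and the `torchdiffeq` package; all module names, layer sizes and the slice used to form the initial state are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

class VolumeEncoder(nn.Module):
    """Maps an early window of tumor volumes to an embedding beta_2."""
    def __init__(self, hidden=32, emb=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, emb)

    def forward(self, volumes):              # volumes: (batch, T_obs, 1)
        _, h = self.rnn(volumes)
        return self.proj(h[-1])              # beta_2: (batch, emb)

class ODEFunc(nn.Module):
    """f_theta(y, beta) of eq. (6): one dynamical law shared across PDXs."""
    def __init__(self, state_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + emb_dim, 64), nn.Tanh(),
                                 nn.Linear(64, state_dim))
        self.beta = None                     # set per PDX before integration

    def forward(self, t, y):
        return self.net(torch.cat([y, self.beta], dim=-1))

def predict_volumes(beta, times, func, reducer, state_dim=4):
    """Integrate eq. (6) from an embedding-derived initial condition, then
    map the latent trajectory to tumor volume with a small MLP reducer."""
    func.beta = beta
    y0 = beta[..., :state_dim]               # assumes embedding dim >= state_dim
    traj = odeint(func, y0, times)           # (T, batch, state_dim)
    return reducer(traj).squeeze(-1)         # (T, batch) predicted volumes

def mse_mape_loss(pred, obs, eps=1e-6):
    """Summation of MSE and MAPE, as in the Loss Function paragraph."""
    mse = ((pred - obs) ** 2).mean()
    mape = (torch.abs(pred - obs) / obs.abs().clamp_min(eps)).mean()
    return mse + mape
```

Here `reducer` could be, for example, `nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))`, mirroring the two-layer MLP reducer mentioned above.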
Subsequently, we used this latent space as part of the initial condition for solving an ODE system with the Neural-ODE model to reconstruct the dynamic tumor volume data. The population-level results are shown in Panel (A) of Figure 2. Notably, the Neural-ODE model outperformed the TGI model with an R2 of 0.96 compared to 0.71, and a Spearman correlation of 0.96 compared to 0.86. ### Tumor Volume Trace Prediction We evaluated the ability of our model (presented in Figure 1) to predict future tumor volume dynamics based on limited observed longitudinal tumor data. We selected observation windows of 7, 14, 21, and 28 days, to simulate real-world scenarios where early observations are used to forecast the (future unseen) tumor volume trajectory. We assessed the predictive performance of our model in two ways. Firstly, we employed R2 to quantify the accuracy of our model in predicting unseen tumor volumes. The results in Table 2 indicate the following: 1) the embedding learned from the heterogeneous graph encoder enhances the predictive performance of our proposed model, and 2) as the observation window size increases, our proposed model captures the unseen tumor dynamics more accurately. Additionally, as demonstrated in Panel (B) of Figure 2, the model effectively captures the tumor dynamic trend. However, due to the noise inherent in the tumor volume measurements and the clinical significance of mRECIST response category prediction, we also assessed our proposed model's predictive performance as a classifier. The mRECIST categories are derived from the predicted tumor volume time series by applying the response criteria. This evaluation measures the model's performance in correctly classifying the treatment responses based on the predicted tumor volume dynamics. Figure 3 summarizes the classification results, revealing an observable trend in which incorporating the heterogeneous graph encoder embedding improves the prediction of response categories across all observation windows. \begin{table} \begin{tabular}{l c c c} \hline Method & Balanced Accuracy & AUROC & F1 \\ \hline Pretrained GCN & **0.681** & **0.741** & **0.623** \\ GCN & 0.652 & 0.713 & 0.582 \\ MLP & 0.621 & 0.688 & 0.525 \\ SVM/RF & 0.618 & 0.676 & 0.532 \\ \hline \end{tabular} \end{table} Table 1: The summary of model performance on treatment response prediction using RNA-seq data. The mean over 5-fold cross-validation is reported. Figure 2: **Tumor dynamic prediction: A)** A comparison of tumor volume trace fitting between the tumor growth inhibition (TGI) model proposed by Zwep et al. (2021) and our proposed Neural-ODE system. **B)** Tumor volume trace prediction using the proposed model for 3 different PDXs, one per row of curves. The blue curves represent the ground truth, the green curves represent our proposed model's predictions, and the red curves represent the TGI model's predictions. Each column shows one observation window, and the highlighted blocks are the observation windows where our proposed model reaches the correct classification of the mRECIST response category (same as the ground truth). ## 5 Conclusion In summary, we proposed a novel approach for tumor dynamic prediction that integrates RNA-seq, treatment, disease and longitudinal tumor volume data in a Neural-ODE system in a pre-clinical, PDX setting.
We demonstrated that the use of the Neural-ODE vastly improved the ability of the model to capture PDX tumor data compared with a previously proposed TGI model, as well as the benefit of adding the graph encoder to enrich the longitudinal data. As an area for further work, disentangling how the model predictions arise from the multimodal data using explainability techniques and/or attention weights is an important topic to advance our scientific understanding of the complex interplay between gene expression profiles, tumor location and drug targets. This methodology holds significant promise and warrants further evaluations, including in the clinical setting. \begin{table} \begin{tabular}{c c c} \hline Observation window & w/o graph encoder & w graph encoder \\ \hline 7 days & 0.233 & **0.302** \\ 14 days & 0.456 & **0.479** \\ 21 days & 0.586 & **0.608** \\ 28 days & 0.652 & **0.659** \\ \hline \end{tabular} \end{table} Table 2: Predictive performance of our proposed model using different observation windows, quantified with R2. The mean over 5-fold cross-validation is reported. Figure 3: Predictive performance of our proposed model as a classifier for mRECIST categories, with and without the heterogeneous graph encoder and considering different lengths of observation windows.
2301.08391
Brain Model State Space Reconstruction Using an LSTM Neural Network
Objective Kalman filtering has previously been applied to track neural model states and parameters, particularly at the scale relevant to EEG. However, this approach lacks a reliable method to determine the initial filter conditions and assumes that the distribution of states remains Gaussian. This study presents an alternative, data-driven method to track the states and parameters of neural mass models (NMMs) from EEG recordings using deep learning techniques, specifically an LSTM neural network. Approach An LSTM filter was trained on simulated EEG data generated by a neural mass model using a wide range of parameters. With an appropriately customised loss function, the LSTM filter can learn the behaviour of NMMs. As a result, it can output the state vector and parameters of NMMs given observation data as the input. Main Results Test results using simulated data yielded correlations with R squared of around 0.99 and verified that the method is robust to noise and can be more accurate than a nonlinear Kalman filter when the initial conditions of the Kalman filter are not accurate. As an example of real-world application, the LSTM filter was also applied to real EEG data that included epileptic seizures, and revealed changes in connectivity strength parameters at the beginnings of seizures. Significance Tracking the state vector and parameters of mathematical brain models is of great importance in the area of brain modelling, monitoring, imaging and control. This approach has no need to specify the initial state vector and parameters, which is very difficult to do in practice because many of the variables being estimated cannot be measured directly in physiological experiments. This method may be applied using any neural mass model and, therefore, provides a general, novel, efficient approach to estimate brain model variables that are often difficult to measure.
Yueyang Liu, Artemio Soto-Breceda, Yun Zhao, Phillipa Karoly, Mark J. Cook, David B. Grayden, Daniel Schmidt, Levin Kuhlmann
2023-01-20T02:02:54Z
http://arxiv.org/abs/2301.08391v1
# Brain Model State Space Reconstruction Using an LSTM Neural Network ###### Abstract #### Objective Kalman filtering has previously been applied to track neural model states and parameters, particularly at the scale relevant to electroencephalography (EEG). However, this approach lacks a reliable method to determine the initial filter conditions and assumes that the distribution of states remains Gaussian. This study presents an alternative, data-driven method to track the states and parameters of neural mass models (NMMs) from EEG recordings using deep learning techniques, specifically a Long Short-Term Memory (LSTM) neural network. #### Approach An LSTM filter was trained on simulated EEG data generated by a neural mass model using a wide range of parameters. With an appropriately customised loss function, the LSTM filter can learn the behaviour of NMMs. As a result, it can output the state vector and parameters of NMMs given observation data as the input. #### Main Results Test results using simulated data yielded correlations with R squared of around 0.99 and verified that the method is robust to noise and can be more accurate than a nonlinear Kalman filter when the initial conditions of the Kalman filter are not accurate. As an example of real-world application, the LSTM filter was also applied to real EEG data that included epileptic seizures, and revealed changes in connectivity strength parameters at the beginnings of seizures. #### Significance Tracking the state vector and parameters of mathematical brain models is of great importance in the area of brain modelling, monitoring, imaging and control. This approach has no need to specify the initial state vector and parameters, which is very difficult to do in practice because many of the variables being estimated cannot be measured directly in physiological experiments. This method may be applied using any neural mass model and, therefore, provides a general, novel, efficient approach to estimate brain model variables that are often difficult to measure. 
LSTM Neural Network, Neural Mass Model, Neurophysiological Process Imaging, EEG, Epilepsy
## 1 Introduction Understanding the brain is one of the most challenging problems of science, engineering and medicine. Currently, due to limitations of existing brain imaging and neurophysiological techniques, many neurophysiological variables cannot be measured directly (1,2). Moreover, even when direct measurement is possible, it cannot be done for all neurons in the brain, let alone within a small brain region. For instance, current techniques relevant to human applications, such as scalp or intracranial electroencephalography (EEG), record the activities of all the neurons of different population types, with a heavy weighting towards cortical pyramidal neurons (3). However, EEG does not have the resolution to track single-cell activity or connections between neurons or distinct populations. Brain modelling provides an alternative way to understand brain parameters and neural activity indirectly, and can be used to study healthy brain function as well as diseases like epilepsy, Alzheimer's disease or Parkinson's disease (4-8).
One of the aims of modelling is to capture essential temporal and/or spatial characteristics of the brain or brain regions of interest, and to understand how they can emerge from underlying neurophysiological variables such as neural membrane potentials, synaptic strengths and time constants. At the level of EEG modelling, several approaches have been applied using NMMs to infer these crucial variables at the neural population level (9-15). Different methods analyse the EEG in either the time domain or the frequency domain. One of the most common inference frameworks is Dynamic Causal Modelling (DCM), which is usually applied in the frequency domain (14). Frequency-domain inference potentially offers greater stability in the estimates because they are determined over windows of data. On the other hand, time-domain inference of neurophysiological variables affords the possibility of capturing instantaneous transitions in these variables. This could be used to understand temporal variations in neural processing in the brain, or to enable timely control interventions (16) such as electrical stimulation to steer the brain away from an epileptic seizure state. These time-resolved variable estimates, in particular model parameters such as population averaged synaptic strengths, can also potentially provide information about the current brain state (e.g., seizure, non-seizure, pre-seizure, short seizure, long seizure). This is because inferred model parameters can be used to determine where the brain dynamics are currently positioned with respect to the model's bifurcation space. Moreover, such information could be used to predict, or to detect, different brain states. Inferring population averaged membrane potentials, synaptic strengths and time constants, either locally or across the brain, from a limited number of EEG measurements is a challenging task with observability issues at play (17). Depending on the spatial resolution of the population averaging, there are typically fewer EEG measurements than the number of variables being inferred. Although neural population models can generate data resembling EEG, there are no closed-form solutions for the model parameters given the observation data. Indeed, there may be multiple solutions within the state space given the observation data; that is, there are multiple mathematical solutions to the same observation, and not all are correct. Even though some solutions are mathematically attainable, they are not necessarily biologically plausible. Developing a technique that tracks neurophysiological processes in a way that is plausible both mathematically and biologically will open new vistas for tracking and imaging brain states, as well as controlling them. Regarding methods that have been used for time domain-based inference of neurophysiological processes given EEG data, the Kalman filter and its nonlinear variants have been applied (10,12,13,18). They address the problem of estimating the state of a discrete-time controlled process through stochastic difference equations (19). Given the covariance matrices of the process and measurement noise, a Kalman filter balances model and measurement uncertainty to track the state vector of the model. However, it is a local search method that requires an initial guess of the expected state vector and its covariance. As a result, state vector estimates can deviate far from the true state vector if the initial guess is not close to the correct state vector.
Furthermore, a solution for the Kalman gain and the covariance is needed. Since an arbitrary NMM can potentially be very high dimensional, this solution is not always easy to compute, and the initial guess is hard to determine because the neurophysiological variables cannot be measured in most cases. A solution that does not require an initial guess and is easy to apply given an arbitrarily complex model is needed. Long Short-Term Memory (LSTM) is a recurrent neural network that can "remember" older information in a time-series sequence (20). Therefore, it is a potential approach for solving the problem of inferring underlying neurophysiological processes (i.e., the state vector and parameters of NMMs) from EEG data (21,22). In this paper, we compare the LSTM approach with a nonlinear variant of Kalman filtering. ## 2 Methodology ### Brain Modelling Scalp and intracranial EEG record electrical activity signals captured on the scalp and on the surface of the brain, respectively. These signals primarily represent the macroscopic activity of the cerebral cortex. A single EEG signal is recorded by a single electrode, and is a spatially and temporally smoothed version of neuronally generated electrical potentials (23). Intracranial EEG is invasive, and it is difficult to place enough electrophysiological recording electrodes in a given cortical location to understand how the underlying neurophysiological processes give rise to the EEG. As a result, modelling is a way to bridge the EEG data and the underlying neurophysiological processes. There are predominantly two approaches to building a mathematical model of the brain: the "bottom-up approach" and the "top-down approach" [24]. The bottom-up approach forms a macroscopic model of the brain or a brain region by coupling a network of microscopic models of individual neurons (e.g., Hodgkin-Huxley neurons). At the spatial scale relevant to modelling EEG signals, the large number of neurons involved in this kind of model, as well as the heterogeneity and non-local connectivity of neurons, makes this approach largely impractical for understanding which key population-level neurophysiological variables influence macroscopic dynamics. On the other hand, the top-down approach defines dynamical models based on phenomenological observations and emergent behaviour. This means that the mathematical model is built to model large-scale signals, such as EEG, through the interactions of neural populations, where the neural populations only coarsely model the average properties of the underlying detailed neuronal networks. Although it would be useful to model and infer the detailed microscopic neurophysiological processes, this is difficult as a result of the aforementioned experimental and observability issues, as well as computational intractability. As a result, we focus here on inferring population-level neurophysiological processes using top-down models. We focus on one of the most common NMMs used to model the cerebral cortex, the Jansen-Rit model (9,25). The Jansen-Rit model was developed from the model of Lopes Da Silva et al. [26], who proposed a two-population neural field model to study spontaneous EEG generation and alpha frequency rhythms, and reduced its complexity by aggregating the activities within a single cortical column. Jansen et al. [25] constructed a similar model of pyramidal neurons to show that the mechanisms that guide the evolution of evoked potentials and spontaneous EEG signal generation are the same.
Jansen and Rit [9] extended the model with a third population of local excitatory interneurons that provide feedback to the pyramidal neuron population. The latter model has three populations: pyramidal neurons, inhibitory interneurons and excitatory interneurons. Figure 1 illustrates the model. The spatially averaged postsynaptic potential of population \(m\) arising as a result of input from pre-synaptic population \(n\) is expressed as \[v_{mn}(t)=\alpha_{mn}\int_{-\infty}^{t}h_{mn}(t-t^{\prime})\phi\big{(}v_{n}(t^ {\prime})\big{)}dt^{\prime}, \tag{1}\] where \(\alpha_{mn}\) is the population averaged synaptic connectivity strength. The post-synaptic response kernel is given by \[h_{mn}(t)=\eta(t)\frac{t}{\tau_{mn}}\exp\Big{(}-\frac{t}{\tau_{mn}}\Big{)}, \tag{2}\] where \(\eta(t)\) is the Heaviside step function and \(\tau_{mn}\) is the population averaged synaptic time constant. Figure 1: Jansen & Rit neural mass model, time series recordings and parameter values. Structure of the model. The model consists of three populations: pyramidal neurons, inhibitory interneurons and excitatory interneurons. The output of each population is a firing rate, which is transformed to changes in the mean membrane potentials of connected neural populations. Moreover, \(\phi(v)\) is a sigmoid function that transforms the membrane potential to a firing rate, given by \[\phi(v)=\frac{1}{2}\Big{(}\text{erf}\left(\frac{v-v_{0}}{\varsigma}\right)+1 \Big{)}, \tag{3}\] where \(v_{0}\) is a threshold and \(\varsigma\) is the slope of the sigmoid function, which is also the variance of the firing thresholds of the population, assuming the firing thresholds follow a Gaussian distribution. The mean membrane potential of population \(m\) is the sum of the synaptic contributions, \[v_{m}(t)=\sum_{n=1}^{N}v_{mn}(t). \tag{4}\] The convolution in equation (1) can also be written as two coupled, first-order, ordinary differential equations (27), \[\frac{dv_{mn}}{dt}=z_{mn} \tag{5}\] \[\frac{dz_{mn}}{dt}=\frac{\alpha_{mn}}{\tau_{mn}}\phi(v_{n})-\frac{2}{\tau_{mn}}z_{mn}-\frac{1}{\tau_{mn}^{2}}v_{mn}, \tag{6}\] where \(\tau_{mn}\) is a lumped time constant, and \(m\) and \(n\) indicate the post-synaptic and pre-synaptic neural populations, respectively. Furthermore, we can express the neural mass model in matrix-vector notation so the operations can be expressed more compactly. The neural mass model can be expressed in matrix notation as \[\dot{x}(t)=Ax(t)+B\vec{\phi}\big{(}Cx(t)\big{)} \tag{7}\] \[y(t)=Hx(t)+\nu(t), \tag{8}\] where \(H\) is the observation matrix, \(\nu(t)\sim N(0,R)\) is the observation noise (10), and \(y(t)\) is the observed membrane potential of the pyramidal population, which is considered to be the contributor to the generation of the EEG signal in our model. The matrix \(A\) is defined by the membrane time constants, \[A=\text{diag}(\Psi_{1}\ \ldots\ \Psi_{n}), \tag{9}\] where \[\Psi_{n}=\begin{bmatrix}0&1\\ -\frac{1}{\tau_{mn}^{2}}&-\frac{2}{\tau_{mn}}\end{bmatrix}. \tag{10}\] The matrix \(B\) contains the synaptic gains from internal inputs, and has the form \[B=\text{diag}(0\ \alpha_{1}\ \ldots\ 0\ \alpha_{n}).
\tag{11}\] The matrix \(C\) is the adjacency matrix that defines the connectivity structure of the model, \[C=\begin{bmatrix}0&0&\ldots&0&0\\ c_{2,1}&0&&c_{2,N_{x}-1}&0\\ \vdots&\ddots&&\vdots\\ 0&0&&0&0\\ c_{N_{x},1}&0&\ldots&c_{N_{x},N_{x}-1}&0\end{bmatrix}, \tag{12}\] which is a matrix of zeros and ones, where one indicates that there is a connection between two populations and zero indicates that there is no connection. In order to apply a nonlinear Kalman filter to this model to infer both the state vector and the parameters, we define a vector of parameters as \(\theta=\big{[}u\ \alpha_{pe}\ \alpha_{pi}\ \alpha_{ip}\ \alpha_{ep}\big{]}^{T}\). This set of parameters corresponds to the input \(u\) to the model and the population averaged connection strengths between the three populations. Moreover, we assume the time constants of the model to be constant to simplify the estimation problem. The above parameters were combined with the state vector \(x\) to form the augmented state vector, \[\xi=[x^{T}\ \theta^{T}]^{T}. \tag{13}\] The augmented state-space model is then \[\xi_{t}=A_{\theta}\xi_{t-1}+B_{\theta}\phi(C_{\theta}\xi_{t-1})+W_{t-1}, \tag{14}\] where \(W_{t}\) is Gaussian noise. ### Analytic Kalman Filter Before defining the novel LSTM-based filter for state vector and parameter estimation, this section outlines the Kalman filter applied as a benchmark. In particular, a filter referred to as the Analytical Kalman Filter (AKF) is applied. This filter is highly stable and accurate and was developed in prior work (2,10,27) that evolved from deriving the Kalman filter for general nonlinear NMMs using the specific sigmoidal non-linearity in equation (3). For the sake of brevity, we refer the reader to these prior works for greater mathematical insight. Moreover, links to code provided at the end of this paper give the exact specification of the implementation of the AKF. Here, a brief description of the main computations of the AKF is provided. The aim of the AKF is to find the most likely posterior distribution of the augmented state given the previous measurements under Gaussian assumptions. Such a posterior distribution is characterised by its _a posteriori_ state vector estimate and its covariance, \[\hat{\xi}_{t}^{+}=E[\xi_{t}\mid y_{1},y_{2},\cdots,y_{t}] \tag{16}\] \[\hat{P}_{t}^{+}=E\left[\big{(}\xi_{t}-\hat{\xi}_{t}^{+}\big{)}\big{(}\xi_{t}-\hat{\xi}_{t}^{+}\big{)}^{\top}\right]. \tag{17}\] The AKF proceeds in two steps: prediction and update. During prediction, the prior distribution (obtained from the previous estimate) is propagated through the model equations. This step provides the so-called _a priori_ estimate, which is also a Gaussian distribution with mean and covariance, \[\hat{\xi}_{t}^{-}=E[\xi_{t}\mid y_{1},y_{2},\cdots,y_{t-1}] \tag{18}\] \[\hat{P}_{t}^{-}=E\left[\big{(}\xi_{t}-\hat{\xi}_{t}^{-}\big{)}\big{(}\xi_{t}-\hat{\xi}_{t}^{-}\big{)}^{\top}\right]. \tag{19}\] During the update, the _a posteriori_ state vector estimate is determined by correcting the _a priori_ state vector estimate with recorded (EEG) data by \[\hat{\xi}_{t}^{+}=\hat{\xi}_{t}^{-}+K_{t}\big{(}y_{t}-H\hat{\xi}_{t}^{-}\big{)}, \tag{20}\] where \(K_{t}\) is the Kalman gain (18), \[K_{t}=\frac{\hat{P}_{t}^{-}H^{\top}}{H\hat{P}_{t}^{-}H^{\top}+R}. \tag{21}\] The _a posteriori_ state vector estimate covariance is then updated using the Kalman gain, \[\hat{P}_{t}^{+}=(I-K_{t}H)\hat{P}_{t}^{-}.
\tag{22}\] After each time step, the _a posteriori_ estimate becomes the prior distribution for the next time step, and the process repeats for each new recorded data point. The AKF requires the _a posteriori_ state vector estimate and its covariance to be initialised at time \(t=0\). The special features of the AKF are that the state vector estimate and its covariance, along with the Kalman gain, were derived using the fully non-linear Jansen-Rit model. While the AKF framework can be applied to multichannel EEG data in a multivariate fashion, for simplicity this paper focuses on estimating the state vector and parameters of the Jansen-Rit model using a single channel of EEG data (analogous to the output observation/measurement of the Jansen-Rit model) as input to the filter. ### A novel LSTM filter for state and parameter estimation The Recurrent Neural Network (RNN) was introduced mainly to deal with time series data, as in speech recognition and natural language processing (28). An ordinary feed-forward neural network only processes independent data points. However, when data points depend on previous data points, as in time series data, an RNN incorporates the dependency by providing feedback to the next timestep. In this way, the final output depends not only on the input but also on the outputs of previous neurons. Long Short-Term Memory (LSTM) is a variation of the RNN that improves its performance (20). Since gradient descent is applied to train neural networks, the gradient might explode or vanish after applying the sigmoid function over and over again, limiting training to a certain number of discrete time steps to avoid this problem (29,30). However, the LSTM provides stability to bridge many more discrete time steps by enforcing constant error flow within special memory cells. LSTM neural networks incorporate memory cells and gate units to convey useful information about previous states of the neural network to the current state. An LSTM layer consists of multiple recurrently connected memory blocks and three multiplicative gates, known as the input, output and forget gates, that learn to open and close access to the error flow within the memory cell (20,31). For example, an input gate could use inputs from other memory cells to decide whether it should store information in its own memory cell. An extension of the LSTM model is the bidirectional LSTM model, which provides more accurate results (32-34). One of the shortcomings of standard RNNs is that they can only learn from the context of the previous time steps and not from anything after the current time step. A bidirectional LSTM can process data in both directions with separate hidden layers, which are then fed to the same output layer (35). Also, since the LSTM is free to access as much or as little of the data within the given time window as needed, predefining a specific time window for the model is not required (31). Here, a bidirectional LSTM filter is constructed that takes simulated or real EEG as input and predicts the state vector and parameters of the Jansen-Rit model, as well as the corresponding EEG observation. Similar to the AKF, while the framework that follows could potentially be applied to multichannel EEG data in a multivariate fashion, this paper, for simplicity, focuses on using the LSTM filter to estimate the state vector and parameters of the Jansen-Rit model for a single channel of EEG data. The structure of the LSTM model is designed to achieve high performance while maintaining good time efficiency (a minimal architectural sketch is given below).
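As a concrete illustration, the following is a minimal sketch of a two-layer bidirectional LSTM filter of the kind described here, assuming PyTorch; the layer sizes anticipate the grid-search result reported in the next paragraph, and treating those unit counts as per-direction sizes is an assumption of this sketch, not a detail given in the paper.

```python
import torch
import torch.nn as nn

class LSTMFilter(nn.Module):
    """Bidirectional LSTM filter: EEG in, augmented state (states + parameters) out."""
    def __init__(self, n_outputs):
        super().__init__()
        self.lstm1 = nn.LSTM(input_size=1, hidden_size=128,
                             batch_first=True, bidirectional=True)
        self.lstm2 = nn.LSTM(input_size=256, hidden_size=32,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, n_outputs)   # per-timestep readout

    def forward(self, eeg):                    # eeg: (batch, T, 1), standardised
        h1, _ = self.lstm1(eeg)                # (batch, T, 256)
        h2, _ = self.lstm2(h1)                 # (batch, T, 64)
        return self.head(h2)                   # (batch, T, n_outputs)
```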
Since the only non-linearity of the neural mass model is the firing rate function in equation (3), we would expect that at least two neural network layers are required to achieve high performance when predicting parameters from the observation. A grid search experiment was conducted to test the number of neurons in each layer. The selected structure was 128 and 32 neurons in layers 1 and 2, respectively. ### Training Data As real EEG data provides no ground truth about the neurophysiological variables that correspond to the state vector and parameters of the Jansen-Rit model, the LSTM filter was trained on simulated data. This allows the prediction of the state vector and parameters of the model when inputting either simulated or real EEG into the LSTM filter. It is important to generate a variety of training data with different patterns and a wide range of parameters so the model can learn the relationship between signal patterns and how the state and parameters change. Furthermore, it is also crucial to generate data containing oscillations, since there are likely to be many solutions for a constant signal, or fixed point. This is because, when there is no oscillation, different combinations of changes in the external input, or in the input from the inhibitory or excitatory interneurons, may result in the same observation signal. However, changes in the parameters affect the range of external input that generates oscillations, which makes it difficult to find the desired input range when there can be different combinations of parameters. This is because the time constants affect the transformation from firing rate to post-synaptic membrane potential, and thus the ranges over which the population produces oscillations. An effective method to generate oscillatory data over a wide range of parameters is therefore required. Different combinations of inhibitory and excitatory time constants can generate data with different waveforms, frequencies and amplitudes. Normally, the ranges of both the excitatory and inhibitory time constants are considered to be between 10 and 60 ms (36). In addition, the external input also plays an important role in data generation: the range of external input that can produce oscillations depends on the combination of time constants, but the exact range for each combination is unknown. To determine the range of external input, and whether a given combination of time constants is able to generate oscillations, an automatic data generation process was designed. Statistical hypothesis testing is used to determine whether an oscillation exists in a given time-series recording. Two tests are used. If the simulated EEG signal is noise-like, then it will have a Gaussian distribution, so the Anderson-Darling test (37) is used to test whether the data deviate from a Gaussian distribution, which would indicate a possible oscillation. The other test is the Ljung-Box test (38), which tests whether any autocorrelations of the time series are different from zero. Either test can help determine whether the generated data contain an oscillation. Since it is more important to ensure there are no false positives (i.e., rejecting the hypothesis that the data are Gaussian distributed or have zero autocorrelation when it is true), so that no data are generated without oscillations, we set the significance level to \(\alpha=10^{-4}\) (a sketch of this screening step is given below).
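A minimal sketch of this oscillation screen, assuming SciPy and statsmodels, follows; the function and threshold choices are illustrative. In particular, `scipy.stats.anderson` only tabulates critical values down to the 1% level, so the \(\alpha=10^{-4}\) threshold is applied only to the Ljung-Box p-value here.

```python
import numpy as np
from scipy.stats import anderson
from statsmodels.stats.diagnostic import acorr_ljungbox

def contains_oscillation(x: np.ndarray, alpha: float = 1e-4) -> bool:
    # Anderson-Darling: reject Gaussianity if the statistic exceeds the most
    # stringent tabulated critical value (the 1% level in SciPy's table).
    ad = anderson(x, dist="norm")
    non_gaussian = ad.statistic > ad.critical_values[-1]
    # Ljung-Box: reject "all autocorrelations are zero" at level alpha.
    lb = acorr_ljungbox(x, lags=[20], return_df=True)
    autocorrelated = lb["lb_pvalue"].iloc[0] < alpha
    return bool(non_gaussian or autocorrelated)
```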
A flow chart used to generate the training data is shown in Figure 2. We first generate a short recording to check whether a given combination of time constants and external input is able to generate data containing oscillations. If not, then the external input is increased until it does. If it still does not produce oscillations after 15 consecutive increases of the input, the current combination of time constants is considered unable to produce the desired data, and the next combination is used. If the generated data includes oscillations, then the upper and lower bounds of the external input are recorded, and training data is generated with the recorded parameters. Figure 2: Flow chart of the generation of training and testing data. The same flow chart can be used with different NMMs (the Jansen-Rit model is used in this paper). The amount by which the external input is increased is not fixed, as the upper bound could be high when the excitatory time constant is high; thus, the increment is low when the input is low, and higher when it is high. The size of the dataset is made as large as possible within the limits of physical memory, which means that the gap between each combination is kept relatively small and as many examples as possible are provided for training (39). The generated data was shuffled randomly and divided into training, validation and testing data with a ratio of 80:10:10. To improve the performance of the model with different amplitudes, the dataset is standardised. As in many cases with either real or simulated data, the raw amplitude of the EEG data may be on arbitrary scales. To avoid potential issues due to variations in amplitude scale or values outside the training range, the mean and standard deviation were computed for each feature and target variable, and standardisation was implemented for each of them using \(\frac{x-\mu}{\sigma}\). The other reason for doing this is to ensure the loss is reduced for all variables without any bias, as different amplitudes might cause the losses of some of the target variables to be weighted more than others. For example, if one variable has a higher amplitude than another, its error is higher and thus will be reduced more. ### Loss Function The training of an artificial neural network aims to minimise the loss function over all training data. By default, the loss function is the mean squared error of all target variables, which are the state variables and parameters in our case. However, it is not enough for the model error to be minimised only on the training data, as we would like the LSTM model to learn whether the predicted state variables and parameters can represent the observations. More importantly, it should also learn to behave like the NMM. Given an observation signal, the trained LSTM model should be able to predict the state vector and parameters that can recreate the observation signal using the NMM. Since the generation of the training data is based on a discrete grid of parameter settings, even if we reduce the step size within each parameter, it is still possible that the model cannot learn the mathematical relationship between all state variables and parameters. Thus, we link the LSTM model with the NMM by customising the loss function to allow the LSTM model to learn the mathematical relationship between the observation and the state.
### Loss Function The training of an artificial neural network aims to minimise the loss function over all training data. By default, the loss function is the mean squared error of all target variables, which are the state variables and parameters in our case. However, it is not enough for the model error to be minimised only on the training data, as we would like the LSTM model to learn whether the predicted state variables and parameters can represent the observations. More importantly, it should also learn to behave like the NMM. Given an observation signal, the trained LSTM model should be able to predict the state vector and parameters that can recreate the observation signal using the NMM. Since the generation of the training data is based on discrete parameter settings, even though we can reduce the step size within each parameter, it is still possible that the model cannot learn the mathematical relationship between all state variables and parameters. Thus, we link the LSTM model with the NMM by customising the loss function to allow the LSTM model to learn the mathematical relationship between the observation and the state. Since the LSTM model is able to produce the state at the next timestep \(t\) given the state at the current timestep \(t-1\), it is possible to generate all \(t\) states given the state at the current timestep. By comparing the state at time \(t\) with the state predicted by the LSTM model, we can know the error between the LSTM model prediction and the neural mass model prediction. If we can link the neural mass model to the loss function, the LSTM model can learn to minimise both the error of the LSTM predicted values and the neural mass model predicted values. The squared error at a single timestep is defined as \[\text{Squared Error}=\left[\left(\xi_{t}-\hat{\xi}_{t}\right)^{2},\ \left(y_{t}-H\hat{\xi}_{t}\right)^{2}\right], \tag{22}\] where \(\xi\) is the augmented state, \(y\) is the observation of the training data, and \(\hat{\xi}\) and \(H\hat{\xi}_{t}\) are the augmented state and observation predicted by the LSTM model, respectively. The Squared Error term compares the predictions with the true values. By minimising this error, the predictions of both states and measurements will be closer to the truth. We then add to the loss function the model error, which is the error between the state of the LSTM model and the state generated by the neural mass model (40,41), \[\text{Model Error}=\left[\left(\hat{\xi}_{t}-\left(A_{\theta}\hat{\xi}_{t-1}+B_{\theta}\phi\left(C_{\theta}\hat{\xi}_{t-1}\right)\right)\right)^{2},0\right] \tag{23}\] With the augmented state-space model, the estimated augmented state vector at the previous time step \(\hat{\xi}_{t-1}\) can be converted to the current time step. Minimising the squared error between \(\hat{\xi}_{t}\) and the converted state vector \(A_{\theta}\hat{\xi}_{t-1}+B_{\theta}\phi\left(C_{\theta}\hat{\xi}_{t-1}\right)\) links the estimation to the neural mass model, since the training has to minimise the error between its own prediction and the state generated by the NMM. Note that, although \(B_{\theta}\), \(C_{\theta}\) and \(\phi\) are considered to be constant, the matrix \(A\) can vary depending on the time constants; i.e., \(A\) has to change over time. In the customised loss function, \(A\) is calculated at each timestep so the change in state and parameters can be reflected over time. The last aspect of the loss function concerns the rate of change of the connectivity strength parameters, as they are considered to be slowly changing parameters compared to the membrane potentials and simulated EEG. It is possible to add the standard deviation of the parameters to the loss function to limit the amount that they change. However, the parameters need to be adjusted rapidly when they differ substantially from the neural mass model. Thus, the standard deviation is combined with the model error as \[\text{Std}=s(\alpha)\left[\left(\hat{\xi}_{t}-\left(A_{\theta}\hat{\xi}_{t-1}+B_{\theta}\phi\left(C_{\theta}\hat{\xi}_{t-1}\right)\right)\right)^{2},0\right]\cdot k \tag{24}\] \[s(\alpha)=[0,\ldots,0,\text{std}(\alpha),0,\ldots,0]^{\top}, \tag{25}\] where \(\alpha\) is the vector of the four connectivity strength parameters and \(k\) is an adjustable weight that is set to 0.1 by default. The final loss is the summation of equations (22), (23) and (24).
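To make the construction concrete, the following is a minimal single-timestep PyTorch sketch of the combined loss, assuming the state-space pieces \(A_t\), \(B\), \(C\), \(H\) and the firing-rate function \(\phi\) are available as tensors and a callable; the scalar treatment of the standard-deviation term is a simplification of Eqs. (24)-(25).

```python
import torch

def combined_loss(xi_t, xi_prev, xi_true_t, y_t, A_t, B, C, H, phi,
                  alpha_std, k=0.1):
    """One-timestep form of the loss; summation over time is elided.
    xi_t, xi_prev : LSTM-predicted augmented states at t and t-1
    xi_true_t, y_t: ground-truth state and observation at t
    A_t           : transition matrix rebuilt from the current time constants
    """
    state_err = torch.mean((xi_true_t - xi_t) ** 2)       # Eq. 22, states
    obs_err = (y_t - H @ xi_t) ** 2                       # Eq. 22, output
    xi_nmm = A_t @ xi_prev + B @ phi(C @ xi_prev)         # one NMM step
    model_err = torch.mean((xi_t - xi_nmm) ** 2)          # Eq. 23
    std_term = k * alpha_std * model_err                  # Eq. 24, scalar form
    return state_err + obs_err + model_err + std_term
```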
### Testing on Simulated EEG Data On top of the testing data split from the training dataset, another testing dataset with the same range of parameters was generated, but with smaller steps of \(\tau_{e}\) and \(\tau_{i}\). This dataset avoids using the exact parameters of the original training dataset. We tested both the AKF and the LSTM model on the same dataset. To evaluate the impact of incorrect initialisation of the AKF, two settings were used. The first setting was a correctly initialised Kalman filter, whose initial values were exactly the same as the true initial parameter values. The second setting used default parameters, where the excitatory and inhibitory time constants were 0.01 s and 0.02 s, respectively, which corresponds to the parameter values that generate a strong alpha-like (8-12 Hz) rhythm (9). To ensure all state vectors and parameters are on the same amplitude scale numerically, the testing results are standardised. We compute the Root Mean Squared Error (RMSE) between the truth and the prediction for each state variable and parameter, and find the difference between the AKF and the LSTM model results. The RMSE for each parameter is \[\text{RMSE}=\sqrt{\sum_{i=1}^{n}\left(\frac{x_{i}-\mu}{\sigma}-\frac{\hat{x}_{i}-\mu}{\sigma}\right)^{2}/n}, \tag{26}\] where \(\mu\) and \(\sigma\) are the mean and standard deviation of the testing dataset for each parameter, respectively, and \(x_{i}\) and \(\hat{x}_{i}\) are the truth and prediction, respectively. Equation (26) is used for each estimation method to extract the raw RMSE, and the values are compared across methods (a sketch of this computation is given at the end of this subsection). Testing the robustness of the model is also of great importance, since typically there will be strong noise in real EEG data. As the standard deviation of the testing observation data is around 40, we add Gaussian noise with a mean of 0 and a standard deviation of 4, representing 10% Gaussian noise added to the testing data. Both the LSTM model and the perfectly initialised Kalman filter are tested using the noisy testing data. We also test the LSTM model on randomly generated data involving time-dependent parameter changes. The time constants are randomly chosen from the range 0.01-0.06 s, and the model is simulated with the selected time constants held fixed for 5 s before the next combination of time constants. Note that there is a 5 s buffer between each 5 s fixed-time-constant simulation segment, where we connect the parameter values between segments with a straight line to simulate transitions between different brain states, where the parameters are not constant. We assume the parameters are slowly changing, so a straight line between different stable states is sufficient for simulation purposes. Since the training was done on constant parameters with noise only, testing on randomly generated data is needed to know whether the LSTM filter can work in a scenario where parameters are time-varying.
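The standardised RMSE of Eq. (26) can be computed per variable as in the following NumPy sketch, where rows are time samples and columns are the individual state variables and parameters.

```python
import numpy as np

def standardised_rmse(truth: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """truth, pred: arrays of shape (n_samples, n_variables).
    mu and sigma are taken from the testing dataset, as in Eq. (26)."""
    mu = truth.mean(axis=0)
    sigma = truth.std(axis=0)
    z_err = (truth - mu) / sigma - (pred - mu) / sigma
    return np.sqrt(np.mean(z_err ** 2, axis=0))
```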
### Testing on Real EEG Data Apart from simulation data, we also use intracranial EEG (iEEG) data collected from one patient with epilepsy, chosen from data recorded continuously from 15 patients for up to three years (42). The patients were implanted with 16 intracranial electrodes around their seizure onset areas, with a sampling rate of 400 Hz. The electrode array was wirelessly connected to a portable device. Seizures were automatically detected by the advisory device and were reviewed by clinical experts. We denote the iEEG data from this study as "real data". To demonstrate the practical utility of the LSTM filter, it is applied to a 1 h intracranial EEG recording containing an epileptic seizure to show whether or not we can observe a transition between normal and ictal brain states. Although the actual values of the state variables and parameters are not known, there is an obvious brain state change with seizure onset, so we can see whether the LSTM model may be used to detect or even predict a seizure. Since iEEG data normally becomes more rhythmic when it enters a seizure compared to normal activity, we expect the parameter estimates to become stable after entering the seizure state. ## 3 Results We have chosen some typical time series to show the different parameter estimation results between the perfectly initialised AKF and the LSTM model. As shown in Figure 3, the first column is the default condition, where the excitatory and inhibitory time constants are 0.01 s and 0.02 s, respectively. The second and third columns are examples where the LSTM performs better and where the AKF performs better, respectively. The simulated EEG signals can be fit closely in all scenarios. However, the fits of the parameters are very different. The LSTM model tends to converge rapidly, in about 0.1 s, while the AKF takes more time to find the optimal solution. The LSTM model also appears to fluctuate more than the AKF, largely because the LSTM model has to compute from the provided input, which is the observation signal. When the input has oscillations, the LSTM model can only minimise the changes; it cannot hold its estimates constant like the truth. However, the results of the LSTM model are generally within a reasonable range, with predictions close to the truth. The perfectly initialised AKF tracks the parameters very well, but the filter drifts far away from the correct solution in some cases, as shown in the second column. Figure 4 shows comparisons between the proposed LSTM method and the AKF. The two time constants were selected as the axes of the plots since the time constants have a fixed range (0.01 s to 0.06 s), compared to the external input, which has a more flexible range depending on the time constants. The median of the RMSE is computed over different external inputs, as the AKF sometimes produces errors when it cannot find the nearest positive definite covariance matrix, which leads to extremely large RMSE. The covariance matrix must remain positive definite to ensure inversion is possible, so the nearest positive definite matrix has to be found when it is not. Due to the standardisation of the parameters in the RMSE calculation, most of the raw RMSE results are within the range 0-1. Hence, the minimum and maximum of the colour bars are set to 0 and 1, respectively. The minimum and maximum of the colour bars for the RMSE difference are set to -1 and 1, respectively. As seen in Figure 4 (a), the raw RMSE values are mostly low for the LSTM model, with a few exceptions. The performance is better in the centre of the parameter space compared to the edges. The centre of the space is close to the default parameter point (\(\tau_{e}=0.01,\tau_{i}=0.02\)) for the Jansen-Rit model. The regions of low performance (red areas) arise because little or no oscillation is detected in the observation signal. The perfectly initialised AKF has nearly perfect performance for lower values of the excitatory time constant (the upper parts of the plots), but the result is worse when the excitatory time constant increases. On the other hand, the fixed AKF initialised at the default parameter point only performs well around the region of the default setting. Figure 4 (b) shows the differences between the LSTM filter and the two initialisation cases of the AKF.
Figure 3: Examples of time series recordings and predictions using Kalman filters and LSTM models. Different settings (columns) are given to the NMM to generate artificial EEG recordings. Observations and parameters are displayed to show the correct parameters (truth) and those estimated by the Kalman filters and the LSTM models.

The red zones indicate that the LSTM model has a higher RMSE than the AKF. For the LSTM and the perfectly initialised AKF, apart from the lower part of the graph, where the LSTM model has a much lower RMSE than the AKF, the LSTM model usually has a slightly higher RMSE. Compared with the fixed Kalman filter, the LSTM model usually shows much better performance. Thus, we can conclude that the proposed LSTM model can be considered a generally better tool for estimating the EEG data without knowledge of the initial state variables and parameters. Figure 5 shows the performance with Gaussian noise added to all recordings, with 0 mean and 10% of the standard deviation of the original data. We can see that the performance of the LSTM model did not change much, but the AKF did have more areas where the performance was worse compared to the no-noise case.

Figure 4: Heatmaps of the LSTM and Kalman filter errors. (a) Raw RMSE between the prediction and the truth. The test was run on the LSTM model (top), the perfectly initialised Kalman filter (middle) and the fixed Kalman filter (bottom), whose initialisation was set to the default. The colours indicate the amount of error, ranging from 0 (blue) to 1 (red). (b) The RMSE difference between the LSTM and the Kalman filters. The difference was compared between the LSTM and the perfectly initialised Kalman filter (top), and between the LSTM and the fixed Kalman filter (bottom). The comparison was done by taking the RMSE of the LSTM minus the RMSE of the Kalman filter, which means a positive number indicates the LSTM model having a higher error. The colours indicate the RMSE difference, ranging from -1 (blue) to 1 (red), with no difference shown as grey.

Furthermore, we can also observe from Figure 5 (b) that there are more blue areas compared to Figure 4 (b), showing that the LSTM model is more robust when the data is noisier. To show the performance of the LSTM filter in response to time-varying parameters, we also test on a randomly generated recording with changing parameters, as presented in Figure 6. The AKF is initialised with the initial parameters of the simulation data. Its parameter estimates are stable at the beginning of the recording when the true parameters are stable, but they diverge when the true parameters change. Although the AKF can follow the observation at the beginning (before 20 s), it does not follow the parameter space correctly after this period. The LSTM model shows a much better fit to both the observation and the parameters. At times the parameters are not followed precisely, but overall the LSTM model tracks within a reasonable range. As a result, the figure shows that the LSTM model is able to track time-varying changes in the true parameter values that might be observed when the brain transitions between different dynamical states.

Figure 6: Test on a randomly generated recording. (a) True parameters and the predictions made by the LSTM and the AKF. The AKF is perfectly initialised at the beginning, but starts to lose track as the parameters change, while the LSTM stays mostly on track. (b) Observation signals. The LSTM model follows the truth much better than the AKF.

The final test involves using real intracranial EEG data (42) containing an epileptic seizure to show whether the LSTM filter can detect brain state changes. In Figure 7, the seizure start and end times are labelled with vertical red bars, though they are difficult to differentiate in the plot of the full hour of data.
In Figure 7 (a), which shows the estimation results for one hour of data, we can see that all parameters fluctuate frequently and over a large range, which reflects the noisy EEG recording with no stable state to track. However, upon entering the seizure state, the changes in the parameters are limited, as seen in Figure 7 (b), where the parameters are more stable during the seizure and fluctuate more before and after it. We can also see from Figure 7 (b), where we zoom into the minute when the seizure was about to begin, that the parameters transition between different values. This shows some promise that the LSTM model may be able to detect epileptic seizure transitions. ## 4 Discussion A novel LSTM filter has been presented to provide efficient neurophysiological variable estimates derived from EEG data, which does not require initialisation of the filter and can track dynamic changes in brain states. Moreover, the approach is sufficiently general that it can potentially be applied to other neural mass models. Although the overall performance of the LSTM model is better than, or at least similar to, the perfectly initialised AKF, there are some noticeable drawbacks. In most cases, the performance of the estimated external input is usually the worst among all state variables and parameters, especially in the heatmaps shown in Figures 4 and 5. This is because there are few mathematical constraints on the external input, as it acts as a way to adjust a DC shift that is sometimes not achievable by the membrane potential linked to the pyramidal population.

Figure 5: Heatmaps of the LSTM and the Kalman filter after adding Gaussian noise. The same method was applied, and the perfectly initialised Kalman filter was tested. (a) Raw RMSE between the prediction and the truth. The test was run on the LSTM model (top) and the perfectly initialised Kalman filter (bottom). The colours indicate the amount of error, ranging from 0 (blue) to 1 (red). (b) The RMSE difference between the LSTM and the Kalman filter. The comparison was done by taking the RMSE of the LSTM minus the RMSE of the Kalman filter, which means a positive number indicates the LSTM model having a higher error. The colours indicate the RMSE difference, ranging from -1 (blue) to 1 (red), with no difference shown as grey.

Other parameters that are more constrained mathematically, by the relationship between the neural mass model equations and the EEG output of the model, have better estimation performance. On the other hand, converting the firing rate of the external input to the membrane potential via the post-synaptic potential kernel depends on the excitatory time constant, so the external input has a dynamic range instead of a fixed range like the time constants, which makes it harder to track. However, the situation improves when the data becomes noisier, as the LSTM remains robust compared to the AKF. Time efficiency is also one of the important features of the artificial neural network. The AKF only considers one previous timestep, but the LSTM model considers 400 timesteps. This means the computation of the LSTM model is more complex. When both methods are computed on a CPU, the AKF takes about 516 seconds to run on a one-hour recording, while the LSTM model takes about three times longer: about 1720 seconds. With the utilisation of a GPU, the LSTM model is significantly faster and is able to run on the same recording in 14 seconds.
This suggests the LSTM filter can be scaled up for larger, more complex neural mass models for more detailed inference-based imaging tasks (2), while still providing tractable run times. Another advantage of the LSTM filter is that it can potentially be trained on a very specific region of the parameter space, in particular only within biologically realistic regions of the model parameter space. This means it will be much more likely to produce estimates in such a region as well. The AKF, on the other hand, depends on the numerical stability of the neural mass model associated with the sampling rate being used, and it is also not constrained from producing estimates outside biologically plausible regions of the parameter space. Thus, in some cases the AKF can produce unrealistic estimates. In this paper, the training data generated by the neural mass model sometimes did not comply with biological realism. For example, the firing rate sometimes achieved rates as high as ten thousand spikes per second, which is not biologically realistic. This is because the purpose of the experiment was to show whether the LSTM filter is able to track the state vector and parameters regardless of the range of the parameters in general. Nevertheless, it would be straightforward to train the LSTM filter on completely biologically realistic regions of parameter space, in order to only produce biologically realistic estimates. Since there is only one input dimension, the observation in our training data, we would like to consider whether adding more features would be beneficial. We could extract the band power from one second prior to the current timestep to represent the change in strength over different frequency bands. Since the frequency changes might be minor, especially in the low-frequency area, we would like to compute narrow bandwidths within the low frequencies and wide bandwidths for the high frequencies (43); a sketch is given at the end of this section. However, this feature extraction process takes a long time, leading to time efficiency issues. The LSTM filter presented here is accessible through the link provided in the Code Availability section. The package has been prepared such that the LSTM filter can be applied in a univariate fashion to multichannel EEG, intracranial EEG or EEG/MEG source imaging data to derive single-channel neurophysiological variable estimates across all channels. Future work will consider the multivariate multichannel estimation problem.
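For reference, the following sketches the band-power features discussed above; the band edges (narrow at low frequencies, wider at high frequencies) are our illustrative assumption rather than the exact scheme of reference (43), and the 400 Hz sampling rate matches the iEEG data.

```python
import numpy as np
from scipy.signal import welch

# Narrow bands at low frequencies, wider bands at high frequencies (assumed).
BANDS = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 60), (60, 120)]

def band_powers(window: np.ndarray, fs: float = 400.0) -> np.ndarray:
    """Band power of the 1 s window preceding the current timestep."""
    freqs, psd = welch(window, fs=fs, nperseg=len(window))
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS])
```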
## 5 Conclusion The proposed methodology has demonstrated competitive accuracy, high time efficiency, and the potential to be applied to real-world scenarios such as epileptic seizure detection. The results have shown that the accuracy of the proposed approach is comparable with a perfectly initialised AKF and much better than an AKF initialised with default parameters. In a parameter-changing environment, the LSTM filter is able to track the changing parameters much better than the AKF, even if the AKF is perfectly initialised. With GPU acceleration, the time efficiency is also greatly improved, with the time cost reduced by 96%. Finally, after testing on real data with an epileptic seizure, the LSTM filter can detect instantaneous transitions in brain states and, therefore, holds promise as a potential application for detecting or predicting seizures. Further research will focus on applying the proposed method to different scenarios to infer, image and understand the neurophysiological processes underlying different kinds of electrophysiological and electromagnetic brain recordings.

Figure 7: Test on real data with a seizure. (a) A one-hour recording containing an epileptic seizure. The seizure is indicated by vertical red bars. (b) Zoomed in on the minute when the seizure happened. The vertical red bars indicate the start and the end of the seizure.

## 6 Code Availability The data generation, model training and random recording testing codes are available at [https://github.com/yueyang6/brain_state_reconstruction](https://github.com/yueyang6/brain_state_reconstruction).
2306.08336
Global-Local Processing in Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have achieved outstanding performance on image processing challenges. Actually, CNNs imitate the typically developed human brain structures at the micro-level (Artificial neurons). At the same time, they distance themselves from imitating natural visual perception in humans at the macro architectures (high-level cognition). Recently it has been investigated that CNNs are highly biased toward local features and fail to detect the global aspects of their input. Nevertheless, the literature offers limited clues on this problem. To this end, we propose a simple yet effective solution inspired by the unconscious behavior of the human pupil. We devise a simple module called Global Advantage Stream (GAS) to learn and capture the holistic features of input samples (i.e., the global features). Then, the GAS features were combined with a CNN network as a plug-and-play component called the Global/Local Processing (GLP) model. The experimental results confirm that this stream improves the accuracy with an insignificant additional computational/temporal load and makes the network more robust to adversarial attacks. Furthermore, investigating the interpretation of the model shows that it learns a more holistic representation similar to the perceptual system of healthy humans
Zahra Rezvani, Soroor Shekarizeh, Mohammad Sabokrou
2023-06-14T08:08:08Z
http://arxiv.org/abs/2306.08336v1
# Global-Local Processing in Convolutional Neural Networks ###### Abstract Convolutional Neural Networks (CNNs) have achieved outstanding performance on image processing challenges. Actually, CNNs imitate the typically developed human brain structures at the micro-level (Artificial neurons). At the same time, they distance themselves from imitating natural visual perception in humans at the macro architectures (high-level cognition). Recently it has been investigated that CNNs are highly biased toward local features and fail to detect the global aspects of their input. Nevertheless, the literature offers limited clues on this problem. To this end, we propose a simple yet effective solution inspired by the unconscious behavior of the human pupil. We devise a simple module called Global Advantage Stream (GAS) (\(\mathcal{G}\)) to learn and capture the holistic features of input samples (i.e., the global features). Then, the GAS features were combined with a CNN network as a plug-and-play component called the Global/Local Processing (GLP) model. The experimental results confirm that this stream improves the accuracy with an insignificant additional computational/temporal load and makes the network more robust to adversarial attacks. Furthermore, investigating the interpretation of the model shows that it learns a more holistic representation similar to the perceptual system of healthy humans 1. Footnote 1: Source code is available here ## 1 Introduction Deep learning methods, as a cutting edge of artificial intelligence, are trained by filtering information through multiple hidden layers. The current DNNs can mimic the human brain at the micro level (neuronal level) but fail to deal with macro-level network behavior (cognitive level). [Baker et al., 2018] show that deep convolutional neural networks have a stronger tendency to use texture information than the general shape. They trained CNNs to categorize images using artificial images with misleading textures and suggest that texture plays a vital role in CNNs. They claimed that deep learning systems have no sensitivity to the overall shape of images and show that benchmark CNNs cannot distinguish the bounding contours of objects, whereas humans have been shown to attend to the features of the general shape before considering local features such as texture [Hermann et al., 2020]. The stimuli in the natural world have an inherently hierarchical architecture: the general form, or global level, and the detailed texture, or local level. Global/Local processing (GLP) has been one of the important debates in psychology about the human perceptual system throughout the past four decades and is still an ongoing challenge. The Global Precedence Effect (GPE), as a modern version of Gestalt theory, claims that individuals process global features faster than local details [Navon, 1977]. GPE has been investigated in a series of experiments with hierarchical compound stimuli consisting of a global letter/shape formed by the configuration of local letters/shapes (see Fig. 1), where the local and global levels of the stimuli are independent. This phenomenon is responsible for the human ability to generalize. So, it helps us perceive the forest before the trees and categorize objects correctly, despite differences in their details [Navon, 1977]. Pupil diameter is subconsciously controlled by the brain in response to environmental stimuli.
As the pupil shrinks, the reflection of images is focused on the fovea (located in the center of the retina), which is made up mostly of cone photoreceptor cells. These cells are responsible for receiving high-level features, or details. In contrast, the area around the fovea is filled with rod cells. These cells are responsible for low-level features and have a low spatial acuity. As the pupil opens, the reflection of the environment is received by both cell types. The pupil is primarily regulated by prevailing light levels but is also modulated by perceptual and attentional factors. [Sabatino DiCriscio et al., 2018] found through psychophysical experiments with hierarchical stimuli that individuals show a characteristic constriction of the pupil waveform during the selection of local information relative to global information. They indicate that pupil changes may serve as a visual filtering mechanism important for attentional selection.

Figure 1: Examples of hierarchical compound stimuli that are commonly used to evaluate the ability to detect at the global and local levels separately in diagnostic applications. They were used to design the global advantage layer in experiment 1.

This work represented the first characterization of pupil response in the context of selective attention and suggested that mechanisms underlying the earliest stages of visual processing could be relevant for perception and visual selection. Also, it has been observed that children with ASD showed pupil constriction as a response to images of faces (Anderson et al., 2006), whereas neurotypical children showed pupil dilation in response to the same stimuli (de Vries et al., 2021). There is other evidence that pupillometry reliably tracks inter-individual differences in perceptual style as a biomarker, and that individuals with typically developed perception distribute attention to both surfaces in a more global, holistic style (Turi et al., 2018). Recently it has been shown that the Vision Transformer (ViT) performs better at modeling the holistic features of images (Dosovitskiy et al., 2020). It splits the images into fixed-size patches, embeds them, and feeds them to a Transformer Encoder (TE) (Dosovitskiy et al., 2020). TE was inspired by the vanilla transformers introduced for NLP tasks (Vaswani et al., 2017). (Aldahdoodi et al., 2021) assessed such models as more robust to adversarial examples. Nevertheless, ViT models are computationally expensive. In this paper, the Global Advantage Stream (GAS) is added to increase the accuracy and robustness of common CNNs. The purpose of this stream is to provide a holistic view to these networks, which not only increases their accuracy in categorizing images but also dramatically improves their resistance to common attacks. The novelty of this study is that, unlike previous state-of-the-art research, the design of GAS is directly inspired by the subconscious function of the human pupil. Also, the function of this stream is very similar to early robot-assisted therapeutic intervention methods for educating autistic children, which facilitate decoding of the overall features of the perceptual environment by removing details, helping them return to the right track based on the GPE observed in typical individuals. The main contributions of this paper are: * The presented model, unlike CNNs, can consider global features in addition to local ones. The method, inspired by the subconscious function of the human pupil, extracts both sets of features simultaneously.
The feature sets are concatenated to classify the images based on both global form and detailed texture. The existence of global features in the feature bank empowers the model to follow a top-down attention strategy in addition to the bottom-up attention approach. * It has been shown that the proposed method is both more accurate and more robust than CNNs, while imposing only an insignificant additional computational load on the CNN model. The proposed method also has better interpretability than CNNs. Because of its holistic view, it is more resistant to common adversarial attacks, and better explainability according to the XAI method confirms better localization of the whole object in the images instead of a focus on a local detailed part. ## 2 Method The main objective of this paper is, inspired by unconscious human behavior, to force the deep neural network to learn both global and local representations. This makes the model more accurate and robust. The new model, called the Global/Local Processing (GLP) model, is composed of two main components (i.e., streams) that are concatenated in parallel: (1) the local stream (\(\mathcal{L}\)) and (2) the quick global stream (\(\mathcal{G}\)). \(\mathcal{L}\) is a conventional CNN that inherently learns local features and complex local patterns. To compensate for the inability of such models to learn holistic features, we introduce the \(\mathcal{G}\) stream. \(\mathcal{G}\) is composed of the GAS module, which is described below. In short, this module is made of a smart filter followed by two convolutional layers (feature maps). GAS is responsible for capturing global features, inspired by the subconscious function of the human pupil. Fig. 2 shows the overall schema of the proposed method. The internal details of the \(\mathcal{G}\) and \(\mathcal{L}\) components and the training procedure are explained in the following subsections. ### GAS: Global Advantage Stream The goal of GAS is to extract global features. In fact, this stream is inspired by the subconscious function of the human pupil. During focus, the environment projects only onto the fovea (populated exclusively by cones with high spatial acuity), but in normal situations with dilated pupils, most of the ambient light is received by rods with low spatial acuity.

Figure 2: Overview of the proposed method (GLP model). Global and local features are extracted through separate streams, and then all the features are concatenated to classify the images.

Fig. 3 (a) displays the frequency distribution of these two cell types relative to the distance from the fovea. In the GAS layer, firstly, a smart low-pass filter in the frequency domain is applied to the input image, aiming to attenuate high-frequency content. The most important point about this layer is the automatic setting of the cut-off parameter (\(\alpha\)) according to each input. To find the proper value of \(\alpha\) smartly, we use the entropy criterion. The amount of uncertainty in an entire probability distribution is quantified using the Shannon entropy. The entropy is calculated from the following Equ. 1, where \(\mathcal{I}(\mathcal{X})\) is defined as the self-information of an event \(\mathcal{X}=x\) [Goodfellow et al., 2016]. It has been observed that, by increasing the radius of the Gaussian low-pass filter, the image entropy at first slightly increases; after this phase, the routine continues in reverse.
That is, beyond this point, increasing the parameter decreases the entropy. Based on this finding, the next step is to find the value of \(\alpha\) that maximizes the entropy of the filtered input image, following Equ. 2. Interestingly, by smart filtering selectively with this value, all the local information fades, and instead, the global structure of the image is more readily detected. The value of the optimum \(\alpha\) varies based on each image's structure and size (see Fig. 3 (b)). In GAS, after removing the local details smartly, there are two feature map layers followed by two layers for pooling and batch normalization (see Fig. 2). \[H(x)=\mathbb{E}_{x\sim P}[\mathcal{I}(\mathcal{X})]=-\,\mathbb{E}_{x\sim P}[\log P(x)] \tag{1}\] \[\alpha^{*}=\arg\max_{\alpha}H(\mathit{filteredImage}(X,\alpha)) \tag{2}\] ### Global/Local processing Model The GLP model is designed in such a way that global features are obtained from one stream and local features from another. Afterward, they are concatenated with each other into an ultimate feature bank. Finally, there is a fully connected layer to map the features to the output category. As shown in Fig. 2, the local stream consists of a pre-trained CNN. As discussed in the Introduction (Section 1), CNNs extract features based on local details in images, so most of them can serve as the local stream (\(\mathcal{L}\)) in our model. On the other hand, the global stream (\(\mathcal{G}\)) is made up of a GAS module. \(\mathcal{G}+\mathcal{L}\) **Training:** The training of this hybrid model is done in several steps as follows: 1. For the \(\mathcal{L}\) layer weights, we use the pre-trained weights of common CNN models, and local features are calculated using them. 2. The GAS module is trained on the training data to extract global features. 3. Then, the two feature sets are concatenated, and the feature bank is complete. 4. Finally, the fully connected layer is trained so that it can perform the classification with the best accuracy. In the following section, we will show how GAS extracts global features in the first experiment (Section 3.1). Then we show how this new stream improves the accuracy and robustness of the benchmark models in the second experiment (Section 3.2). After that, we demonstrate that the interpretability of the model increases, via an Explainable AI method (see Fig. 5). ## 3 Results To evaluate the proposed method, we have designed two experiments. Firstly, we investigated how GAS extracts global shape using the Navon dataset, and we showed that the commonly used CNNs could not succeed in this simple task. Then, using an XAI algorithm, we presented how GAS extracts exactly the global shape, while the others only focus on local details.

Figure 3: a) A constricted pupil reflects all the ambient light onto the fovea, covered by cone receptors with high acuity. But when the pupil dilates, image reflections are received simultaneously by rod receptors with low acuity, based on the cells' distribution in the human eye. Rod cells are more responsible for peripheral vision [Wandell, 1995]. b) Examples of smart filtering visualization. **First column:** input images, **Second column:** diagrams of changes in the amount of entropy by increasing \(\alpha\), **Third column:** the optimum \(\alpha\) that maximizes the value of entropy in the filtered image, and **last column:** outputs of applying the proposed smart filtering using the optimum \(\alpha\).
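A minimal NumPy sketch of this smart filtering (Eqs. 1-2) follows; the grid of candidate \(\alpha\) values and the 256-bin histogram used for the entropy estimate are our assumptions, and images are assumed to be grayscale arrays scaled to [0, 1].

```python
import numpy as np

def shannon_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # Eq. 1 on pixel intensities

def gaussian_lpf(img, alpha):
    """Gaussian low-pass filter of radius alpha, applied in the frequency domain."""
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    mask = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * alpha ** 2))
    spec = np.fft.fftshift(np.fft.fft2(img))
    out = np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
    return np.clip(out, 0.0, 1.0)

def smart_filter(img, alphas=np.linspace(1.0, 60.0, 60)):
    """Return the filtered image at the entropy-maximising cut-off (Eq. 2)."""
    filtered = [gaussian_lpf(img, a) for a in alphas]
    best = int(np.argmax([shannon_entropy(f) for f in filtered]))
    return filtered[best], float(alphas[best])
```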
We also evaluated the GLP model accuracy using the Caltech101 dataset [Fei-Fei et al., 2004]. Furthermore, we evaluated the robustness of our model against adversarial attacks. Moreover, to demonstrate the interpretability of our method, visualizations of feature maps are presented. We have exploited Gradient-weighted Class Activation Mapping (Grad-CAM) [Selvaraju et al., 2017] to showcase the interpretability of the models in both experiments. Grad-CAM highlights the most important areas in the image by taking the gradient of the classification score with respect to the final convolutional layer. All experiments are implemented using PyTorch [Paszke et al., 2019] and performed using an NVIDIA GeForce RTX 2060 GPU. ### GAS module design and comparison In this experiment, we have compared the performance of the GAS model with two commonly used CNN networks, Resnet18 [He et al., 2016] and InceptionV3 [Szegedy et al., 2015], on classifying the Navon compound stimuli dataset [Navon, 1977] based on the global/local shape. We trained the GAS network and fine-tuned pre-trained Resnet18 and Inception-v3 models with simple augmented shape images (3000 images in two categories) to recognize the shapes of circles and squares. Then, we evaluated all the networks with the Navon compound stimuli dataset at the local/global level. The initial learning rate for training the GAS network and fine-tuning the pre-trained models is equal to \(2e^{-3}\) with a decay rate. Also, we used the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 and a batch size of 64 for all the models. All the information about the computational and time load is summarized in Appendix A. **Navon Dataset** [Navon, 1977] Navon is a set of compound stimuli with independent information at the local vs. global level (see Fig. 1). This experiment used geometrical shapes (circle vs. square) at both levels in different sizes, sparsities, and line widths (4152 hierarchical images). The dataset is used to test global/local shape detection ability. The Navon dataset and the dataset of simple shape images are included in the supplementary material. **Results**. The results of this experiment are summarized in Table 1, which lists the top-1 model accuracy on the Navon dataset for both local and global image shapes. As illustrated, GAS outperforms the two other CNNs in global shape detection, while Resnet18 and InceptionV3 obtain better performance for local shape detection.

\begin{table} \begin{tabular}{l l l} \hline Model & Acc on Local & Acc on Global \\ \hline Resnet18 & **85.24** & 53.65 \\ \hline InceptionV3 & **75.6** & 62.15 \\ \hline Global Advantage Stream & 56.42 & **86.28** \\ \hline \end{tabular} \end{table} Table 1: Top1 Accuracy (%) of the models in global/local shape detection tasks on the Navon dataset

**Visualization**. Fig. 4 presents the visualized feature maps in the global shape detection task. These visualizations confirm the power of GAS as a global feature extractor, while the CNN models fail to localize the global shape and only highlight the local features. **Discussion**. In this experiment, we confirm that 1. The commonly used heavy CNNs are weak at global form detection but strong at local shape detection and detailed texture extraction. 2. The GAS module, despite its higher speed (see Appendix A), is more powerful at extracting global features, inspired by the subconscious function of the human pupil. 3. Visualizations confirm the global detection ability of GAS, unlike the other CNNs. In the next experiment, we improve the CNNs by attaching the GAS module to them as an extra parallel stream, and then evaluate the new hybrid model in terms of accuracy and robustness.
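Before that evaluation, the following is a hedged PyTorch sketch of how the two streams could be combined as in Figure 2: a pre-trained Resnet18 acts as the local stream and a small GAS stream (two convolutional feature maps with pooling and batch normalization) processes the smart-filtered image. The layer sizes are illustrative guesses, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class GLP(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        resnet = models.resnet18(weights="IMAGENET1K_V1")
        self.local_stream = nn.Sequential(*list(resnet.children())[:-1])
        for p in self.local_stream.parameters():
            p.requires_grad = False       # step 1: reuse pre-trained weights
        self.gas = nn.Sequential(         # step 2: GAS feature maps
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.BatchNorm2d(32),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(512 + 32, num_classes)  # steps 3-4: concat + classify

    def forward(self, x, x_filtered):
        # x_filtered: output of the entropy-maximising smart filter
        local = self.local_stream(x).flatten(1)        # (B, 512) local features
        global_feat = self.gas(x_filtered).flatten(1)  # (B, 32) global features
        return self.fc(torch.cat([local, global_feat], dim=1))
```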
### GLP model evaluation and explainability In the second experiment, we aimed to evaluate the GLP model's classification accuracy and its robustness against adversarial attacks compared to the mentioned CNNs on the Caltech101 dataset. For comparison, we train a GLP model (called GA-Resnet) using a pre-trained Resnet18 as the \(\mathcal{L}\) stream and our GAS model (GAS-299), pre-trained on the Caltech101 dataset, as the \(\mathcal{G}\) stream; in the same way, we compare our GLP model (called GA-Inception) with the pre-trained InceptionV3. Similar to the previous experiment, we trained the GLP network and fine-tuned pre-trained Resnet18 and Inception-v3 models with an initial learning rate of \(2e^{-3}\) with a decay rate, the SGD optimizer, a batch size of 8, and standard input image sizes of \(299\times 299\) for the InceptionV3, GLP-299, and GA-Inception models and \(224\times 224\) for the Resnet18, GLP-224, and GA-Resnet models. **Caltech101** [Fei-Fei et al., 2004] is a well-known dataset for object classification which consists of \(\sim 9K\) images belonging to 101 classes (e.g., "starfish", "dolphin" and "umbrella") and a background clutter class that contains objects different from the 101 categories. To evaluate our approach, we did not use the background images and split the rest of the 101 classes of images into train (60%), validation (20%), and test (20%) sets. **Adversarial Attacks** For evaluating robustness we apply two common adversarial attacks, the Fast Gradient Sign Method (FGSM) [Goodfellow et al., 2014] and Projected Gradient Descent (PGD) (Kurakin et al., 2018). To employ the attacks, we used CleverHans (Papernot et al., 2016), a Python library for adversarial attacks. Both FGSM and PGD are categorized as white-box attacks, which means that the attacker has access to the model's parameters. The FGSM attack, first introduced by (Goodfellow et al., 2014), is a simple yet effective method that uses the gradients of a CNN to generate adversarial images. As defined in Equ. 3, for an input image \(x\), FGSM computes the loss of the model prediction with respect to the actual class label, then calculates the gradient of the loss with respect to the input image, and uses the sign of the gradient to create the new adversarial image \(Adv(x)\), which maximizes the loss. For a given input image \(x\), the adversarial image is generated as follows: \[Adv(x)=x+\epsilon\cdot\mathrm{sign}(\nabla_{x}\mathcal{J}(\theta,x,y)) \tag{3}\] where \(y\) is the actual input label, \(\epsilon\) is used to ensure the perturbations are small enough not to be detected by human eyes but large enough to fool the CNN, \(\mathcal{J}\) is the model loss function, and \(\theta\) denotes the model parameters.

Figure 4: Comparison of visualization on the Navon dataset global test images. The top row shows input images, and the rest of the rows depict the visualization results of GAS, InceptionV3, and Resnet18, respectively.

The PGD attack generates new adversarial images in an iterative scheme. Following Equ. 4, PGD tries to maximize the loss of the CNN model on an input image \(x\) while finding a perturbation smaller than \(\epsilon\).
Besides defining \(\epsilon\) as the maximum perturbation size, it is required to determine a metric to calculate the distance from the adversarial image \(Adv(x)\) to the input image \(x\). This metric ensures that the output adversarial example is not perceptibly different to humans. Among the various \(L_{p}\) norms, \(p=2\) and \(p=\infty\) are the most commonly used (Carlini and Wagner, 2017). The PGD attack is formulated as follows: \[Adv(x)_{i}=CLIP_{x,\epsilon}\left(x_{i-1}+\lambda\,\mathrm{sign}(\nabla_{x}\mathcal{J}(.))\right);\quad Adv(x)_{0}=x_{original}, \tag{4}\] where \(i\) denotes the iteration index, \(CLIP\) is an operation that clips \(x\) back to the permissible set, \(\lambda\) is the step size and \(\mathcal{J}\) is the model loss function. To evaluate the robustness of the GLP model, we investigated the accuracy changes while increasing the perturbation size (\(\epsilon\)) for the FGSM attack and the maximum perturbation size (\(\epsilon\)) for the PGD attack. Following (Reddy et al., 2020), we reported the accuracy of our GLP model compared to the fine-tuned InceptionV3 and Resnet18 models on various \(\epsilon=[0,0.001,0.005,0.01,0.05,0.1,0.15,0.5]\) for both attacks. We calculated the accuracy as 1 - (naturally misclassified images + adversarially misclassified examples), since we run the adversarial attacks only on images that were not naturally misclassified. All the experiments are repeated for five iterations and the average accuracy is reported. For the PGD attack, we report \(L_{\infty}\) PGD results in our experiments. Also, we set the step size \(\lambda\) to \(\epsilon/3\), since it allows the PGD attack to reach the edge of the permissible set and explore the boundary while keeping a reasonable computation time. **Results**. For training the GLP model, first, the GAS module was trained. The extracted global features were then concatenated with the local features extracted by the pre-trained CNN to classify the images. The results in Table 4 show that the GAS module can consistently improve the performance of the CNN methods significantly. In other words, the results validate the importance of global features in the object classification task.

\begin{table} \begin{tabular}{l|c c c c c c c c} \hline & 0 & 0.001 & 0.005 & 0.01 & 0.05 & 0.1 & 0.15 & 0.5 \\ \hline Resnet18 & 86.41 & 74.72 & 32.89 & 17.56 & 7.55 & 7.89 & 9.45 & 10.02 \\ \hline GAS-Resnet18 & **91.24** & **86.52** & **61.12** & **41.30** & **25.27** & **24.54** & **24.77** & **15.44** \\ \hline InceptionV3 & 90.50 & 88.13 & 72.24 & 61.06 & 42.86 & 41.61 & 42.74 & 36.22 \\ \hline GAS-InceptionV3 & **92.40** & **91.76** & **83.12** & **71.37** & **53.51** & **48.47** & **47.01** & **40.67** \\ \hline \end{tabular} \end{table} Table 2: Top1 Accuracy (%) on FGSM attack for different \(\epsilon\)

\begin{table} \begin{tabular}{l|c c c c c c c c} \hline & 0 & 0.001 & 0.005 & 0.01 & 0.05 & 0.1 & 0.15 & 0.5 \\ \hline Resnet18 & 86.41 & 74.72 & 32.89 & 17.56 & 7.55 & 7.89 & 9.45 & 10.02 \\ \hline GAS-Resnet18 & **91.24** & **86.52** & **61.12** & **41.30** & **25.27** & **24.54** & **24.77** & **15.44** \\ \hline InceptionV3 & 90.50 & 88.13 & 72.24 & 61.06 & 42.86 & 41.61 & 42.74 & 36.22 \\ \hline GAS-InceptionV3 & **92.40** & **91.76** & **83.12** & **71.37** & **53.51** & **48.47** & **47.01** & **40.67** \\ \hline \end{tabular} \end{table} Table 3: Top1 Accuracy (%) on PGD attack for different \(\epsilon\)

\begin{table} \begin{tabular}{l l l l} \hline \hline Resnet18 & GA-Resnet & InceptionV3 & GA-Inception \\ \hline 86.41 & **91.24** & 90.50 & **92.40** \\ \hline \hline \end{tabular} \end{table} Table 4: Top1 Accuracy (%) on the Caltech101 dataset
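The paper runs both attacks through CleverHans; for illustration only, the following are minimal hand-rolled PyTorch sketches of Eqs. (3)-(4), with the step size \(\lambda=\epsilon/3\) taken from the text (clipping to the valid pixel range is omitted for brevity).

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM (Eq. 3)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def pgd_linf(model, x, y, eps, n_iter=10):
    """Iterative L-infinity PGD (Eq. 4) with lambda = eps / 3."""
    lam = eps / 3
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        step = x_adv + lam * x_adv.grad.sign()
        # CLIP back to the eps-ball around the original image
        x_adv = torch.min(torch.max(step, x - eps), x + eps).detach()
    return x_adv
```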
Also, the top-1 accuracy results in Table 2 and Table 3 indicate that, overall, the GLP model is more robust than the CNNs against both FGSM and PGD attacks. Diagrams of the top-5 and top-10 accuracy comparisons are shown in Appendix B for further comparison. **Visualization** In order to visualize the GLP model feature maps with Grad-CAM, since this method requires a convolution layer for extracting feature maps, we provide an extra convolution layer after the GLP network and right before the concatenation operation of the GLP and pre-trained CNN networks. This extra layer equalizes the size of the last convolution layer of GLP with the last convolution layer of the pre-trained network. Thus, after training this new architecture, we are capable of visualizing the feature maps of our GLP model using the Grad-CAM method. Fig. 5 depicts the comparison of Grad-CAM visualizations between the CNNs and the versions modified with the GAS module. The remarkable localization of objects' boundaries by the GAS network, together with the power of the CNN networks, provides more precise object localization for the two-stream networks compared to using a single CNN network. **Discussion** In this experiment, we modified commonly used CNNs with a GAS module to improve them with an extra quick holistic view. According to the results, this module not only improved the accuracy of the models but also made them more resistant to attacks. This module, inspired by the subconscious function of the human pupil as an early stage of perception, helped these CNNs, which are local processors, become more holistic in the same way as humans, making the models more accurate, more robust, and more explainable. ## 4 Conclusion The main goal of this paper is to develop CNNs with an extra quick holistic view. To this end, we first introduce a new module called GAS to extract global features. The main idea behind this module is a smart filtering layer. This layer, inspired by the subconscious function of the human pupil, fades the details using a low-pass filter. The filter parameter should be chosen smartly so that the total entropy of the whole filtered image is at its maximum. This new hybrid model (the GLP model) has both sets of local and global features to detect images correctly.
It is important to note that this article does not include any studies involving human participants conducted by any of the authors. Figure 5: Comparison of visualization with Grad-Cam. In each column, the visualization results and the predicted label of the corresponding model are defined for three input images.
2310.05692
Based on What We Can Control Artificial Neural Networks
How can the stability and efficiency of Artificial Neural Networks (ANNs) be ensured through a systematic analysis method? This paper seeks to address that query. While numerous factors can influence the learning process of ANNs, utilizing knowledge from control systems allows us to analyze its system function and simulate system responses. Although the complexity of most ANNs is extremely high, we still can analyze each factor (e.g., optimiser, hyperparameters) by simulating their system response. This new method also can potentially benefit the development of new optimiser and learning system, especially when discerning which components adversely affect ANNs. Controlling ANNs can benefit from the design of optimiser and learning system, as (1) all optimisers act as controllers, (2) all learning systems operate as control systems with inputs and outputs, and (3) the optimiser should match the learning system. Please find codes: \url{https://github.com/RandomUserName2023/Control-ANNs}.
Cheng Kang, Xujing Yao
2023-10-09T13:09:38Z
http://arxiv.org/abs/2310.05692v1
# Based on What We Can Control Artificial Neural Networks ###### Abstract How can the stability and efficiency of Artificial Neural Networks (ANNs) be ensured through a systematic analysis method? This paper seeks to address that query. While numerous factors can influence the learning process of ANNs, utilizing knowledge from control systems allows us to analyze its system function and simulate system responses. Although the complexity of most ANNs is extremely high, we still can analyze each factor (e.g., optimiser, hyperparameters) by simulating their system response. This new method also can potentially benefit the development of new optimiser and learning system, especially when discerning which components adversely affect ANNs. Controlling ANNs can benefit from the design of optimiser and learning system, as (1) all optimisers act as controllers, (2) all learning systems operate as control systems with inputs and outputs, and (3) the optimiser should match the learning system. Please find codes: [https://github.com/RandomUserName2023/Control-ANNs](https://github.com/RandomUserName2023/Control-ANNs). Optimizer Controller Learning System Control System Fuzzy Logic Filter ## 1 Introduction Controlling artificial neural networks (ANNs) has become an urgent issue in such a dramatically growing domain. ANN models, such as vision models (e.g., CNN [21], VGG19 [33], ResNet50 [11], EfficientNet [34], ViT [7]), language models (e.g., BERT [6], GPT [29], PaLM [4]), and generative models (e.g., GAN [9], VAE [18], Stable Diffusion Models [15, 31]), all require input and output, as they aim to close the gap between their output and the desired output. However, CNN-based vision models generally prefer the SGDM [28] optimiser, and generative models tend to rely on the AdaM optimiser. Using various architectures for CNN-based vision models (e.g., from VGG19 to ResNet50, from GAN to CycleGAN [40], and from CNN to FFNN [13]) yields significantly varied results for classification and generation tasks. Two critical questions arise: **(1)** why some architectures suit their corresponding optimisers, and **(2)** on what basis to propose an advanced ANN architecture and a proper optimiser. Compared to existing era-spanning optimisers, such as SGD [30, 5, 38], SGDM [28, 25], AdaM [17, 3], PID [36], and Gaussian LPF-SGD [2], we propose a FuzzyPID optimiser modified by fuzzy logic to avoid vibration during the PID optimiser's learning process. Referring to Gaussian LPF-SGD (GLPF-SGD), we also propose two filter-processed SGD methods based on the low- and high-frequency parts of the SGD optimiser's learning process: low-pass-filter SGD (LPF-SGD) and high-pass-filter SGD (HPF-SGD). To achieve stable and convergent performance, we simulate the system responses of the above optimisers to analyze their attributes. When using simple and straightforward architectures (without advanced techniques such as BN [16], ReLU [26], pooling [37], and exponential or cosine decay [24]), we found that their one-step system responses are always consistent with their training processes. Therefore, we conclude that every optimiser can actually be considered a controller that optimises the training process. Results using HPF-SGD indicate that the high-frequency part of the SGD learning process significantly benefits learning and classification performance.
To analyze the learning progress of most ANNs (for example, CNNs using the backpropagation algorithm, FFNNs using the forward-forward algorithm, and GANs, generative models that use random noise to generate samples), we assume that these three models can essentially be represented by corresponding control systems. But the difficulty is that, when using different optimisers, especially AdaM, we cannot analyze their stability and convergence analytically, as the complexity is extremely high. Thus, we use MATLAB Simulink to analyze their system responses, as well as their generating responses. Experiment results indicate that advanced architectures and designs of these three ANNs can improve the learning, such as residual connections (RSs) in ResNets, a higher threshold in FFNN, and a cycle loss function in CycleGAN. Based on the knowledge of control systems [27], designing proper optimisers (or controllers) and advanced learning systems can benefit the learning process and help complete relevant tasks (e.g., classification and generation). In this paper, we design two advanced optimisers and analyze three learning systems relying on control system knowledge. The contributions are as follows: **Optimisers are controllers. (1)** The PID and SGDM (PI controller) optimisers perform more stably than the SGD (P controller), AdaM and fuzzyPID optimisers on most CNN models that use residual connections. **(2)** HPF-SGD outperforms SGD and LPF-SGD, which indicates that the high-frequency part is significant during the SGD learning process. **(3)** AdaM is an adaptive optimiser that combines an adaptive filter and an adaptive accumulation part. **Learning systems of most ANNs are control systems. (1)** Most ANNs present performance perfectly consistent with their system responses. **(2)** We can use proper optimisers to control and improve the learning process of most ANNs. **The optimiser should match the learning system. (1)** Vision models based on RSs prefer the SGDM, PID and fuzzyPID optimisers. **(2)** The RS mechanism is similar to AdaM; in particular, SGDM optimises the weights of models along the time dimension, and RS optimises the model along the space dimension. **(3)** AdaM significantly benefits FFNN and GAN, but PID and FuzzyPID benefit CycleGAN most. ## 2 Problem Statement and Preliminaries To make ANNs more effective and adaptive to specific tasks, controlling ANNs has become necessary. We initialize a parameter of a node in the ANN model as a scalar \(\theta_{0}\). After enough updates, the optimal value \(\theta^{*}\) can be obtained. We simplify the parameter update in ANN optimisation as a one-step response (from \(\theta_{0}\) to \(\theta^{*}\)) in the control system. The Laplace transform of \(\theta^{*}\) is \(\theta^{*}/s\). We denote the weight at iteration \(t\) by \(\theta(t)\). The Laplace transform of \(\theta(t)\) is denoted as \(\theta(s)\), and that of the error \(e(t)=\theta^{*}-\theta(t)\) as \(E(s)\): \[E(s)=\frac{\theta^{*}}{s}-\theta(s) \tag{1}\] Considering the collaboration of the backward and forward algorithms, the Laplace transform of the training process is \[U(s)=(\textit{Controller1}+\textit{Controller2}\cdot F(s))\cdot E(s) \tag{2}\] \(F(s)\) is the forward system, which has the capability to affect \(U(s)\) beforehand. In our case, \(u(t)\) corresponds to the update of \(\theta(t)\). _Controller1_ is the parameter update algorithm for the backward process, and _Controller2_ is the parameter update algorithm for the forward process.
Therefore, we replace \(U(s)\) with \(\theta(s)\) and \(E(s)\) with \((\theta^{*}/s)-\theta(s)\). Equation 2 can be rewritten as \[\theta(s)=(\textit{Controller1}+\textit{Controller2}\cdot F(s))\cdot\left(\frac{\theta^{*}}{s}-\theta(s)\right) \tag{3}\] Finally, we simplify the formula of training a model as: \[\theta(s)=\frac{\textit{Controller}}{\textit{Controller}+1}\cdot\frac{\theta^{*}}{s} \tag{4}\] where \(\textit{Controller}=\textit{Controller1}+\textit{Controller2}\cdot F(s)\), and \(\theta^{*}\) denotes the optimal model which we should obtain at the end. Simplifying \(\theta(s)\) further: \[\theta(s)=\textbf{Controller}(\mathbf{s})\cdot\mathbf{C}(\mathbf{s}) \tag{5}\] where \(\mathbf{Controller}(\mathbf{s})=\mathit{Controller}/(\mathit{Controller}+1)\), and \(\mathbf{C}(\mathbf{s})=\theta^{*}/s\).

Figure 1: The schematic structure of training ANN models. C(s) is the controller to train the target ANN model.

Based on the above analysis, and as shown in Figure 1, there are two ways to obtain an optimal \(\theta(s)\) and to make the training process better: **(1)** using a better **Controller** and **(2)** constructing a better training or control system \(\mathbf{C}(\mathbf{s})\).

## 3 Optimisers are Controllers

In this section, we review several widely used optimisers, such as SGD [30; 5; 38], SGDM [28; 25], AdaM [17; 3], the PID optimiser [36] and Gaussian LPF-SGD [2]. In the training process of most ANNs, diverse architectures are used to satisfy various tasks. We analyze the performance of the optimisers in terms of one node of a backpropagation-based ANN model. Please see the proof in Appendix A.

### AdaM Optimiser

AdaM [17] has been used to optimise the learning process of most ANNs, such as GANs, VAEs, Transformer-based models, and their variants. We simplify the learning system of an ANN using AdaM as below: \[\theta(s)=\frac{K_{p}s+K_{i}}{Ms^{2}+(K_{p}-Mln\beta_{1})s+K_{i}}\cdot\frac{\theta^{*}}{s} \tag{6}\] where \(M\) is an adaptation factor that dynamically adjusts the learning during the training process; it can be derived as: \[M=\frac{1}{\sqrt{\frac{\sum_{i=0}^{t}\beta_{2}^{t-i}(\partial L_{t}/\partial\theta_{i})^{2}}{\sum_{i=0}^{t}\beta_{2}^{t-i}}}+\epsilon}\cdot\frac{1}{\sum_{i=0}^{t}\beta_{1}^{t-i}} \tag{7}\] Apart from the adaptation part \(M\), AdaM can be thought of as the combination of SGDM and an adaptive filter with the cutoff frequency \(\omega_{c}=ln(\beta_{1})\).

### Filter Processed SGD Optimiser

The SGD learning process can be filtered with carefully designed filters. GLPF-SGD [2] used a low-pass Gaussian filter to smooth the training process, as well as to actively search for flat regions in the Deep Learning (DL) optimisation landscape. Eventually, we simplify the learning system of an ANN using SGD with filters as below: \[\theta(s)=\frac{Gain\cdot\prod_{i=0}^{m}{(s+h_{i})}}{Gain\cdot\prod_{i=0}^{m}{(s+h_{i})}+\prod_{j=0}^{n}{(s+l_{j})}}\cdot\frac{\theta^{*}}{s} \tag{8}\] where the designed \(Filter\) has order \(n\) for the low-pass part and \(m\) for the high-pass part (\(h_{i}\) are the coefficients of the high-pass part and \(l_{j}\) those of the low-pass part), and \(Gain\) is the gain factor: \[Filter=Gain\cdot\frac{(s+h_{0})(s+h_{1})...(s+h_{m})}{(s+l_{0})(s+l_{1})...(s+l_{n})} \tag{9}\]

### PID and FuzzyPID Optimiser

Based on the PID optimiser [36], we design a PID controller that is adjusted by fuzzy logic to make the training process more stable while keeping the dominant attributes of the models.
For instance, the ability to resist the disturbance of poisoned samples, a fast convergence speed, and competitive performance. There are two key factors which affect the performance of the Fuzzy PID optimiser: (1) the selection of the Fuzzy Universe Range \([-\varphi,\varphi]\) and (2) the Membership Function Type \(f_{m}\). \[\widehat{K}_{\mathrm{P,I,D}}=K_{\mathrm{P,I,D}}+\Delta K_{\mathrm{P,I,D}} \tag{10}\] \[\Delta K_{\mathrm{P,I,D}}=Defuzzy(E(s),Ec(s))\cdot K_{\mathrm{P,I,D}} \tag{11}\] with \(Defuzzy(s)=f_{m}(round(-\varphi,\varphi,s))\), where \(K_{\mathrm{P,I,D}}\) are the default gain coefficients \(K_{\mathrm{P}}\), \(K_{\mathrm{I}}\) and \(K_{\mathrm{D}}\) before modification, and \(\Delta K_{\mathrm{P,I,D}}\) are their fuzzy-logic adjustments. \(E(s)\) is the feedback error, and \(Ec(s)\) is the Laplace transform of the difference between \(e(t)\) and \(e(t-1)\). The transfer function \(\theta(s)\) of this model eventually becomes: \[\theta(s)=\frac{\widehat{K}_{d}s^{2}+\widehat{K}_{p}s+\widehat{K}_{i}}{\widehat{K}_{d}s^{2}+(\widehat{K}_{p}+1)s+\widehat{K}_{i}}\cdot\frac{\theta^{*}}{s} \tag{12}\] where \(\widehat{K}_{p}\), \(\widehat{K}_{i}\) and \(\widehat{K}_{d}\) are processed under the fuzzy logic. By carefully selecting the learning rate \(r\), \(\theta(s)\) becomes a stable system. PID [1] and Fuzzy PID [35] controllers have been used to control feedback systems by exploiting the present, past, and future information of the prediction error. The advantages of a fuzzy PID controller include that it can provide different response levels to non-linear variations in a system. At the same time, a fuzzy PID controller can function as well as a standard PID controller in a system where variation is predictable.

## 4 Control Systems of ANNs

In this section, to systematically analyze the learning process of ANNs, we introduce three main commonly used control systems that we believe can be connected, respectively, to backpropagation-based CNNs, forward-forward-algorithm-based FFNNs, and GANs: **(1)** the backward control system, **(2)** the forward control system using different hyperparameters, and **(3)** the backward-forward control system on different optimisers and hyperparameters. Please see the proof in Appendix B.

### Backward Control System

Traditional CNNs use the backpropagation algorithm to update initialized weights; based on the errors or minibatched errors between real labels and predicted results, optimisers are used to control how the weights should be updated. According to the deduction of the PID optimiser [36], the training process of Deep Neural Networks (DNNs) can be treated as the step response of a control system. However, most commonly used optimisers have their limitations: **(1)** SGD takes a very long time to reach convergence, **(2)** SGDM also suffers from slow convergence, even with the momentum accelerating the training, **(3)** AdaM presents frequent vibration during training because of the merging of momentum and root mean squared propagation (RMSprop), and **(4)** the PID optimiser has better stability and convergence speed, but its training process still vibrates. The proposed fuzzyPID optimiser can keep the learning process more stable, because its gains can be weighted towards particular types of responses, which acts like an adaptive gain setting on a standard PID optimiser.
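For concreteness, below is a rough Python sketch of such an update (our paraphrase of the PID-style update of [36] combined with the fuzzy adjustment of Eqs. (10)-(11); the triangular membership function and the defuzzification rule are assumptions for illustration and may differ from the paper's exact design):

```python
import numpy as np

def fuzzy_gain(k, e, ec, phi=0.02):
    """K-hat = K + Defuzzy(E, Ec) * K (Eqs. 10-11). A triangular membership
    over the fuzzy universe [-phi, phi] is assumed here for illustration."""
    x = np.clip(e * ec, -phi, phi) / phi    # normalise the (E, Ec) product
    return k * (1.0 + (1.0 - abs(x)) * x)   # K + Delta_K

def pid_step(theta, grad, state, lr=0.02, kp=1.0, ki=5.0, kd=100.0):
    """One PID-style update: P acts on the current gradient, I on its
    accumulation, and D on its difference (sketch in the spirit of [36])."""
    state["i"] = state.get("i", 0.0) + grad   # integral term
    d = grad - state.get("g", 0.0)            # derivative term
    state["g"] = grad
    return theta - lr * (kp * grad + ki * state["i"] + kd * d)
```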
Finally, we get the system function \(\theta(s)\) of the ANN, using the FuzzyPID optimiser as an example: \[\theta(s)=\frac{\textit{FuzzyPID}}{\textit{FuzzyPID}+1}\cdot\frac{\theta^{*}}{s} \tag{13}\]

### Forward-Forward Control System

The use of the forward-forward computing algorithm was systematically analyzed for the forward-forward neural network [13], which aims to track features and figure out how ANNs can extract them from the training data. The Forward-Forward algorithm is a greedy multilayer learning procedure inspired by Boltzmann machines [14] and noise contrastive estimation [10]. The idea is to replace the forward-backward passes of backpropagation with two forward passes that operate in exactly the same way as each other, but on different data and with opposite goals. In this system, the positive pass operates on the real data and adjusts the weights to increase the goodness in each hidden layer; the negative pass operates on the negative data and adjusts the weights to reduce the goodness in each hidden layer. According to the training process of FFNN, we get its system function \(\theta(s)\) as below: \[\theta(s)=\left(-(1-\lambda)\frac{\theta^{*}}{s}+\lambda\frac{\theta^{*}}{s}-\left[\theta(s)-\frac{Th}{s}\right]\right)\cdot Controller \tag{14}\] where \(\lambda\in[0,1]\) is the portion of positive samples, and \(Th\) is the given Threshold according to the design [13]. The input should contain negative and positive samples, and by adjusting the Threshold \(Th\), the embedding space can be optimised. In each layer, the weights should be updated only on the corresponding errors, which can be computed by subtracting the Threshold \(Th\). We finally simplify \(\theta(s)\) as: \[\theta(s)=\frac{1}{Controller+1}\cdot\left(\frac{(2\lambda-1)\theta^{*}+Th}{s}\right) \tag{15}\] Because \((2\lambda-1)\theta^{*}+Th\geq 0\), the system of FFNN is stable. Additionally, when \(\lambda=0.5\) and \(Th=1.0\), the learning system of FFNN (the second half of Equation 15) becomes that of a backpropagation-based CNN, as we assume \(\theta^{*}\approx 1.0\). When \(\lambda=0.5\), the optimal result \(\theta^{*}\) no longer appears in the learning system.

### Backward-Forward Control System

GAN is designed to generate samples from Gaussian noise. The performance of a GAN depends on its architecture [39]. The generative network uses random inputs to generate samples, and the discriminative network aims to distinguish generated samples from real ones [9]. We get its \(\theta(s)\) as below: \[\theta_{D}(s)=controller\cdot\theta_{G}(s)\cdot E(s) \tag{16}\] \[\theta_{G}(s)=controller\cdot E(s) \tag{17}\] \[E(s)=\frac{\theta_{D}^{*}}{s}-\theta_{D}(s) \tag{18}\] where \(\theta_{D}(s)\) is the desired Discriminator and \(\theta_{G}(s)\) is the desired Generator. \(E(s)\) is the feedback error, \(\theta_{G}^{*}\) is the optimal solution of the generator, and \(\theta_{D}^{*}\) is the optimal solution of the discriminator. Eventually, we simplify \(\theta_{G}(s)\) and \(\theta_{D}(s)\) as below: \[\theta_{G}(s)=\frac{1}{2}\cdot\left(\frac{\theta_{D}^{*}}{Controller}\pm\sqrt{(\frac{\theta_{D}^{*}}{Controller})^{2}-\frac{4}{s}}\right) \tag{19}\] \[\theta_{D}(s)=\theta_{G}^{2}(s) \tag{20}\] where setting \(\theta_{G}(s)=0\) gives one pole at \(s=0\). When using SGD as the \(controller\), \(\theta_{G}(s)\) is a marginally stable system.
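Closed-loop transfer functions such as Eq. (12) can also be step-tested outside Simulink. A small Python sketch using scipy, with the crisp gains of Section 5.2.1 substituted for the time-varying fuzzy-adjusted \(\widehat{K}\) values (an illustrative simplification, not the paper's simulation setup):

```python
from scipy import signal

# Step response of the closed loop in Eq. (12), using fixed PID gains
# (Kp, Ki, Kd from Section 5.2.1) in place of the fuzzy-adjusted gains.
Kp, Ki, Kd = 1.0, 5.0, 100.0
num = [Kd, Kp, Ki]            # Kd*s^2 + Kp*s + Ki
den = [Kd, Kp + 1.0, Ki]      # Kd*s^2 + (Kp+1)*s + Ki
t, theta = signal.step(signal.TransferFunction(num, den))
print(theta[-1])              # DC gain is Ki/Ki = 1, so it settles at theta* = 1
```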
## 5 Experiments

### 5.1 Simulation

As we believe that the training process of most ANNs can be modeled as the source response of a control system, we use Simulink (MATLAB R2022a) to simulate their responses to different sources. For the classification task, because all models aim to classify different categories, we set a step source as illustrated in [36]. For the sample generation task, to get a clear generation result, we use a sinusoidal source.

### 5.2 Experiment Settings

We train our models on the MNIST [23], CIFAR10 [20], CIFAR100 [20] and TinyImageNet [22] datasets. For an apples-to-apples comparison, our training strategy is mostly adopted from the PID optimiser [36] and FFNN [13]. To optimise the learning process, we **(1)** first use seven optimisers for the classification task on backpropagation-algorithm-based ANNs, **(2)** then choose some important hyperparameters and simulate the learning process of FFNN, and **(3)** lastly, to improve the stability and convergence during the training of GAN, analyze its system response on various optimisers. All models are trained on a single Tesla V100 GPU. All the hyper-parameters are presented in Table 3 of Appendix E.

#### 5.2.1 Backward Control System

We design one neural network using the backpropagation algorithm with \(2\) hidden layers, setting the learning rate \(r\) at \(0.02\) and the fuzzy universe range \([-\varphi,\varphi]\) at \([-0.02,0.02]\). We initialize \(K_{P}\) as \(1\), \(K_{I}\) as \(5\), and \(K_{D}\) as \(100\). We then compare seven different optimisers on the above ANN model: SGD (P controller), SGDM (PI controller), AdaM (PI controller with an adaptive filter), PID (PID controller), LPF-SGD, HPF-SGD and FuzzyPID (fuzzy PID controller). We set the Gaussian membership function as the default membership function. See the filter coefficients in Table 4 of Appendix E. Table 5 of Appendix E lists the hyperparameters used to train on CIFAR10, CIFAR100 and TinyImageNet.

#### 5.2.2 Forward-Forward Control System

Following the forward-forward algorithm [13], we design one forward-forward neural network (FFNN) with \(4\) hidden layers, each containing \(2000\) ReLUs and full connectivity between layers, simultaneously feeding positive and negative samples into the model to teach it to distinguish handwritten digits (MNIST). We also carefully select the proportion of positive and negative samples. The length of every block is \(60\).

#### 5.2.3 Backward-Forward Control System

To demonstrate the relationship between the control system and the learning process of some complex ANNs, we choose the classical GAN [9]. Both the generator and the discriminator comprise \(4\) hidden layers. To verify the influence of different optimisers on the GAN, we employ SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and fuzzyPID to generate handwritten digits (MNIST). We set the learning rate at \(0.0002\) and the total number of epochs at \(200\).

## 6 Results and Analysis

In this section, we present the simulation performance, classification accuracy, error rates and generation results using different optimisers and advanced control systems.

### Backward Control System on CNN

Before doing the classification task, we first simulate the step response of the backpropagation-based ANN for each controller (optimiser). As observed in Figure 2(b) and Figure 2(c), the AdaM optimiser can rapidly converge to the optimum but with an obvious vibration. Although FuzzyPID cannot rapidly converge to the optimum, there is no obvious vibration during the training.
Other optimisers, such as HPF-SGD, SGDM and PID, perform worse than AdaM and FuzzyPID in terms of the training process. In Figure 2(a), the response of the AdaM controller is faster than the others, and FuzzyPID follows it. However, due to the overshoot of AdaM, the stability of the ANN system when using the AdaM controller tends to be lower. This overshoot phenomenon is reflected in the training process of AdaM in Figure 2(b) and Figure 2(c). We summarize the results of classifying MNIST in Table 1. Under the same conditions, the SGD optimiser reaches a testing accuracy of \(91.98\%\), while most other optimisers reach above \(97\%\). FuzzyPID achieves the highest training and testing accuracy rates using the Gaussian membership function. In Figure 2, considering the rise time, the settling time and the overshoot, the fuzzy optimiser outperforms the other optimisers. A better optimiser (or controller), one that inherits advanced knowledge and is effectively designed, is beneficial for the classification performance.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **optimiser** & SGD & SGDM & Adam & PID & LPF-SGD & HPF-SGD & FuzzyPID \\ \hline **Training**\(Accuracy\) & \(91.48_{\pm 0.03}\) & \(97.78_{\pm 0.00}\) & \(99.46_{\pm 0.02}\) & \(99.45_{\pm 0.01}\) & \(11.03_{\pm 0.01}\) & \(93.35_{\pm 0.02}\) & \(99.73_{\pm 0.00}\) \\ **Testing**\(Accuracy\) & \(91.98_{\pm 0.05}\) & \(97.11_{\pm 0.02}\) & \(97.81_{\pm 0.10}\) & \(98.18_{\pm 0.02}\) & \(10.51_{\pm 0.03}\) & \(93.45_{\pm 0.09}\) & \(98.24_{\pm 0.10}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The results of the ANN based on the backpropagation algorithm on MNIST data. Using 10-fold cross-validation, the average and standard variance results are shown below.

Figure 2: The step response, training curve and loss curve using different controllers, such as SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and FuzzyPID optimisers.

### Forward-Forward Control System on FFNN

We also simulate the control system of the proposed FFNN and compare its system response on different hyperparameters. In Figure 3, the SGD controller still cannot reach the target, and the AdaM controller approaches the target fastest. However, the SGDM controller lags behind PID in terms of the step response. Because of its low-frequency part, LPF-SGD climbs more slowly than HPF-SGD. Although the differential coefficient D of the PID optimiser can help reduce overshoot, overcome oscillation and reduce the adjustment time, its performance cannot catch up with AdaM. As shown in Table 2, AdaM outperforms the other optimisers in terms of error rates, and the performance of these seven optimisers echoes Figure 2(a). A higher portion of positive samples can contribute to the classification, and a higher \(Threshold\) can benefit more. For the step response in Figure 3(c), although AdaM (\(Threshold=0.5\), \(portion\) of positive samples is \(70\%\), and \(portion\) of negative samples is \(30\%\)) and AdaM (\(Threshold=0.5\), \(portion\) of positive samples is \(50\%\), and \(portion\) of negative samples is \(50\%\)) rise fastest, the final results in Table 2 show that AdaM (\(Threshold=5.0\), \(portion\) of positive samples is \(50\%\), and \(portion\) of negative samples is \(50\%\)) achieves a lower error rate.

### Backward-Forward Control System on GAN

For the sample generation task, we also simulate the system response of the GAN for each controller (optimiser) and summarize the results in Figure 5.
Apart from AdaM, LPF-SGD and HPF-SGD, all controllers show obvious noise; interestingly, this phenomenon can also be seen in Figure 4. The MNIST digits generated with the AdaM optimiser have no noise and can be easily recognized, and, not surprisingly, the source response of AdaM in Figure 5 finally converges. Figure 4 and Figure 5 mutually echo each other. Eventually, when using the classical GAN to generate samples, AdaM should be the best optimiser for the weight updates. With the other optimisers, the generated MNIST samples sometimes cannot be recognized, and the GAN generates only identical samples. One reason for this can be observed in Figure 5, where the sinusoidal signals generated by four of the controllers (PID, LPF-SGD, HPF-SGD and FuzzyPID) drift up and down, potentially leading to unstable and repetitive generation output.

## 7 Discussion

### Why are various optimisers controllers during the learning process?

Under the same training conditions (e.g., the same architecture and hyperparameters), matching optimisers can tackle specific tasks. Vision models using residual connections prefer the SGDM, HPF-SGD and PID optimisers (see Figure 14 of Appendix F). There is an obvious overshoot in the step response of the AdaM controller (see Figure 10), and a similar vibration can be found in the testing curve of Figure 14 of Appendix F. The classification task always needs a rapid response to save learning resources, but if stability and robustness are the priorities, we should choose another optimiser, such as PID or FuzzyPID; the latter, under fuzzy logic adjustment, demonstrates a superior step response (see Figure 2(a)). Moreover, for the generation task, the GAN is best matched with the AdaM optimiser. We found that the adaptive part of AdaM can rapidly adjust the learning process. However, other optimisers, such as SGD, SGDM and PID, generate samples with obvious noise and output identical samples, making the generated samples hard to recognize (see Figure 4 and Figure 5). For particular needs (e.g., image-to-image translation), CycleGAN, an advanced generation system, was proposed to generate samples from one data pool and to improve domain adaptation on the target data pool. Coincidentally, we found that CycleGAN has a preference for the PID optimiser. Therefore, it is necessary to design a stable and task-suited optimiser for a specifically designed learning system. However, given that the system functions of most learning systems are extremely complex, simulating their system responses has become a viable way to analyze them. We conclude that, to achieve the best performance, every ANN should use the proper optimiser according to its learning system.

### How can various learning systems be analyzed?

Numerous advanced components have enhanced ANNs. Conducting a quantitative analysis of each of them can pave the way for the development of new optimisers and learning systems. For the classification task using a backward control system, when analyzing a single component at one node of the learning system, the rise time, peak time, overshoot (vibration), and settling time [36; 27] can serve as metrics to evaluate the effect of that component on the learning system. To visualize the learning process, the FFNN was proposed by [13], and, effectively, this forward-forward-based training system can also achieve competitive performance compared to backpropagation-based models.
The \(Threshold\), one hyperparameter, can significantly benefit the convergence speed, as it has the effect of a proportional adjustment (like a stronger P term in a PID controller). The portion of positive samples only slightly affects the classification result, because the proportional adjustment is too weak in the FFNN learning system (see Equation 15). Additionally, the system response to various sources can also serve as a metric to evaluate the learning system. We conclude that there are two main branches to improve ANNs: **(1)** develop a proper optimiser; **(2)** design a better learning system. On the one hand, for example, the system response of the GAN has high-frequency noise and cannot converge using the SGD, SGDM and PID optimisers (see Figure 5). One possible solution is adding an adaptive filter; thus, AdaM outperforms the other optimisers at generating samples (see Figure 4). The overshoot of AdaM and SGDM during the learning process of classification tasks can accelerate the convergence, but their side effect of vibration brings us to PID and FuzzyPID. Therefore, developing a task-matched optimiser according to the system response determines the final performance of ANNs. On the other hand, to satisfy various task requirements, learning systems should also become stable and fast. For example, \(\theta_{G}(s)\) has two solutions, as derived from Equation (19); using an extra generator is one possible way to offset this side effect. That can explain why other advanced GANs using multiple generators (e.g., CycleGAN) can generate higher-quality samples than the classical GAN.

## 8 Limitations

Although we systematically showed that **(1)** the optimiser acts as a controller and **(2)** the learning system functions as a control system, in this preliminary work there are three obvious limitations: **a.** we cannot analyze larger models due to the complexity introduced by advanced techniques; **b.** the system response of some ANNs (e.g., FFNN) may not perfectly align with their real performance; **c.** we cannot always derive the solutions of complex learning systems.

## 9 Conclusion

In this study, we presented a comprehensive empirical study investigating the connection between control systems and the various learning systems of ANNs. We provided a systematic analysis method for several ANNs, such as CNN, FFNN, GAN, CycleGAN, and ResNet, on several optimisers: SGD, SGDM, AdaM, PID, LPF-SGD, HPF-SGD and FuzzyPID. By analyzing the system responses of ANNs, we explained the rationale behind choosing appropriate optimisers for different ANNs. Moreover, designing better learning systems under the use of a proper optimiser can satisfy task requirements. In future work, we intend to delve into the control systems of other ANNs, such as Variational Autoencoders (VAEs), diffusion models, and Transformer-based models, as well as the development of optimisers, as we believe the principles of control systems can guide improvements in all ANNs and optimisers.
2303.05972
Classifying the evolution of COVID-19 severity on patients with combined dynamic Bayesian networks and neural networks
When we face patients arriving at a hospital suffering from the effects of some illness, one of the main problems we can encounter is evaluating whether or not said patients are going to require intensive care in the near future. This intensive care requires allotting valuable and scarce resources, and knowing beforehand the severity of a patient's illness can improve both its treatment and the organization of resources. We illustrate this issue in a dataset consisting of Spanish COVID-19 patients from the sixth epidemic wave, where we label patients as critical when they either had to enter the intensive care unit or passed away. We then combine the use of dynamic Bayesian networks, to forecast the vital signs and the blood analysis results of patients over the next 40 hours, and neural networks, to evaluate the severity of a patient's disease in that interval of time. Our empirical results show that the transposition of the current state of a patient to future values with the DBN for its subsequent use in classification obtains better accuracy and g-mean scores than a direct application of a classifier.
David Quesada, Pedro Larrañaga, Concha Bielza
2023-03-10T15:05:32Z
http://arxiv.org/abs/2303.05972v1
Classifying the evolution of COVID-19 severity on patients with combined dynamic Bayesian networks and neural networks ###### Abstract When we face patients arriving at a hospital suffering from the effects of some illness, one of the main problems we can encounter is evaluating whether or not said patients are going to require intensive care in the near future. This intensive care requires allotting valuable and scarce resources, and knowing beforehand the severity of a patient's illness can improve both its treatment and the organization of resources. We illustrate this issue in a dataset consisting of Spanish COVID-19 patients from the sixth epidemic wave, where we label patients as critical when they either had to enter the intensive care unit or passed away. We then combine the use of dynamic Bayesian networks, to forecast the vital signs and the blood analysis results of patients over the next 40 hours, and neural networks, to evaluate the severity of a patient's disease in that interval of time. Our empirical results show that the transposition of the current state of a patient to future values with the DBN for its subsequent use in classification obtains better accuracy and g-mean scores than a direct application of a classifier. keywords: Dynamic Bayesian networks, Neural networks, Forecasting, Classification, COVID-19

## 1 Introduction

Throughout the COVID-19 pandemic, healthcare systems all around the world have suffered staggering pressure due to the sheer number of infected patients that arrived at medical centers. The nature of this pandemic was such that patients could range from completely asymptomatic to presenting critical respiratory issues. As such, and given that the amount of resources in medical centers is limited, it was a crucial task to discern whether a patient presented symptomatology that could devolve into a critical condition or would remain a mild affliction. The issue of predicting the clinical outcome of COVID-19 patients has seen much interest in recent years. Some authors opted for discerning the severity of the illness depending on certain comorbidities like heart failure [1], neurodegenerative diseases [2], cardiovascular diseases [3], or chronic pulmonary diseases [4]. These studies have shown that comorbidities related to COVID-19 increase the risk of death of a patient. As such, many efforts are also put into preprocessing clinical data and selecting an appropriate set of variables that define the effect of the illness. From the point of view of predicting the outcome from data, many machine learning approaches have been tested in the literature. Some authors opted for performing a statistical analysis and applying logistic regression for classifying mortality [2; 5; 6; 7]. Another popular approach consists of training simple perceptron or multilayer neural network models to approximate a function that relates the variables in the system and classifies patient instances [8; 9; 10; 11]. Tree-based models like random forests [10; 12; 13; 14] or XGBoost [10; 15; 16; 17] are also some of the most popular and best performing tools for this task. In the case of interpretable models, Bayesian networks have also been applied to predicting the severity of COVID-19 on patients while also trying to gain some insight on the problem at hand [18; 19]. Another possible approach is to view the problem as a time series forecasting issue.
Each patient that arrives at a hospital has their vital signs measured and a blood analysis performed. Afterwards, if the patient is not discharged and requires further care, new recordings are performed on a semi-regular basis. This generates time series data for each patient, where measurements are taken every several hours until either the patient overcomes the illness or passes away. In this scenario, time series models can be applied to forecast the state of a patient and predict whether or not they will be suffering from severe symptoms in the near future. This approach has also been explored in the literature with models like dynamic Bayesian networks [20], recurrent neural networks [21] and dynamic Markov processes [22]. In this work, we took a hybrid approach between static and dynamic models. We used data recovered from patients infected during the sixth Spanish COVID-19 wave who arrived at the Fundacion Jimenez Diaz hospitals in Madrid. After preprocessing this data and selecting an appropriate variable set, we trained hybrid models between dynamic Bayesian networks (DBN) as forecasting models and neural networks (NN) as classifier models. The main idea of our proposal is to obtain the first vital signs and blood analysis from a patient and then forecast these variables with the DBN model up to a certain point in the near future. Afterwards, we can use the classifier model to identify the forecasted values as critical or not critical. This procedure can help identify whether a patient who has just arrived at triage in a medical center is going to worsen significantly in the following days. The rest of this paper is organized as follows. Section 2 gives some background on dynamic Bayesian network models. Section 3 explains the architecture of the hybrid model with the neural network, where this classifying model is interchangeable with any other static classifier. Section 4 shows the experimental results of the tested models. Finally, Section 5 gives some conclusions and introduces future work.

## 2 Dynamic Bayesian networks

Dynamic Bayesian networks [23] are a type of probabilistic graphical model that represents conditional dependence relationships between variables using a directed acyclic graph. They extend the framework of Bayesian networks to the case of time series. Similarly to static BNs, each of the nodes in the graph represents a variable in the original system and the arcs represent their probabilistic relationships. In the case of DBNs, time is discretized into time slices that represent consecutive instants. This way, we have a representation of all the variables in our system across time. Let \(\mathbf{X}^{t}=\{X_{0}^{t},X_{1}^{t},\ldots,X_{n}^{t}\}\) be the set of all the variables in the time slice \(t\). Then, we can define the joint probability distribution of the network up to some horizon \(T\) as: \[p(\mathbf{X}^{0},\ldots,\mathbf{X}^{T})\equiv p(\mathbf{X}^{0:T})=p(\mathbf{X}^{0})\prod_{t=0}^{T-1}p(\mathbf{X}^{t+1}|\mathbf{X}^{0:t}), \tag{1}\] where \(p(\mathbf{X})=\prod_{i=0}^{n}p(X_{i}|\mathbf{Pa}_{i})\) represents the probability distribution of a set of nodes \(\mathbf{X}\) and \(\mathbf{Pa}_{i}\) represents the set of parent nodes of \(X_{i}\) in the graph. However, in Equation 1 all time slices \(\mathbf{X}^{0:T}\) have to be taken into account to calculate the joint probability distribution.
In this scenario, it is very common to assume that the future state of the system is independent of the past given the present. A DBN that follows this assumption is called a first-order Markovian network. This implies that only the last instant is used to calculate the next one, and it simplifies the calculation of the joint probability distribution greatly: \[p(\mathbf{X}^{0:T})=p(\mathbf{X}^{0})\prod_{t=0}^{T-1}p(\mathbf{X}^{t+1}|\mathbf{X}^{t}). \tag{2}\] An example of the structure of a DBN with Markovian order 1 is shown in Fig. (1).

Figure 1: Example of the structure of a first-order Markovian DBN with two time slices \(t_{0}\) and \(t_{1}\). To calculate the future values in \(t_{1}\), we would only need to know the current values of our variables in \(t_{0}\).

One advantage that DBN models present is that they do not need to be trained with time series of constant length. Due to the Markovian order assumption in Equation (2), we only need to recover several batches of two consecutive instants from the original dataset to learn the structure and parameters of the network. We can use several time series with different lengths recovered from the same stochastic process to train a DBN model from data. The reason for this is that we only need the values of the variables inside the temporal window defined by the Markovian order to train our model, so the total length of the time series is not relevant in the learning phase. This helps when applying this kind of model to real-world problems, where the length of the data from processes can vary depending on circumstances outside the system.

## 3 Combining DBNs and static classifiers

When we predict COVID-19 severity on patients in the near future, we face several issues. On one hand, we only have the data of their vital signs and blood analysis from when the patient first arrives at the hospital. As we are interested in their state in the following days, we need to forecast the evolution of these variables over time. On the other hand, we need a mechanism that, given the state vector of a patient, identifies whether they are in a critical state or not.

### Forecasting the state vector

When a patient afflicted with COVID-19 stays in intensive care for a prolonged period of time, they are monitored and new readings of their vital signs and blood analysis are recorded on a semi-regular basis of several hours. All the variables in these instances form a state vector \(\mathbf{S}=[s_{0},s_{1},\ldots,s_{n}]\) at each point in time, and the final data recovered from a patient \(k\) is a vector of instances \(\mathbf{P}_{k}=\left[\mathbf{S}^{0},\mathbf{S}^{1},\ldots,\mathbf{S}^{T}\right]\) ordered in time from the oldest vital sign readings and blood analysis to the most recent ones. When we combine several patients' data, it generates a time series dataset that can be used to train a time series forecasting model. It is worth noting that the length \(T\) of the data from each patient depends on the time they spent in the hospital. If a patient is discharged with only one vital sign reading and blood analysis, then we do not have data with a time component. In this situation, this patient cannot be used for training our temporal model. Given that in our case all the variables in a state vector \(\mathbf{S}^{t}\) are continuous, we will use a Gaussian DBN to model the dependencies and to perform forecasting. A DBN model can help us gain some insight into which variables have a greater impact on the evolution of a patient.
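As an aside on implementation, building the training set for such a first-order Markovian DBN from variable-length patient series reduces to stacking pairs of consecutive instants. A minimal Python sketch of this data preparation (our illustration, not the dbnR code used later in the paper):

```python
import numpy as np

def two_slice_pairs(patients):
    """Stack (X_t, X_{t+1}) training pairs from variable-length series (each
    of shape [T, n]); series with a single reading yield no pair and are
    skipped, as they carry no temporal information."""
    x_t, x_t1 = [], []
    for series in patients:
        if len(series) < 2:
            continue
        x_t.append(series[:-1])          # instants t = 0 .. T-2
        x_t1.append(series[1:])          # instants t = 1 .. T-1
    return np.vstack(x_t), np.vstack(x_t1)

# Example: patients with 3, 2 and 1 readings of n = 4 variables each.
x_t, x_t1 = two_slice_pairs([np.random.randn(t, 4) for t in (3, 2, 1)])
print(x_t.shape, x_t1.shape)             # (3, 4) (3, 4): three training pairs
```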
Furthermore, the ability of DBNs to be trained with time series of different lengths after deciding a Markovian order is also relevant in this problem, given that the number of instances per patient varies greatly. By setting a Markovian order of 1, we will be able to use the data from all patients except the aforementioned ones with a single reading, where no temporal data at all can be used. After training the DBN model, we can use it to forecast the state vector of a patient up to a certain point in the future. This forecast represents an estimate of the evolution that the patient will undergo, and it can be used to assess whether it will lead to severe symptoms or not. This process effectively gives an estimate of the future vital signs and blood analysis of a patient without spending additional resources and time on them.

### Classifying critical values

The task of evaluating whether a patient is in a critical state of the COVID-19 infection has been performed in the literature mainly through some kind of medical score [24] or by labelling instances based on some external indicator, for example, being transferred to the intensive care unit. If we obtain a labelled dataset of patients through any of these methods, we can then take a machine learning approach by training classifier models that identify whether a patient is in a critical state given their state vector \(\mathbf{S}\). If we combine this approach with the forecasting of the state vector, we get a hybrid model between static classifiers and time series models that is capable of evaluating the present and near-future condition of a person suffering from COVID-19. When a patient arrives at a hospital and gets their vital signs and blood analysis recorded, we obtain the state vector \(\mathbf{S}^{0}\) of the very first instant of time. Then we can feed \(\mathbf{S}^{0}\) to a trained classifier model to evaluate whether this patient is already in a critical state or not. If this is not the case, we can then use \(\mathbf{S}^{0}\) as the starting point for our DBN to perform forecasting. This returns the values of \(\mathbf{S}^{1},\mathbf{S}^{2},\ldots,\mathbf{S}^{t}\) up to a certain point \(t\) in time. All these state vectors can in turn be classified to evaluate the expected severity of the symptoms in that patient. With this method, we can see if a patient is expected to end up suffering from critical COVID-19 and approximately when this situation will occur. To illustrate this whole process, a schematic representation of this framework can be seen in Fig. (2). Our proposed framework supports any kind of classifier that is able to produce a discrete prediction given a continuous state vector \(\mathbf{S}^{t}\). We used a modular implementation where the classifier can be a support vector machine, an XGBoost model, a neural network or a Bayesian classifier. All these classifiers have seen use in the literature and could find applications where one is more effective than the others. Due to this architecture, any other classifier model could potentially be introduced as a new module if the need arises. In our case, the architecture that was most effective was the combination with a neural network. The network had an internal structure of 5 hidden dense layers with 64, 32, 16, 16 and 8 neurons, respectively. They all used ReLU activation functions and had their weights initialized with the identity. The last layer used a single neuron with a sigmoid activation function for binary classification.
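The paper trains this classifier through the R interface to Keras; a minimal Python sketch of the described architecture is given below (the optimiser and loss used for compilation are not stated in the text and are assumptions here):

```python
import tensorflow as tf

def build_classifier(n_features):
    """Classifier described above: 5 hidden dense layers (64, 32, 16, 16, 8),
    ReLU activations, identity weight initialisation, sigmoid output."""
    return tf.keras.Sequential(
        [tf.keras.Input(shape=(n_features,))]
        + [tf.keras.layers.Dense(units, activation="relu",
                                 kernel_initializer=tf.keras.initializers.Identity())
           for units in (64, 32, 16, 16, 8)]
        + [tf.keras.layers.Dense(1, activation="sigmoid")]
    )

model = build_classifier(n_features=62)                      # 62 variables kept after preprocessing
model.compile(optimizer="adam", loss="binary_crossentropy")  # assumed settings
```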
A result greater than 0.5 is equated to predicting a critical status for a patient, and a result less than or equal to 0.5 predicts a non-critical scenario. A representation of this structure can be seen in Fig. (3).

Figure 3: Structure of the neural network model used in the experiments.

## 4 Experimental results

For our experiments, we used a dataset consisting of anonymous data recovered from 4 different Spanish hospitals of the Fundacion Jimenez Diaz in Madrid. After preprocessing it, we used this data to fit our proposed model and evaluate its capabilities to predict the future critical status of patients suffering from COVID-19 infections.

Figure 2: Schematic representation of the classifier-DBN framework. After obtaining a state vector \(\mathbf{S}_{0}\) from a patient, we can use it to forecast the next \(t\) state vectors with the DBN model and check if they are critical with our static classifier.

### Preprocessing

Our raw dataset covers the period from the 27th of October 2021 to the 23rd of March 2022. In total, there are 21.032 rows with incomplete data from 15.858 patients and 532 variables, most of which present missing values for the majority of patients. This is a common occurrence in a medical dataset of these characteristics, given that the same tests are not performed on all patients and some of the results have to be recorded manually. This data covered patients that had confirmed cases of COVID-19 via a positive PCR test. The consecutive rows in the dataset that correspond to the same patient are ordered in time, forming time series sequences. However, the frequency at which the instances were recorded is uneven. This is due to the fact that performing blood analyses on patients and obtaining the results does not take a fixed amount of time and is not always performed at fixed intervals. To tackle this issue, we established a period of 4 hours between each row and formed batches of instances where missing data was filled with the average values of the rest of the instances in the same batch. This 4-hour period was chosen because new tests were usually performed roughly every 4 hours on average in our dataset. From the 21.032 rows, 13.971 were from patients that appear only in a single instance; the vast majority of these were discharged from the hospital afterwards due to mild symptomatology, and only 48 of these patients passed away. This data cannot be used to train the DBN models, given that a single register is not enough to form a time series sequence. However, it will be used to train the classifier models. From the remaining patients with more than a single instance, the majority have either two or three rows of recorded values. To illustrate this, we show a histogram with the distribution of the number of instances per patient in Fig. (4). A higher number of instances indicates a longer stay in the hospital and, as such, a more severe case of COVID-19, which is far less common than a mild case.

Figure 4: Histogram with the number of instances per patient greater than 1 in the dataset. Inside the last bracket we have grouped all the patients with 10 or more instances.

Regarding the 532 variables in our dataset, most of them correspond to specific values in uncommon tests and analyses, and they have over 70% missing values across all instances. In our case, we have opted for reducing the number of variables to only those that are obtained from the vital signs of a patient, like their body temperature and heart rate, their descriptive characteristics like age, gender and body mass index, and the variables from
a regular blood analysis, like the albumin and D-dimer values. All these variables are routinely taken when a patient arrives at urgent care, and obtaining them does not pose a severe expense of resources. This reduced the number of variables to 62, and from those we chose to retain the vital sign readings and the descriptive characteristics, while allowing feature subset selection on the blood analysis related variables. This subset selection was performed via random forest importance for classification on our objective variable, which is whether or not a patient was admitted to the intensive care unit or passed away. This is what defines our critical cases of COVID-19, which are only 18.8% of the total number of patients in our dataset.

### Experiment results

In this section we show the experimental results obtained with different combinations of classifier-DBN models. For our experiments, we used an XGBoost, a support vector machine, a neural network and a Bayesian classifier. In particular, this Bayesian classifier is a tree-augmented naive Bayes built following the hill climbing super-parent (HCSP) algorithm [25]. The whole project was coded in R and is publicly available online in a GitHub repository1. The dataset used is not made public due to privacy and legal reasons. Footnote 1: [https://github.com/dkesada/Class-DBN](https://github.com/dkesada/Class-DBN) Regarding the software we used in our experiments, the DBN models were trained using our own public package "dbnR"2, the XGBoost models were trained with the "xgboost" package [26], the support vector machines were trained with the "e1071" package [27], the neural networks with the "keras" R interface [28] and the Bayesian classifiers with the "bnclassify" package [29]. The parameters of each classifier were optimized using differential evolution with the R package "DEoptim" [30] based on the geometric mean (g-mean) [31] of the models. This metric is defined as \(g_{m}=\sqrt{recall*specificity}\), which uses all the values in the resulting confusion matrix when calculating the final score. Using both the recall and the specificity of the predictions ensures that the imbalance between critical and non-critical cases is taken into account when optimizing the parameters. We do not want a model optimized solely on accuracy, because that would lead to models that only predict the majority class of non-critical for all patients. Footnote 2: [https://github.com/dkesada/dbnR](https://github.com/dkesada/dbnR) To alleviate the issue of imbalanced data, we also applied SMOTE oversampling with the "DMwR" package [32; 33] to synthetically generate instances of both critical and non-critical cases. This is a common practice that creates synthetic data to offset the difference between the number of instances of the majority and minority classes. In our case, we use SMOTE to create modified datasets for training our classifiers. This helps the models avoid getting stuck on predicting the majority non-critical class for almost all instances. To test our hybrid models, we take the state vector of a patient in an instance and forecast up to 10 instants into the future with the DBN model. Then, we use the classifier model to classify each of these forecasts as critical or not, and we compare the predicted label with the true label of the instance.
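A minimal Python sketch of this test loop and of the g-mean metric used for tuning (`dbn_forecast` and `classifier` are hypothetical stand-ins for the trained DBN and classifier models):

```python
import numpy as np

def severity_labels(s0, dbn_forecast, classifier, horizon=10):
    """Forecast the state vector `horizon` instants (4 h each) ahead and
    classify every forecast as critical (1) or non-critical (0)."""
    labels, s = [], s0
    for _ in range(horizon):
        s = dbn_forecast(s)                      # S^{t+1} from S^t (order 1)
        labels.append(int(classifier(s) > 0.5))  # critical if output > 0.5
    return labels                                # severity over the next 40 h

def g_mean(recall, specificity):
    """Metric optimised with DEoptim: g_m = sqrt(recall * specificity)."""
    return np.sqrt(recall * specificity)
```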
Given that each instance is separated from the next one by 4 hours, in total we forecast 40 hours into the future with the DBN model. With this method, we will be able to see the behaviour of the classifiers and the changes in accuracy and g-mean as we use state vectors from further into the future. The average results obtained across all forecasts of the models can be seen in Table 1. The results in Table 1 show that, on average, the most accurate model is the neural network in both accuracy and g-mean. The performance of the SVM and the HCSP is very similar in terms of accuracy, but the difference in the g-mean score of the SVM shows that it is better able to discern the more uncommon critical instances. For this particular case, although the XGBoost model is very popular in the literature, it obtains worse overall results than the rest of the classifiers. In our experiments, due to the imbalance between classes, we had to find a compromise between the global accuracy and the accuracy of the minority class. If left unchecked, the models would become biased towards the majority class and predict almost unanimously every single instance as non-critical, invalidating the use of the model while obtaining accuracies close to 90%. By using the g-mean as the optimization metric in combination with the SMOTE oversampling, we were able to alleviate this problem.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **Accuracy** & **g-mean** & **Train (h)** & **Exec (s)** \\ \hline XGBoost & 0.698 & 0.455 & 1.950 & 9.634 \\ SVM & 0.735 & 0.522 & 1.145 & 9.654 \\ NN & 0.771 & 0.541 & 1.384 & 9.863 \\ HCSP & 0.736 & 0.468 & 1.046 & 9.878 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean results in terms of the accuracy, g-mean score, training and execution time of the models on average for all the experiments. It is worth noting that training time includes the optimization of parameters, which involves the creation of multiple models to evaluate different configurations.

A high accuracy on the majority class of non-critical patients can help reduce the oversaturation of ICU resources, given that all models can evaluate whether or not a patient will reach a critical state of the COVID-19 infection in less than 10 seconds. On the other hand, being able to discern the few critical cases that arise is also needed to help doctors determine which patients need more specific care to try to reduce the mortality rate. On the topic of training time, training and tuning the models takes on average between one and two hours. Given that these kinds of models should not need to be retrained until some significant issue happens with the disease, like a new variant or new specific symptoms appearing on patients that differ from the training data used, this training time should be reasonable, as it only needs to be incurred once. Given that the model with the NN obtains the best average results, we show in Fig. (5) the details of its performance depending on the time horizon. The first instant at 0 hours is equivalent to performing classification with the NN model directly on the state vector obtained from the patient. From there, we forecast this state vector up to 40 hours with the DBN model and use the results as input for the NN model.

Figure 5: Classification results of the neural network model as we feed it state vectors further ahead in time with the DBN model. The classification performance of the neural network improves monotonically by combining it with the DBN forecasts.
We can see that the NN model performs considerably better if we pair it with the DBN to classify the forecasted state of the patients rather than their initial state. As we forecast the state vector of patients further into the future, the NN improves its classification performance monotonically. In addition, DBNs perform multivariate inference and are interpretable models. This allows them to offer doctors the forecasted values of any variable in the system, as well as the underlying relationships between the rest of the variables that led to those results. In the case of relevant values like the oxygen saturation of a patient, which is a good indicator of the state of a patient suffering from respiratory issues, we show an example of the relationships present in the DBN model in Fig. (6).

Figure 6: Subset of relevant variables for the forecasting of maximum oxygen saturation (light blue) in the DBN model. The initial and maximum oxygen saturation variables from the last instant (in red) affect the calculation of the next maximum oxygen saturation value. Other variables like body temperature, systolic and diastolic blood pressures and heart rate also influence this value in the forecast.

This subgraph shows the variables directly related to the maximum oxygen saturation registered in a 4-hour interval. We can see the previous maximum value of oxygen saturation from the last instant, which is to be expected due to the autoregressive component of time series. On a similar note, the initial values of oxygen saturation registered help the model define the range of the maximum value: a lower initial oxygen saturation will likely lead to lower maximums, and vice versa. Additionally, we also find that body temperature leading to fever, the maximum diastolic blood pressure and the minimum heart rate are direct indicators of maximum oxygen saturation and play an important role in its forecasting. Lastly, the minimum systolic and diastolic blood pressures are also affected. A lower level of oxygen saturation will cause higher blood pressure, increasing both minimums. This situation is reflected in the fact that these values are child nodes that depend on the current value of oxygen saturation.

## 5 Conclusions

In this work we have presented a hybrid model between DBNs and static classifiers where the state vector recovered from a patient suffering from COVID-19 is used to forecast their future state. This information is then used to assess how severe their infection will be over the following 40 hours based on their current vital signs and blood analysis. This method shows the best performance when combining DBN and NN models. While the NN is capable of discerning whether or not a patient will reach a critical state with better accuracy than the other classifiers, the DBN adds an explainable layer regarding the variables extracted from the patient. This model could help doctors decide whether or not a patient needs further specialized care and allow for a better organization of the resources available in medical centers. Additionally, we offer the code of all our models online for future reference and use. For future work, this model could be applied in different industrial environments that require forecasting time series and classifying the state of the system. The combination of a generative model that forecasts the state of a system with a classifier model that evaluates this expected future state is a promising framework that could prove useful in applications like remaining useful life estimation.
Another possible improvement could be the use of the DBN model as a simulator, introducing interventions in the forecasting in order to see the effects that possible actions can have on the expected future. In the medical case, the effects of specific medications or treatments could potentially be reflected in the DBN predictions, and in other industrial cases this could lead to optimizing the expected future based on possible interventions in the initial state.

## Acknowledgements

This work was partially supported by the Madrid Autonomous Region through the "MadridDataSpace4Pandemics-CM" (REACT-EU) project. We are also grateful to the Fundacion Jimenez Diaz for providing the data for this work and to the doctors Sara Heili and Lucia Llanos for their valuable insights.
2302.06405
An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks
Binary Neural Networks (BNNs) are increasingly preferred over full-precision Convolutional Neural Networks (CNNs) to reduce the memory and computational requirements of inference processing with minimal accuracy drop. BNNs convert CNN model parameters to 1-bit precision, allowing inference of BNNs to be processed with simple XNOR and bitcount operations. This makes BNNs amenable to hardware acceleration. Several photonic integrated circuits (PICs) based BNN accelerators have been proposed. Although these accelerators provide remarkably higher throughput and energy efficiency than their electronic counterparts, the utilized XNOR and bitcount circuits in these accelerators need to be further enhanced to improve their area, energy efficiency, and throughput. This paper aims to fulfill this need. For that, we invent a single-MRR-based optical XNOR gate (OXG). Moreover, we present a novel design of bitcount circuit which we refer to as Photo-Charge Accumulator (PCA). We employ multiple OXGs in a cascaded manner using dense wavelength division multiplexing (DWDM) and connect them to the PCA, to forge a novel Optical XNOR-Bitcount based Binary Neural Network Accelerator (OXBNN). Our evaluation for the inference of four modern BNNs indicates that OXBNN provides improvements of up to 62x and 7.6x in frames-per-second (FPS) and FPS/W (energy efficiency), respectively, on geometric mean over two PIC-based BNN accelerators from prior work. We developed a transaction-level, event-driven Python-based simulator for evaluation of accelerators (https://github.com/uky-UCAT/B_ONN_SIM).
Sairam Sri Vatsavai, Venkata Sai Praneeth Karempudi, Ishan Thakkar
2023-02-03T20:56:01Z
http://arxiv.org/abs/2302.06405v2
# An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks ###### Abstract Binary Neural Networks (BNNs) are increasingly preferred over full-precision Convolutional Neural Networks (CNNs) to reduce the memory and computational requirements of inference processing with minimal accuracy drop. BNNs convert CNN model parameters to 1-bit precision, allowing inference of BNNs to be processed with simple XNOR and bitcount operations. This makes BNNs amenable to hardware acceleration. Several photonic integrated circuits (PICs) based BNN accelerators have been proposed. Although these accelerators provide remarkably higher throughput and energy efficiency than their electronic counterparts, the utilized XNOR and bitcount circuits in these accelerators need to be further enhanced to improve their area, energy efficiency, and throughput. This paper aims to fulfill this need. For that, we invent a single-MRR-based optical XNOR gate (OXG). Moreover, we present a novel design of bitcount circuit which we refer to as Photo-Charge Accumulator (PCA). We employ multiple OXGs in a cascaded manner using dense wavelength division multiplexing (DWDM) and connect them to the PCA, to forge a novel Optical XNOR-Bitcount based Binary Neural Network Accelerator (OXBNN). Our evaluation for the inference of four modern BNNs indicates that OXBNN provides improvements of up to 62\(\times\) and 7.6\(\times\) in frames-per-second (FPS) and FPS/W (energy efficiency), respectively, on geometric mean over two PIC-based BNN accelerators from prior work. We developed a transaction-level, event-driven Python-based simulator for evaluation of accelerators ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM))1. Footnote 1: To Appear at IEEE ISQED 2023

## I Introduction

Convolutional Neural Networks (CNNs) have revolutionized the implementation of various artificial intelligence tasks, such as image recognition, language translation, and autonomous driving [1, 2], due to their high inference accuracy. However, the heavy computation and storage requirements of CNNs still limit their application in practice. Therefore, to improve the speed and efficiency of CNN inference, model compression techniques such as quantization are widely employed [3, 4, 5]. Quantization techniques create compact CNNs compared to their floating-point counterparts by representing the weights/inputs of CNNs with lower precision. The extreme end of quantization is binarization, i.e., a 1-bit quantization, which allows only two possible values for both inputs and weights, either -1(0) or +1. Binarization replaces the heavy floating-point vector-dot-product operations (which constitute convolution operations in CNNs) with simple bit-wise XNOR and bitcount operations [6]. Since bit-wise XNOR and bitcount are lightweight operations, binarized CNNs, referred to as binary neural networks (BNNs), provide efficient hardware implementations. Among the BNN hardware implementations from prior works, the silicon-photonic accelerators have shown great promise to provide unparalleled parallelism, ultra-low latency, and high energy efficiency [7, 8]. Prior work [7] utilizes microdisks to realize XNOR-Bitcount processing cores (XPCs) that process the input and weight vectors, whereas [8] uses Microring Resonators (MRRs) in its XPCs to perform XNOR-Bitcount operations. However, these prior works face two shortcomings.
First, they use at least two MRRs or microdisks to achieve 1-bit XNOR operation, which increases their area and energy consumption. Second, because of the limited scalability of their XNOR and bitcount circuits, they are forced to decompose the input and weight vectors into a large number of smaller slices before processing them. This generates a large number of partial sums (_psums_). Accumulating such a large number of _psums_ to obtain the final result, using a _psum_ reduction network, can incur a very high latency overhead. To address these shortcomings, this paper presents a novel Optical XNOR-Bitcount based Binary Neural Network Accelerator (OXBNN). OXBNN employs a novel design of optical XNOR gates (OXGs). Our OXG uses a single MRR to perform a 1-bit XNOR operation, thereby reducing the area and energy consumption compared to prior works. Moreover, OXBNN employs a novel bitcount circuit, referred to as Photo-Charge Accumulator (PCA), which inherently supports the accumulation of a very high number of _psums_, thereby eliminating the need of using external _psum_ reduction networks, to consequently reduce the overall latency and energy consumption of BNN processing. Our key contributions in this paper are summarized below.

* We present our invented, novel BNN accelerator called OXBNN, which employs an array of single-MRR-based optical XNOR gates (OXGs) and highly scalable bitcount circuits called Photo-Charge Accumulators (PCAs);
* We perform detailed modeling and characterization of our invented OXGs and PCAs using photonics foundry-validated, commercial-grade, photonic-electronic design automation tools (Section III);
* We perform a scalability analysis for our OXBNN and describe a pertinent mapping scheme (Section IV);
* We implement and evaluate OXBNN at the system-level with our in-house simulator ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM)), and compare its performance with two well-known photonic BNN accelerators from prior works, for the inferences of four state-of-the-art BNNs (Section V).

## II Preliminaries

### _Binary Neural Networks (BNNs)_

BNNs are specific types of CNNs that employ quantization techniques [9] to quantize the weights and inputs to 1-bit values, reducing the storage requirements and computational effort for improved energy efficiency of model inference. With binary quantization, the weights and inputs can only assume two possible values, either -1 or 1 [6, 10]. In general, the _sign_ function is the most widely used binary quantization function (Q):

\[Q(x)=\mathrm{sign}(x)=x\geq 0\;?+1:-1 \tag{1}\]

Like for CNNs [11], a convolution operation for BNNs is also typically decomposed into multiple vector-dot-product (VDP) operations. Each VDP operation of a BNN occurs between two vectors, the individual elements of which are first binarized using Eq. (1). Then, the VDP operation between a binarized weight vector \(W\) and a binarized input vector \(I\) can be realized in two steps, in this given order: (i) element-wise (i.e., bit-wise) XNOR of \(I\) and \(W\) that produces an XNOR vector; (ii) bitcount of the XNOR vector. This VDP operation is captured in Eq. 2.
\[z=W\odot I=\sum_{i=1}^{S}W_{i}\odot I_{i} \tag{2}\]

Here, \(W_{i}\) and \(I_{i}\), respectively, are the individual bit-elements at index \(i\) of the binarized vectors \(W\) and \(I\) of size \(S\) each; \(\odot\) denotes the VDP operation (XNOR operation) between binarized vectors \(I\) and \(W\) (bit-elements \(W_{i}\) and \(I_{i}\)); \(\sum\) represents the bitcount operation.

**Using {0,1} instead of {-1,1}:** If binary value set {-1,1} is used, obtaining the activation values for the next BNN layer after a convolution operation requires \(sign(z)\) for each bitcount result \(z\). On the other hand, if binary value set {0,1} is used, obtaining the activation values for the next BNN layer after a convolution operation requires \(compare(z,0.5\times z_{max})=z>0.5\times z_{max}\;?\,1:0\) for each bitcount result \(z\), where \(z_{max}\) is the size of the binarized vectors \(I\) and \(W\).

### _Processing of BNNs on Hardware_

Fig. 1(a) illustrates the convolution between a 3\(\times\)3 weight channel and a 5\(\times\)5 input channel. During the convolution, based on the stride parameter, the weight channel slides over the input channel and performs inner products with multiple input channel windows (e.g., four input channel windows are shown in Fig. 1(a) with red, blue, yellow, and green borders), generating one output value per input channel window. From Fig. 1(b), to perform one such inner product (i.e., corresponding to the input channel window highlighted in green in Fig. 1(a)), the input channel window and weight channel are flattened into input and weight vectors of size _S_=9 each. Then, a bitwise XNOR circuit, with a total of _N_=_S_=9 XNOR gates, is employed to generate an XNOR vector. A bitcount circuit then counts the bits in the XNOR vector to evaluate the corresponding inner product output. However, often the hardware size \(N\neq S\). For example, in Fig. 1(c), _S_=9 and _N_=5. In this case, both the input and weight vectors (_S_=9 each) are decomposed into two slices each: Slice 1 with _S_=5 and Slice 2 with _S_=4. These slices are then mapped onto two bitwise XNOR circuits with _N_=5 each, as shown in Fig. 1(c), to consequently produce two XNOR vector slices. Applying bitcount on these XNOR vector slices generates two partial sums (_psums_), i.e., \(psum^{1}\) and \(psum^{2}\). \(psum^{1}\) and \(psum^{2}\) are then sent to a _psum_ reduction network to generate the corresponding inner product output. The addition of the _psums_ by the _psum_ reduction network incurs additional latency and energy overheads while processing BNNs.

Fig. 1: (a) Illustration of a convolution between a weight and input channel in a Binary Neural Network. Bit-wise XNOR and bitcount operations between a flattened weight vector and input vector, (b) when _S_=_N_=9, and (c) when _N_=5, _S_=9; each input and weight vector of _S_=9 is split into two slices (Slice 1 with _S_=5 and Slice 2 with _S_=4). Binary value set {-1,1} is used in this example.

### _Related Work on Optical BNN Accelerators_

To accelerate CNN inferences with low latency and low energy consumption, prior works proposed various accelerators based on photonic integrated circuits (PICs) (e.g., [12, 13, 14]). These accelerators can be classified as incoherent (e.g., [11, 12, 13]) or coherent (e.g., [15, 16]). Because of the inherent advantages of incoherent accelerators [13, 17], the BNN-specific incoherent accelerators [8] and [7] were reported. _These optical BNN accelerators from prior works employ binary value set {0,1}_.
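To make these preliminaries concrete, the following is a minimal NumPy sketch (ours, not the paper's simulator) of the XNOR-bitcount VDP with the {0,1} value set; the _S_=9, _N_=5 slicing mirrors the Fig. 1(c) example, and the random vectors are purely illustrative.

```python
import numpy as np

def binarize(x):
    # Eq. (1) with the {0,1} value set: x >= 0 maps to 1, else 0.
    return (x >= 0).astype(np.uint8)

def xnor_bitcount(w_bits, i_bits):
    # Step (i): bit-wise XNOR; step (ii): bitcount of the XNOR vector.
    return int(np.sum(~(w_bits ^ i_bits) & 1))

def vdp_psums(w_bits, i_bits, n_hw):
    # When the hardware size N < S, the vectors are decomposed into
    # ceil(S/N) slices, and each slice yields one partial sum (psum).
    S = len(w_bits)
    return [xnor_bitcount(w_bits[k:k + n_hw], i_bits[k:k + n_hw])
            for k in range(0, S, n_hw)]

rng = np.random.default_rng(0)
W = binarize(rng.standard_normal(9))       # flattened 3x3 weight channel
I = binarize(rng.standard_normal(9))       # flattened 3x3 input window
psums = vdp_psums(W, I, n_hw=5)            # N=5, S=9 -> two psums
z = sum(psums)                             # psum reduction
activation = 1 if z > 0.5 * len(W) else 0  # compare(z, 0.5 * z_max)
```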
[8] proposes broadcast-and-weight styled [18] XNOR-Bitcount circuits, which use heterogeneous MRRs to mitigate fabrication process variations. In contrast, the microdisk-based accelerator [7] proposes an all-optical XNOR-Bitcount circuit that uses optical XNOR gates, optical analog-to-digital converters (ADCs), and PCM-based racetrack memory to enable processing at a very high datarate. However, both [8] and [7] require at least two MRRs or microdisks to perform a 1-bit XNOR operation (in [7], one additional MRR/microdisk is required to modulate the optically applied input operand). Therefore, their XNOR circuits occupy high area and consume high energy. In addition, the bitcount circuits of these prior works can evaluate only one _psum_ at a time by counting the bits of one XNOR vector slice at a time. Therefore, these circuits have to store the individual _psums_ temporarily in memory. Once sufficient _psums_ are collected, they can be sent to a _psum_ reduction network to produce the final result. Thus, the bitcount circuits from prior works incur high memory footprint for storing _psums_, and high latency and energy for processing _psums_. Our OXBNN accelerator addresses these shortcomings of prior works.

## III Our Proposed OXBNN Architecture

### _Overview_

The main processing unit of our OXBNN architecture is an XNOR-Bitcount Processing Core (XPC), which is illustrated in Fig. 2. Our XPC has an array of total \(N\) single-wavelength laser diodes (LDs), with each LD sourcing optical power of \(P_{\lambda i}^{in}\) at a distinct wavelength \(\lambda_{i}\). The total power from all \(N\) LDs (at wavelengths \(\lambda_{1}\) to \(\lambda_{N}\)) multiplexes into a single photonic waveguide through wavelength division multiplexing (WDM). The optical power containing all these \(N\) wavelengths is split into \(M\) input waveguides, each of which connects to an XNOR-Bitcount Processing Element (XPE) (Fig. 2). An XPC contains a total of \(M\) XPEs.

### _XNOR-Bitcount Processing Element (XPE)_

From Fig. 2, an XPE in our OXBNN architecture contains two parts: _(i)_ an array of a total of \(N\) Optical XNOR Gates (OXGs) that generates an XNOR vector (or an XNOR vector slice) containing \(N\) optical bits, and _(ii)_ our invented Photo-Charge Accumulator (PCA) that performs bitcount on the generated XNOR vector (or XNOR vector slice). The value \(N\) here, which is equal to the number of wavelengths and number of OXGs per XPE, is referred to as the size of the XPE.

#### III-B1 Array of Optical XNOR Gates (OXGs)

In an XPE, an array of a total of \(N\) OXGs couples to an input waveguide as shown in Fig. 2. Each OXG operates upon a unique wavelength \(\lambda_{i}\) traversing the input waveguide. Each OXG in the array electrically receives two binary operands (i.e., input bit \(i_{1}^{N}\) and weight bit \(w_{1}^{N}\)) from its corresponding drivers (not shown in the figure). The array of OXGs performs a bit-wise logical XNOR between an _N_-bit input vector slice \(I_{1}\) = \(\{i_{1}^{1},i_{1}^{2},..,i_{1}^{N}\}\) and an _N_-bit weight vector slice \(W_{1}\) = \(\{w_{1}^{1},w_{1}^{2},..,w_{1}^{N}\}\) to produce a resultant _N_-bit XNOR vector slice. Each OXG in the array produces one bit of the resultant XNOR vector slice, and it imprints this bit on its corresponding \(\lambda_{i}\) (by modulating the optical transmission at \(\lambda_{i}\)) to be consequently guided to the bitcount circuit (i.e., the PCA) via the output waveguide.

Fig. 2: Schematic of an XNOR-Bitcount Processing Core (XPC) of our OXBNN accelerator. Our OXBNN employs binary value set {0,1}.
As a result, the PCA receives the \(N\) individual optical bits of the _N_-bit XNOR vector slice concurrently on \(N\) distinct wavelengths. The PCA performs bitcount on these optical bits, as explained later. This entire processing step, from the bit-parallel application of the binary input and weight vector slices at the electrical input terminals of the array of \(N\) OXGs to the generation of the bitcount result by the PCA, takes very low latency because of the light-speed operation of the XPE. We refer to this processing step mapped on an XPE as a _PASS_ and the corresponding latency as \(\tau\). Thus, our XPE can produce one bitcount result for one XNOR vector slice in every single PASS with \(\tau\) latency. Since \(\tau\) can be very low (as low as 20 ps), our XPE can achieve very high processing throughput by completing one PASS every \(\tau\) period. For that, multiple input and weight vector slices \(\{I_{1},I_{2},..,I_{\alpha}\}\) and \(\{W_{1},W_{2},..,W_{\alpha}\}\) can be applied to the array of OXGs of an XPE in a serial manner at the predefined data rate (DR) of \(\frac{1}{\tau}\). The design and operation of an OXG and PCA are explained next.

**Design of an Optical XNOR Gate (OXG):** The design of our invented Optical XNOR Gate (OXG) is illustrated in Fig. 3(a). It is an add-drop microring resonator (MRR), which has two operand terminals (realized as embedded PN-junctions) that can take two operand bits \(i\) and \(w\) as inputs for a predefined time-width (usually a little less than the \(\tau\) period). Fig. 3(b) shows the passbands of the MRR for different operand inputs and temperature conditions. The MRR's temperature can be increased using the integrated microheater (Fig. 3(a)), to consequently tune its operand-independent resonance from its fabrication-defined initial position \(\eta\) to its programmed position \(\kappa\) (blue passband; Fig. 3(b)), relative to the input optical wavelength position \(\lambda_{in}\). For each bit combination at the operand terminals ((_i_,_w_) = (0,1), (1,0), or (1,1)), the MRR's resonance passband electro-refractively moves to an operand-driven position (red and magenta passbands in Fig. 3(b)). Based on the MRR resonance passband's programmed position \(\kappa\) relative to \(\lambda_{in}\), the through-port transmission (T(\(\lambda_{in}\))) of the MRR provides bit-wise logical XNOR operation between the input bits \(i\) and \(w\).

To validate the operation of our OXG, we performed the transient analysis, as shown in Fig. 3(c). For that, we modelled and simulated our OXG using the foundry-validated tools from Ansys/Lumerical's DEVICE, CHARGE, and INTERCONNECT suites [19]. Fig. 3(c) shows two input bit-streams \(I\) = \(\{i_{1}^{1},i_{2}^{1},..,i_{8}^{1}\}\) and \(W\) = \(\{w_{1}^{1},w_{2}^{1},..,w_{8}^{1}\}\) applied to the two PN junctions of our OXG at a DR = 10 GS/s. By looking at the output optical trace T(\(\lambda_{in}\)) in Fig. 3(c), we can say T(\(\lambda_{in}\)) = \(\{i_{1}^{1}\odot w_{1}^{1},..,i_{8}^{1}\odot w_{8}^{1}\}\), which validates the functionality of our OXG as a logical XNOR gate. From our validation, our OXG has a full passband width at half maximum (FWHM) of 0.35 nm and it can operate at DR of up to 50 GS/s.
Our XNOR gate consumes energy of 0.032nJ with an area footprint of 0.011mm\({}^{2}\).

Fig. 3: (a) Schematic of our Optical XNOR Gate (OXG). (b) Spectral operation of OXG. (c) Transient analysis of OXG.

#### III-B2 Photo-Charge Accumulator (PCA)

From Section III-A, the XNOR vector bits generated by an array of OXGs are guided to a PCA circuit, where a bitcount is performed on the XNOR vector bits to generate an output result. Our PCA circuit employs a photodetector and two time integrating receiver (TIR) circuits [20] (one of the TIR1 and TIR2 circuits remains redundant, enabled by the demux and mux; Fig. 4). The photodetector generates a current pulse for each optical logic '1' incident upon it. The amplitude of a current pulse generated for an optical logic '0' remains under the noise limit; therefore, a logic '0' remains statistically undetected. The current pulse generated by an optical logic '1' accumulates a certain statistically significant amount of charge on the capacitor of the active TIR circuit (e.g., the circuit with C1 capacitor); as a result, the TIR circuit outputs a detectable analog voltage level [20]. Hence, when more optical '1's are incident upon the photodetector, the total accumulated charge on the active capacitor (e.g., C1), and thus, the accrued output analog voltage level, grows proportionally to the total number of optical '1's that are incident [20]. This is because a current source (a sequence of current pulses) can charge a capacitor linearly following this equation: \(\delta V=\frac{i\delta t}{C}\), where \(i\) is an incident current pulse, \(\delta t\) is the time-width of the current pulse, \(C\) is the capacitance, and \(\delta V\) is the accrued voltage. The final analog voltage accrued at the TIR output, thus, represents the bitcount result (accumulation result) of the incident optical '1's. However, the number of '1's that can be accumulated in such a manner might be limited, as the output of the TIR circuit (Fig. 4) might saturate. Once the output of a TIR circuit saturates, the ongoing accumulation phase ends and the bitcount result (i.e., the final TIR output voltage) is passed through a comparator to generate the activation value for the next BNN layer (as explained in Section II-A). After one accumulation phase, a discharge of the active capacitor (e.g., C1) is needed to prepare the circuit for the next accumulation phase. While capacitor C1 is discharging, the redundant TIR2 circuit with capacitor C2 mitigates the discharge latency by allowing a continuation of a concurrent bitcount.
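This accumulate-until-saturation behavior can be sketched in a few lines of Python; the 10 pF capacitance, TIR gain of 50, and 5 V dynamic range match the values used in our PCA analysis below, while the per-'1' photocurrent amplitude is an assumed placeholder.

```python
# Behavioral sketch of the PCA bitcount (not the circuit-level model).
C_F     = 10e-12    # integration capacitance C1 (10 pF)
GAIN    = 50        # TIR gain
V_RANGE = 5.0       # usable TIR output swing (0 V to 5 V)
DT      = 20e-12    # current-pulse width at DR = 50 GS/s
I_ONE   = 5.9e-6    # assumed photocurrent per optical '1' (A)

dv_per_one = GAIN * I_ONE * DT / C_F   # output volts accrued per '1'

def accumulate(xnor_bits, v=0.0):
    """Accrue the TIR output voltage bit by bit (dV = i*dt/C)."""
    for bit in xnor_bits:
        if bit:                        # a '0' stays below the noise limit
            v += dv_per_one
        if v >= V_RANGE:               # TIR saturated: phase must end
            raise RuntimeError("accumulation phase saturated")
    return v                           # held across slices of one vector

gamma = int(V_RANGE / dv_per_one)      # accumulation capacity (max '1's)
alpha = gamma // 19                    # slices per XPE when N = 19
```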
## IV Scalability Analysis and Mapping

### _Scalability of XNOR-Bitcount Processing Cores (XPCs)_

To determine the achievable size \(N\) for our XPC, we adopt scalability analysis equations (Eq. 3, Eq. 4, and Eq. 5) from [21] and [17]. Table I reports the definitions of the parameters and their values used in these equations. We considered Free Spectral Range (FSR=50nm) [21], FWHM=0.35nm (refer Section III-B), and inter-wavelength gap of 0.7nm. For these spectral conditions, we observed minimal crosstalk power penalty for the OXGs operating at DR=50GS/s (\(<\)1 dB penalty [22, 23, 24], which is accounted for as part of parameter \(IL_{penalty}\) in the equations (Table I)). Since the XPC of our OXBNN accelerator processes binarized vectors, it requires the bit precision of _B_=1-bit in the equations. We consider _M_=_N_ and first solve Eq. 3 and Eq. 4 for a set of DRs={3, 5, 10, 20, 30, 40, 50} GS/s, to find a corresponding set of \(P_{PD-opt}\). Then, we solve Eq. 5 for \(N\) with the obtained set of \(P_{PD-opt}\) values across the set of _DR_s. Table II reports the achievable \(N\) for our XPC across various _DR_s. As evident, the supported \(N\) value decreases from _N_=66 at 3 GS/s to _N_=19 at 50 GS/s. This achievable \(N\) value defines the feasible number of OXGs per XPE; thus, this \(N\) also defines the maximum size of the XNOR vector slice that can be generated in our XPC. Because we consider FSR of 50nm and inter-wavelength gap of 0.7nm, we verify that the maximum _N_=66 can be supported within the FSR (i.e., _N_=66\(<\)(FSR/0.7nm)).

\[B=\frac{1}{6.02}\Bigg[20\log_{10}\Bigg(\frac{R_{s}\times P_{PD-opt}}{\beta\sqrt{\frac{DR}{\sqrt{2}}}}\Bigg)-1.76\Bigg] \tag{3}\]

\[\beta=\sqrt{2q(R_{s}P_{PD-opt}+I_{d})+\frac{4kT}{R_{L}}+R_{s}^{2}P_{PD-opt}^{2}RIN} \tag{4}\]

\[P_{Laser}=\frac{10^{\frac{\eta_{WG}(dB)[N(d_{OX})+d_{elemental}]}{10}}M}{\eta_{SMF}\eta_{EC}\eta_{WPE}IL_{i/p-OX}}\times\frac{P_{PD-opt}}{IL_{penalty}}\times\frac{1}{(OBL_{OXG})^{N-1}(EL_{splitter})^{\log_{2}M}} \tag{5}\]

**Analysis of PCA's Accumulation Capacity:** We modeled the photodetector (PD) of our PCA circuit using the INTERCONNECT tool from Ansys/Lumerical [19] for PD responsivity = 1.2 A/W across different \(P_{PD-opt}\) values corresponding to the \(N\) values in Table II. We extracted the current pulse values generated by the photodetector for the incident optical '1's and '0's corresponding to each \(P_{PD-opt}\). We then imported these values in our MultiSim [25] based model of the PCA with C1=C2=10pF [20], and the TIR gain=50. For these parameters, we simulated the analog output voltage at the PCA's TIR for different bitcount results (i.e., different values of the total number of accumulated '1's). From this analysis, we observed that the maximum number of '1's that can be accumulated by our PCA is limited by the available operating dynamic range of the TIR of our PCA. We considered the TIR's operating dynamic range to be 5V (0V to 5V) and evaluated our PCA's accumulation capacity \(\gamma\), which we define as the maximum number of '1's that can be accumulated by the PCA within the TIR's operating dynamic range. Our evaluated \(\gamma\) values, for each pair of \(N\) and corresponding \(P_{PD-opt}\), are reported in Table II. Since our PCA can accumulate a total of \(\gamma\) '1's and since each XNOR vector slice in our XPC has a total of \(N\) bits, our PCA can accumulate a total of \(\alpha\) XNOR vector slices, where \(\alpha\) = \(\frac{\gamma}{N}\). Table II also reports the values of \(\alpha\). As evident, the \(\gamma\) and \(\alpha\) values for our PCA can be very large, which provides several substantial benefits as discussed in Section IV-C.

Fig. 4: Photo-Charge Accumulator (PCA) Circuit. \(V_{REF}\) is the threshold required in the \(compare()\) function discussed in Section II-A. Typically, \(V_{REF}\) = 2.5V because we consider the dynamic range of TIR to be 5V.
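As a worked illustration of Eqs. (3) and (4), the sketch below solves for \(P_{PD-opt}\) by bisection at _B_=1; the responsivity of 1.2 A/W is quoted above, but the dark current, load resistance, temperature, and RIN are assumed placeholder values, not the Table I entries.

```python
import math

q, k = 1.602e-19, 1.381e-23
R_s  = 1.2      # PD responsivity (A/W), quoted above
I_d  = 35e-9    # dark current (A) -- assumed
R_L  = 10e3     # load resistance (ohm) -- assumed
T    = 300      # temperature (K) -- assumed
RIN  = 1e-14    # relative intensity noise (1/Hz) -- assumed
B    = 1        # bit precision required by the XPC

def beta(p):    # Eq. (4): shot, thermal, and RIN noise terms
    return math.sqrt(2*q*(R_s*p + I_d) + 4*k*T/R_L + (R_s*p)**2 * RIN)

def bits(p, dr):  # Eq. (3): resolvable bit precision at power p
    snr = R_s*p / (beta(p) * math.sqrt(dr / math.sqrt(2)))
    return (20*math.log10(snr) - 1.76) / 6.02

def p_pd_opt(dr, lo=1e-9, hi=1e-2):
    # Bisection for the minimum detector power that still yields B bits;
    # bits(p, dr) grows monotonically with p over this bracket.
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if bits(mid, dr) < B else (lo, mid)
    return hi

for dr in (3e9, 50e9):
    print(f"DR={dr/1e9:.0f} GS/s -> P_PD-opt ~ {p_pd_opt(dr)*1e6:.2f} uW")
```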
### _Mapping Convolutions on an XPC_

As described in Section II-A, for processing a BNN convolution on hardware, both the weight and input channels are flattened into binarized vectors. For mapping of a binary convolution on an XPC (or XPE), these binarized input and weight vectors are represented as matrices. For instance, the input matrix \(\mathbb{I}(H,S)\) has \(H\) rows corresponding to \(H\) binarized input vectors of size \(S\) each. Similarly, the weight matrix \(\mathbb{W}(H,S)\) can also be defined. These matrices \(\mathbb{W}(H,S)\) and \(\mathbb{I}(H,S)\) are mapped onto an XPC containing a total of \(M\) XPEs of size \(N\) each. Depending on the relation between \(S\) and \(N\), two cases drive the selection of the appropriate mapping. These cases and their corresponding mappings are illustrated in Fig. 5, for _M_=2, _H_=2, _N_=9, and two distinct values of \(S\). These cases are explained below:

**Case 1, S=15, S\(>\)N, Fig. 5(a) and 5(b)**: Matrices \(\mathbb{I}\) and \(\mathbb{W}\) consist of two vectors each, \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\), respectively. To make the size _S_=15 of these vectors \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\) amenable to the XPE size _N_=9, each of these vectors is split into two slices to yield a set of input vector slices \(\{I_{1}^{1}\), \(I_{1}^{2}\), \(I_{2}^{1}\), \(I_{2}^{2}\}\) and a set of weight vector slices \(\{W_{1}^{1}\), \(W_{1}^{2}\), \(W_{2}^{1}\), \(W_{2}^{2}\}\). Since _M_=2 is less than the total number of vector slices (i.e., \(H\times ceil(S/N)\) = 4), multiple passes are required to complete the processing of these vector slices. Mappings of these vector slices differ between our PCA and the bitcount circuit from prior works [8] and [7], as discussed next.

**Mapping for the bitcount circuit from [8] and [7] (Fig. 5(a))**: Since M=2, there are two XPEs, namely XPE 1 and XPE 2. During PASS 1 of these XPEs (the definition of a PASS is given in Section III-B), we map \(\{I_{1}^{1}\), \(W_{1}^{1}\}\) onto XPE 1, and \(\{I_{1}^{2}\), \(W_{1}^{2}\}\) onto XPE 2. XPE 1 generates the corresponding XNOR vector, which is accumulated using the bitcount circuit to produce _psum_ \(I_{1}^{1}\odot W_{1}^{1}\). Similarly, XPE 2 generates _psum_ \(I_{1}^{2}\odot W_{1}^{2}\). The generated _psums_ are reduced (further accumulated) at the _psum_ reduction network, to produce Final Result 1. Similarly, during PASS 2, vector slices \(\{I_{2}^{1}\), \(I_{2}^{2}\), \(W_{2}^{1}\), \(W_{2}^{2}\}\) are mapped to generate corresponding _psums_, which are then sent to the _psum_ reduction network to produce Final Result 2. Thus, for the bitcount circuits from prior works, there is a need for employing a _psum_ reduction network, which leads to a high latency overhead.

**Mapping for our OXBNN with PCAs, Fig. 5(b)**: Our OXBNN maps all the slices of a particular vector to the same XPE. During PASS 1, OXBNN maps \(\{I_{1}^{1}\), \(W_{1}^{1}\}\) to XPE 1, and \(\{I_{2}^{1}\), \(W_{2}^{1}\}\) to XPE 2. XPE 1 charges its PCA's capacitor to generate an analog voltage level that represents _psum_ \(I_{1}^{1}\odot W_{1}^{1}\), whereas XPE 2 charges its PCA's capacitor to generate an analog voltage level that represents _psum_ \(I_{2}^{1}\odot W_{2}^{1}\). Because a PCA can accumulate a total of \(\alpha\) vector slices (Section III-B2), the PCAs of XPE 1 and XPE 2 can be made to hold the charge and analog voltage accrued during PASS 1. Then, during PASS 2, XPE 1 and XPE 2 can further grow these held analog voltage levels by the amounts proportional to \(I_{1}^{2}\odot W_{1}^{2}\) and \(I_{2}^{2}\odot W_{2}^{2}\), respectively. Thus, at the end of PASS 2, the total accrued analog voltage on the PCA of XPE 1 (XPE 2) would be proportional to \(I_{1}^{1}\odot W_{1}^{1}\) + \(I_{1}^{2}\odot W_{1}^{2}\) (\(I_{2}^{1}\odot W_{2}^{1}\) + \(I_{2}^{2}\odot W_{2}^{2}\)). Thus, the PCAs of our OXBNN can accumulate multiple _psums_ (a total of \(\alpha\) _psums_) inherently.
This eliminates the need to employ _psum_ reduction networks, to consequently yield substantial benefits, as further explained in Section IV-C.

**Case 2, S=9, S\(\leq\)N, Fig. 5(c)**: The size _S_=9 of the vectors \(\{I_{1}\), \(I_{2}\}\) and \(\{W_{1}\), \(W_{2}\}\) matches with the XPE size _N_=9. Thus, in a single pass (PASS 1), our OXBNN maps \(\{I_{1}\), \(W_{1}\}\) to XPE 1, and \(\{I_{2}\), \(W_{2}\}\) to XPE 2. XPE 1 and XPE 2 produce Final Result 1 and Final Result 2 corresponding to \(I_{1}\odot W_{1}\) and \(I_{2}\odot W_{2}\), respectively. In this case, the mapping is identical for our PCA and the bitcount circuits from prior work.

Fig. 5: Example mappings and related operation of our XPC for various cases of the \(S\) and \(N\) values. A comparison of our PCA with the bitcount circuit from prior works is also illustrated.

### _Latency and Energy Benefits of PCA_

Our PCA provides manifold benefits in terms of both latency and energy consumption. The latency benefits accrue because our PCA eliminates the need of employing _psum_ reduction networks to temporarily store and accumulate _psums_. From Section IV and Table II, our PCA can achieve \(\gamma\)=8503 and \(\alpha\)=447 at \(DR\)=50 GS/s, which means that our PCA, before it saturates, can accumulate a total of \(\gamma\)=8503 '1's across a total of \(\alpha\)=447 XNOR vector slices. As a result, if we operate the OXGs of our OXBNN at \(DR\)=50 GS/s, our PCA can inherently accumulate (perform bitcount on) any XNOR vector whose size \(S\) is less than \(\gamma\)=8503. Since the maximum XNOR vector size is observed to be \(S\)=4608 across all major modern CNNs (e.g., ResNet18, ResNet50, DenseNet121, VGG16, VGG19, GoogleNet, Inception_V3, EfficientNet_B7, NASNetMobile, MobileNet_V2, and ShuffleNet) [26], our PCA eliminates the need to employ dedicated _psum_ reduction networks in our OXBNN accelerator.
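The difference between the two mappings reduces to where the \(\alpha\) _psums_ are added up; the toy sketch below (with an illustrative _N_=9, _S_=15) checks that holding the PCA's accrued value across PASSes yields the same final result without any external reduction step.

```python
import numpy as np

def xnor_pop(w, i):
    return int(np.sum(~(w ^ i) & 1))    # bitcount of the XNOR vector

N, S = 9, 15                            # XPE size and vector size
rng = np.random.default_rng(1)
W = rng.integers(0, 2, S, dtype=np.uint8)
I = rng.integers(0, 2, S, dtype=np.uint8)
slices = [(W[k:k+N], I[k:k+N]) for k in range(0, S, N)]

# Prior-work bitcount: one psum per slice, reduced externally.
psums = [xnor_pop(w, i) for w, i in slices]
final_prior = sum(psums)                # psum reduction network

# OXBNN: all slices of the same vector go to the same XPE; the PCA
# holds its accrued charge between PASSes, so the psums accumulate
# on the capacitor and no reduction network is needed.
pca_level = 0                           # proportional to accrued voltage
for w, i in slices:                     # one PASS per slice
    pca_level += xnor_pop(w, i)
assert pca_level == final_prior
```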
## V Evaluation

### _System-Level Implementation of OXBNN_

Fig. 6 illustrates the system-level implementation of our OXBNN accelerator. It consists of global memory that stores BNN parameters and a pre-processing and mapping unit. It has a mesh network of tiles. Each tile contains 4 XPCs interconnected (via H-tree) with an output buffer as well as pooling units.

Fig. 6: System-level overview of our OXBNN accelerator.

### _Simulation Setup_

For evaluation, we model our OXBNN accelerator from Fig. 6 using our custom-developed, transaction-level, event-driven Python-based simulator ([https://github.com/uky-UCAT/B_ONN_SIM](https://github.com/uky-UCAT/B_ONN_SIM)). We simulated the inference of four BNNs (batch size=1): VGG-small [9], ResNet18 [27], MobileNet_V2 [28], and ShuffleNet_V2 [29]. We binarized all the weights and inputs using the LQ-Nets technique [9]. We evaluate frames-per-second (FPS) and FPS/W (energy efficiency). We compared our OXBNN with ROBIN [8] and LIGHTBULB [7]. ROBIN and LIGHTBULB operate at different DRs; therefore, we consider two variants of our OXBNN: (1) OXBNN_5 with DR=5GS/s (matching with ROBIN) and _N_=53 (Table II), (2) OXBNN_50 with DR=50GS/s (matching with LIGHTBULB) and _N_=19 (Table II). We consider two variants of ROBIN: ROBIN Energy-Optimized (ROBIN_EO) and ROBIN Performance-Optimized (ROBIN_PO) [8]. For fair comparison, we perform area proportionate analysis, wherein we altered the XPE count for each photonic BNN accelerator across all of the accelerator's XPCs to match with the area of OXBNN_5 having 100 XPEs. Accordingly, the scaled XPE counts of OXBNN_50 (_N_=19), ROBIN_PO (_N_=50), ROBIN_EO (_N_=10), and LIGHTBULB (_N_=16) are 1123, 183, 916, and 1139, respectively. Table III gives the parameters used for our evaluation.

### _Evaluation Results_

Fig. 7(a) compares FPS values (log scale). OXBNN_50 achieves 62\(\times\), 8\(\times\), and 7\(\times\) better FPS than ROBIN_EO, ROBIN_PO, and LIGHTBULB, respectively, on gmean across the BNNs. Similarly, OXBNN_5 also outperforms ROBIN_EO, ROBIN_PO, and LIGHTBULB by 54\(\times\), 7\(\times\), and 16\(\times\), respectively, on gmean across the BNNs. In terms of FPS/W (Fig. 7(b)), our accelerator OXBNN_50 also outperforms ROBIN_EO, ROBIN_PO, and LIGHTBULB by 4.9\(\times\), 5.5\(\times\), and 1.5\(\times\), respectively, on gmean across the BNNs. The energy benefits of OXBNN_5 and OXBNN_50 are due to the novel OXGs. Due to their single-MRR design, these OXGs consume less energy and static power, compared to the OXGs (containing at least two MRRs or microdisks per OXG) from ROBIN and LIGHTBULB. Moreover, the elimination of the dedicated _psum_ reduction network (Section IV-C) also eliminates related high energy consumption. Thus, these benefits collectively render better FPS/W for OXBNN_5 and OXBNN_50.

Fig. 7: (a) FPS (log scale) (b) FPS/W for OXBNN versus ROBIN and LIGHTBULB accelerators.

## VI Conclusions

In this paper, we present a single-MRR-based optical XNOR gate (OXG) and a novel bitcount circuit Photo-Charge Accumulator (PCA). We employ OXGs and PCAs to forge a novel accelerator, called OXBNN, to process the inferences of BNNs. We performed a comprehensive analysis to show the throughput and energy efficiency advantages of OXBNN. Our evaluation results show that OXBNN provides improvements of up to 62\(\times\) and 7.6\(\times\) in throughput (FPS) and energy efficiency (FPS/W), respectively, on geometric mean over two state-of-the-art photonic BNN accelerators from prior works.

## Acknowledgments

We thank the anonymous reviewers whose valuable feedback helped us improve this paper. We would also like to acknowledge the National Science Foundation (NSF) as this research was supported by NSF under grant CNS-2139167.
2308.13735
MST-compression: Compressing and Accelerating Binary Neural Networks with Minimum Spanning Tree
Binary neural networks (BNNs) have been widely adopted to reduce the computational cost and memory storage on edge-computing devices by using one-bit representation for activations and weights. However, as neural networks become wider/deeper to improve accuracy and meet practical requirements, the computational burden remains a significant challenge even on the binary version. To address these issues, this paper proposes a novel method called Minimum Spanning Tree (MST) compression that learns to compress and accelerate BNNs. The proposed architecture leverages an observation from previous works that an output channel in a binary convolution can be computed using another output channel and XNOR operations with weights that differ from the weights of the reused channel. We first construct a fully connected graph with vertices corresponding to output channels, where the distance between two vertices is the number of different values between the weight sets used for these outputs. Then, the MST of the graph with the minimum depth is proposed to reorder output calculations, aiming to reduce computational cost and latency. Moreover, we propose a new learning algorithm to reduce the total MST distance during training. Experimental results on benchmark models demonstrate that our method achieves significant compression ratios with negligible accuracy drops, making it a promising approach for resource-constrained edge-computing devices.
Quang Hieu Vo, Linh-Tam Tran, Sung-Ho Bae, Lok-Won Kim, Choong Seon Hong
2023-08-26T02:42:12Z
http://arxiv.org/abs/2308.13735v1
# MST-compression: Compressing and Accelerating Binary Neural Networks with Minimum Spanning Tree

###### Abstract

Binary neural networks (BNNs) have been widely adopted to reduce the computational cost and memory storage on edge-computing devices by using one-bit representation for activations and weights. However, as neural networks become wider/deeper to improve accuracy and meet practical requirements, the computational burden remains a significant challenge even on the binary version. To address these issues, this paper proposes a novel method called Minimum Spanning Tree (MST) compression that learns to compress and accelerate BNNs. The proposed architecture leverages an observation from previous works that an output channel in a binary convolution can be computed using another output channel and \(\mathrm{XNOR}\) operations with weights that differ from the weights of the reused channel. We first construct a fully connected graph with vertices corresponding to output channels, where the distance between two vertices is the number of different values between the weight sets used for these outputs. Then, the MST of the graph with the minimum depth is proposed to reorder output calculations, aiming to reduce computational cost and latency. Moreover, we propose a new learning algorithm to reduce the total MST distance during training. Experimental results on benchmark models demonstrate that our method achieves significant compression ratios with negligible accuracy drops, making it a promising approach for resource-constrained edge-computing devices.

## 1 Introduction

Deep Neural Networks (DNNs) have been widely applied in many artificial intelligence applications, especially vision tasks with high accuracy [27, 14]. However, the computational cost and massive storage burden make it significantly challenging to deploy DNNs on embedded systems such as mobile devices and other resource-constrained platforms. Many approaches have been proposed and demonstrated their effectiveness in reducing energy and resources while maintaining the deep models' accuracy, including pruning [12], quantization [7], distillation [15], and efficient hardware implementation [6]. Among these methods, quantization with less bit-width for parameter and activation representation is widely used due to its great benefits and applicability in most practical applications. A binarized neural network is a particular form of the quantization method in which weight values and activations are converted to 1-bit values. Accordingly, multiplications and accumulation operations can be replaced by \(\mathrm{XNOR}\) and \(\mathrm{Popcount}\) operations [7], respectively. In addition, batch normalization is simplified to a threshold comparison [31], while the pooling can be performed with OR operations [30]. Consequently, the compression ratio on memory storage and computational cost is significantly improved, leading to remarkable performance acceleration. Nevertheless, accuracy degradation is the trade-off of minimizing the bit-width as in BNNs. Thus, most previous works focus on reducing this accuracy gap [25, 4, 33, 22, 20, 23]. Meanwhile, compressing BNNs has not received much attention, with only a few prominent methods [32, 9, 31, 17], where the kernel compression [9, 31] gives impressive results with roughly \(50\)% resource reduction.

Figure 1: Illustration of kernel compression for BNNs with the K-mean method (a) and the shortest Hamiltonian path (b), in which adding red connections can further reduce the computational cost for both of them.
In particular, given a binary convolution, inspired by an observation that an output channel can be calculated using another output channel and the outputs of \(\mathrm{XNOR}\) operations using weights that respectively differ from the reused channel's weights [9], authors in [31, 9] proposed to construct a fully connected graph, in which each vertex corresponds to an output channel, and the distance between two vertices is the number of different values between the two weight sets used for the two output channels. Then, based on the graph, they use the K-mean cluster and the shortest Hamiltonian path to reorder the convolution output calculation, aiming to reduce the number of \(\mathrm{XNOR}\) operations. However, these approaches have yet to fully minimize the computational cost, as some output channels could be computed with even less computation. Indeed, Figure 1(a) illustrates an example of the K-mean method, where two vertices are considered centers of two groups. The two output channels corresponding to the centers are fully calculated1, while the other vertices reuse their centers for calculation. However, adding a connection between the two centers would enable one of them to be computed from the other at a lower cost. Additionally, in the shortest Hamiltonian path shown in Figure 1(b), there may exist a connection to a specific vertex that is shorter than the connection from the preceding vertex in the path, leading to a lower required number of operations for this vertex. Furthermore, time complexity poses a challenge in these methods [1, 13], resulting in longer exploration times.

Footnote 1: calculated with \(C_{in}\times M\times M\) \(\mathrm{XNORs}\), denoted in Figure 2

To more effectively minimize the number of \(\mathrm{XNOR}\) operations with the mentioned observation, among all connections to a specific vertex, the minimum connection must be selected to minimize the computation cost for this vertex. In addition, all of these selected connections must be included in a subgraph that visits all vertices exactly once to ensure that only one output channel is fully calculated. As a result, an MST is the only structure that can fulfill these requirements. Therefore, this paper leverages the MST to reorder binary convolution output calculations. Figure 2 shows a simple example of reordering calculation on a convolution. In this example, only output channel 4 is fully calculated with \(C_{in}\times M\times M\) bit-weight values. In contrast, the other channels are calculated using output channel 4, following the MST direction.

Figure 2: The process of arranging the order of computation on a binary convolution layer, in which output channel 4 is fully calculated first. Then, the output channel 4 is used to calculate channel 1, channel 2 and channel 3.

On the other hand, to maximize the advantages of the MST, we further minimize the MST distance of all convolution layers with a new learning algorithm directly during the training stage. Besides, we propose a hardware accelerator for BNNs applying the MST compression to demonstrate the feasibility and effectiveness of the method related to hardware resources. To the best of our knowledge, this method gives the highest compression ratio, with a complete implementation from compression-aware learning to hardware acceleration. In summary, the following are the contributions of this paper:

* We introduce and analyze the effectiveness of the MST in reducing the computational cost for the inference on a binary convolution layer.
* We propose a training algorithm that can reduce the MST distance and depth directly during the training process, which consequently maximizes the compression ratio for inference implementation.
* We provide the corresponding hardware acceleration for the proposed method with high throughput and better resource efficiency, compared to related works [3, 30, 31, 10].

The experiments are performed for BNNs including VGG-small [5], ResNet-18/20 [14] on the CIFAR-10 dataset [18], and ResNet-18/34 [14] on ImageNet [26]. The results show that the proposed approach gives a higher compression ratio than the previous works [32]. Compared to the baseline [33], our method reduces up to \(13.5\times\) the convolution parameters and \(5.51\times\) the bit-wise operations on the same model with an acceptable accuracy degradation. Regarding hardware acceleration, we conduct the experiments for a BNN with the same structure as the baseline [31] and apply the proposed approach. Compared to the baseline [31], hardware deployments demonstrate that BNNs with our method save \(1.8\times\) LUTs and achieve \(1.81\times\) better area efficiency while maintaining the acceleration speed and accuracy.

## 2 Related Work

Training neural networks with binary weights and activations was first introduced by Courbariaux [7] and then rapidly gained attention from the community due to the high compression ratio. However, accuracy drop is a critical problem of this direction, while compression is still highly demanded. Various techniques are proposed to further compress and improve accuracy based on the binary property.

For accuracy improvement, Rastegari [25] proposed adding a scaling factor to the convolution to reduce the quantization error. To generalize this idea aiming to enhance accuracy, the authors in [4] extended this factor dimension and enabled its training option as a learnable parameter. Besides, instead of using the Straight Through Estimation (STE) method [2], Liu [23] and Lin [20] proposed using a piece-wise linear and a training-aware approximation function, respectively, to better approximate the sign function gradient. Meanwhile, some previous works introduced both new optimization techniques and corresponding BNNs to shrink the accuracy gap entirely [22, 21, 28, 24]. For example, based on Bi-real Net in [23], authors in [22] proposed new sign and activation functions to shift and reshape the activation distribution with a new baseline neural network model that can boost the performance. Other works look at the binary characteristics from deeper perspectives to limit information loss [20, 28, 33]. For example, KL divergence is employed in [28] to minimize the difference between the distribution of binary weights and their real-valued counterparts. Xu [33] observed the low probability of large full-precision weights changing sign when binarized and proposed ReCU to revive the dead weights, leading to lower quantization error.

Binary model compression has received less attention, as only a few techniques have been proposed so far. Specifically, Lee [19] proposed an encryption algorithm to compress binary weights with fewer than 1 bit per weight, and XOR-gate networks are used for decryption during the inference task.
Similarly, Sub-bit Neural Networks (SNN) [32] proposed using fewer than \(9\) bits for each \(3\times 3\) binary kernel based on the scattered \(9\)-bit kernel distribution. Other proposed compression methods provided only hardware techniques for inference implementation. Authors in [31, 9] proposed BNN hardware architectures using weight/input reuse with different calculation orders for convolution to reduce hardware overhead. Kim [17] introduces kernel decomposition that can reduce the computational cost by sharing one base output for all channels.

## 3 Background

This section briefly describes how to construct a binary convolution layer and the related optimization methods. For a convolution layer with \(C_{in}\) input channels, \(C_{out}\) output channels, and kernel size \(M\), weights and activations are denoted by \(\mathcal{W}^{r}\in\mathbb{R}^{n}\) and \(A^{r}\in\mathbb{R}^{m}\), where \(n=C_{out}\times C_{in}\times M\times M\) and \(m=C_{in}\times W_{in}\times H_{in}\). \(W_{in}\times H_{in}\) represents the input feature map size. To simplify the computation and reduce memory storage, in a binary convolution layer, the binarization of activations \(\mathcal{A}^{b}\in\{\pm 1\}^{m}\) and weights \(\mathcal{W}^{b}\in\{\pm 1\}^{n}\) is acquired by the following sign function [25],

\[x^{b}=\mathrm{Sign}(x)=\begin{cases}+1,\;\text{if}\;x\geq 0,\\ -1,\;\text{otherwise}.\end{cases} \tag{1}\]

If \(-1\) is substituted by \(0\) to represent a negative value for \(\mathcal{A}^{b}\) and \(\mathcal{W}^{b}\), the output convolution operation of the \(i^{th}\) channel is now reformulated as in Eq. (2) [31].

\[Y_{i}=\Big{(}2\sum_{j=1}^{C_{in}\times M\times M}\mathrm{XNOR}(\mathcal{A}^{b}_{ij},\mathcal{W}^{b}_{ij})-C_{in}\times M\times M\Big{)}\odot\alpha, \tag{2}\]

where \(\sum_{1}^{C_{in}\times M\times M}\mathrm{XNOR}(\mathcal{A}^{b}_{ij},\mathcal{W}^{b}_{ij})\) is the number of set-bits (\(\mathrm{Popcount}\)) of the output channel \(i^{th}\), \(\odot\) is the element-wise multiplication, and \(C_{in}\times M\times M\) is the number of bit-wise \(\mathrm{XNOR}\) operations used to calculate one output channel. The scaling factor \(\alpha\) is added as a learnable parameter to lessen the quantization error [25]. For backward propagation, following [23], the STE [2] method is used for the approximation gradient on weights, while the piece-wise polynomial function [23] is used for gradient approximation on activations.

To further compress the BNN models, in this paper, we apply the weight reuse method [31] for the inference task. Accordingly, given the \(l^{th}\) binary convolution, as shown in Figure 3, we denote \(P_{i}\) and \(P_{j}\) as the outputs of the \(\mathrm{Popcount}\) operations of the output channels \(i^{th}\) and \(j^{th}\), respectively. All binary weights of the layer are divided into \(C_{out}\) weight sets \(\mathrm{w}^{li}_{b}\in\{\pm 1\}^{C_{in}\times M\times M}\), where \(i=1,2,...,C_{out}\), and each weight set is used to calculate an output channel. \(d_{ij}\) is the number of weight bits in \(\mathrm{w}^{li}_{b}\) that differ from \(\mathrm{w}^{lj}_{b}\), compared one-to-one respectively. \(P_{ij}\) is the sum of \(\mathrm{XNOR}\) operations with the weights of the channel \(j^{th}\) that differ from the \(i^{th}\) counterpart.

Figure 3: Weight-reuse method description. Here, the red line is counting the number of elements, the green line is summing all elements. \(P_{12}\) is \(\sum_{j=1}^{d_{12}}\mathrm{XNOR}(\mathcal{A}_{2j},\mathcal{W}_{2j})\), and \(Y_{2}\) is \(2(P_{1}-d_{12}+2P_{12})-C_{in}\times M\times M\).
To calculate the output channel \(j^{th}\) from the output channel \(i^{th}\), we use Eq. (3).

\[Y_{j}=2(P_{i}-d_{ij}+2P_{ij})-C_{in}\times M\times M. \tag{3}\]

## 4 Methodology

### Minimum Spanning Tree for BNN Acceleration

For a binary convolution layer, we construct a fully connected graph in which each vertex corresponds to an output channel. The distance between two vertices \(i\) and \(j\) is \(d_{ij}\). As shown in Eq. (3), an output channel \(j\) can be calculated via the \(\mathrm{Popcount}\) of an output channel \(i\) and the \(\mathrm{Popcount}\) of \(d_{ij}\) \(\mathrm{XNOR}\) operations. That means the number of \(\mathrm{XNOR}\) operations executed for the output channel \(j\) is the distance \(d_{ij}\). Based on this observation, to minimize the number of \(\mathrm{XNOR}\) operations used for all output channels, we need a sub-graph that includes all vertices with the minimum total edge distance, which is named an MST [29]. Returning to the example in Figure 2, Figure 4 illustrates the computation process of this convolution example with the standard and the proposed approach. Specifically, in the standard direction, each output channel needs \(9\) \(\mathrm{XNOR}\) operations, while in our direction, only the \(4^{th}\) output channel is fully computed by \(9\) \(\mathrm{XNOR}\) operations, and the \(1^{st}\), \(2^{nd}\), \(3^{rd}\) output channels reuse the output channel \(4^{th}\) with \(2\), \(3\), \(2\) \(\mathrm{XNOR}\) operations, respectively. Accordingly, the compression ratio for this example is (\(2+3+2+3\times 3\))/(\(4\times 1\times 3\times 3\)) \(=0.44\). In general, we can reduce the computational cost for a convolution layer with the following ratio:

\[\mathcal{R}=\frac{\sum_{j\neq root}^{C_{out}}d_{ij}+C_{in}\times M\times M}{C_{out}\times C_{in}\times M\times M}, \tag{4}\]

where \(i\) is the parent of the vertex \(j\). If \(j\) is the root of the MST, its output uses \(C_{in}\times M\times M\) binary weight values. Regarding parameters, because the streaming architecture described in Sec. 4.3 is fully implemented with all layers on the FPGA platform, the weights' locations can be converted to connections from inputs to the correct \(\mathrm{XNORs}\) and corresponding weight values at the circuit level. Thus, our proposed method counts only the parameters needed for the \(\mathrm{XNOR}\) logic gates, as shown in Figure 4.

**Number of bit-wise operations**. Compared with the previous approaches [31, 9], in a binary convolution layer, the proposed method can better reduce the number of \(\mathrm{XNORs}\) and parameters. Firstly, as explained in the previous section, adding connections among centers leads to less computation for the K-mean approach [31], as in Eq. (5).

\[\sum_{j}d_{ij}+C_{in}\times M\times M<\sum_{j\not\in C}d_{ij}+R\times C_{in}\times M\times M, \tag{5}\]

where \(C\) includes all centers and \(R\) is the number of centers. Furthermore, adding connections among centers also creates a spanning tree, whose computational cost is always equal to or higher than using an MST. Secondly, in [9], the authors proposed using the shortest Hamiltonian path to arrange the computation order. Because this path visits all vertices exactly once, the shortest Hamiltonian path is also a spanning tree. Therefore, the number of \(\mathrm{XNOR}\) bit-wise operations using an MST method is equal to or lower than the shortest Hamiltonian path.
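The following self-contained Python sketch (ours, under the same notation) runs Prim's algorithm on the weight-set distances, checks the reuse rule of Eq. (3) against direct \(\mathrm{Popcount}\)s, and reports the ratio of Eq. (4); the random layer with \(C_{out}\)=4 and \(C_{in}\times M\times M\)=9 is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C_out, K = 4, 9                          # output channels; K = Cin*M*M
Wb = rng.integers(0, 2, (C_out, K), dtype=np.uint8)  # binary weights
Ab = rng.integers(0, 2, K, dtype=np.uint8)           # one input window

pop  = lambda w, a: int(np.sum(~(w ^ a) & 1))        # XNOR + Popcount
dist = lambda a, b: int(np.sum(a != b))              # distance d_ij

# Prim's algorithm, O(V^2): grow the MST from vertex 0, recording the
# insertion order so parents are always computed before their children.
parent = [0] * C_out
best = [dist(Wb[0], Wb[j]) for j in range(C_out)]
parent[0], best[0] = -1, 0
in_tree, order = [False] * C_out, []
for _ in range(C_out):
    u = min((j for j in range(C_out) if not in_tree[j]),
            key=best.__getitem__)
    in_tree[u] = True
    order.append(u)
    for j in range(C_out):
        d = dist(Wb[u], Wb[j])
        if not in_tree[j] and d < best[j]:
            best[j], parent[j] = d, u

# Reuse rule of Eq. (3): P_j = P_i - d_ij + 2*P_ij, where P_ij is the
# Popcount restricted to the d_ij positions where the weights differ.
P = {}
for j in order:
    i = parent[j]
    if i < 0:
        P[j] = pop(Wb[j], Ab)            # root: fully calculated
    else:
        diff = Wb[i] != Wb[j]
        P[j] = P[i] - best[j] + 2 * pop(Wb[j][diff], Ab[diff])

assert all(P[j] == pop(Wb[j], Ab) for j in range(C_out))
ratio = (sum(best) + K) / (C_out * K)    # compression ratio, Eq. (4)
```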
**Exploration time**. The MST is faster and less complex for finding the computation order, compared to previous works [9, 31]. Indeed, the MST is found by Prim's algorithm with a complexity of \(O(V^{2})\) [29], where \(V\) is the number of vertices. In contrast, the K-mean algorithm requires \(O(V^{2}G)\) [13] time complexity, where \(G\) denotes the number of groups. Since the optimal number of groups varies across convolutions, extra time is needed to determine this number for each case. Meanwhile, the shortest Hamiltonian path has a much higher time complexity of \(O(V^{2}2^{V})\) [1], making it challenging to apply in practical settings. To estimate these approaches for practical operation, we perform experiments for the first \(6\) binary convolution layers of ResNet-20 [14] with the weight data trained by [33]; experiments on deeper layers with a larger number of output channels are infeasible (they did not complete) for the K-mean and the shortest Hamiltonian path approaches due to their longer running times. According to Table 1, the MST approach gives the lowest bit-Ops and exploration time.

| **L** | Hamiltonian Bit-Ops / Time | K-mean Bit-Ops / Time | MST Bit-Ops / Time |
| --- | --- | --- | --- |
| 1 | 1.08 M / 31.3 s | 1.33 M / 0.090 s | 1.07 M / 0.021 s |
| 2 | 1.09 M / 32.9 s | 1.30 M / 0.075 s | 1.06 M / 0.021 s |
| 3 | 1.07 M / 32.6 s | 1.30 M / 0.085 s | 1.05 M / 0.023 s |
| 4 | 1.10 M / 32.7 s | 1.33 M / 0.076 s | 1.08 M / 0.021 s |
| 5 | 1.11 M / 32.5 s | 1.32 M / 0.115 s | 1.09 M / 0.020 s |
| 6 | 1.09 M / 32.5 s | 1.33 M / 0.086 s | 1.08 M / 0.021 s |

Table 1: Bit-Ops and running time w.r.t. different approaches for reordering calculation on the first \(6\) convolution layers of ResNet-20 with \(16\) output channels.

Figure 4: Comparison between the computation process of a standard binary convolution and our method in BNN acceleration. Instead of fully computing all \(C_{in}\times M\times M\) \(\mathrm{XNOR}\) operations for each layer, our method provides an optimal order of the calculation using the MST to minimize the number of \(\mathrm{XNOR}\) operations and total parameters.

**Computation latency**. This is one of the factors affecting the resource overhead. This paper focuses on improving the architecture in [31], a streaming acceleration for BNNs. In doing so, the deeper the computation tree, the longer the latency between the time the output of the first channel and that of the last channel comes out. Consequently, the number of added registers (flip-flops) for synchronization tends to be equal or higher for a deeper tree, because all output channels must be available at the same clock for the next convolution operation. The exact number of added registers depends on the number of adders executed in a clock period, which is affected by the required frequency, design complexity, and hardware platform. Thus, to evaluate the comprehensive effects of the depth with practical implementations, the experiments related to this comparison are provided in Sec. 5.

Figure 5: a) The computation tree's depth using the K-mean cluster. b) The computation tree's depth using the shortest Hamiltonian path. c) The computation tree's depth using the MST.

Figure 5 visually describes the depth of the MST, the K-mean cluster, and the shortest Hamiltonian path approach. Accordingly, the depths of the K-mean method and the shortest Hamiltonian path are always \(1\) and \(V\), respectively. Meanwhile, the MST's depth can range from \(1\) to \(V\), depending on the graph structure. Therefore, the proposed method is always better than the shortest Hamiltonian path in all aspects (number of operations, latency, running time). Meanwhile, the K-mean method gives a better latency but is much worse for the rest. After exploring an MST in the post-training stage, to limit the impact of the tree depth on latency and resources, we continue by finding a new root that makes the depth of the new tree minimum.
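Because re-rooting a tree changes its depth but not its total distance, the new root can be found by a brute-force eccentricity scan; below is a minimal sketch, where the adjacency list of Figure 2 stands in for an explored MST.

```python
from collections import deque

def depth_from(root, adj):
    # BFS eccentricity of `root` in the tree given as an adjacency list.
    seen, frontier, depth = {root}, deque([(root, 0)]), 0
    while frontier:
        v, d = frontier.popleft()
        depth = max(depth, d)
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                frontier.append((u, d + 1))
    return depth

def min_depth_root(adj):
    # The total MST distance is root-independent, so the root with the
    # minimum eccentricity minimizes latency at no extra XNOR cost.
    return min(adj, key=lambda r: depth_from(r, adj))

adj = {1: [4], 2: [4], 3: [4], 4: [1, 2, 3]}   # the tree of Figure 2
root = min_depth_root(adj)                     # -> 4, with depth 1
```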
### Learning Optimization

According to the observation mentioned earlier, to enhance the benefits of the MST approach, we propose a simple learning method that can further reduce the MST depth and total MST distance over a BNN model. Specifically, to make the training converge, instead of directly optimizing the MST, we cluster weight sets (\(\mathrm{w}_{b}^{li}\)) into different groups and try to reduce the distance among the weight sets in each group by optimizing the distances from all weight sets to fixed centers. In doing so, given the \(l^{th}\) binary convolution layer of a BNN model, we first randomly sample a binary center subset \(\mathbb{C}_{l}\subset\{\pm 1\}^{C_{in}\times M\times M}\) with \(|\mathbb{C}_{l}|=N_{l}\). Then, we explore the nearest center in \(\mathbb{C}_{l}\) for every \(\mathrm{w}_{b}^{li}\) and save it into a dictionary \(\mathcal{C}\) using Eq. (6).

\[\mathcal{C}(\mathrm{w}_{b}^{li})=\operatorname*{argmin}_{x}\left(\sum_{i=1}^{C_{in}\times M\times M}\|x-\mathrm{w}_{b}^{li}\|\right);x\in\mathbb{C}_{l}. \tag{6}\]

On the other hand, the total distance from every weight set to its center is added to the loss function and compels the training to minimize both the cross-entropy loss and the distance among weight sets in a group (distance loss). To strike a balance between optimizing the original and the distance loss, we adjust a hyper-parameter \(\lambda\) to achieve an optimal ratio for different BNN models. Besides, a \(\gamma\) parameter serves as a hyper-parameter to control the priority of the original and additional loss during the training. This parameter varies following a specific scheduler during the training process. Particularly, the algorithm prioritizes reducing the MST distance and depth during the initial training phase and gradually shifts its focus toward accuracy at the end of the process. The new loss function is shown in Eq. (7), where \(\mathcal{L}_{0}(\mathrm{input},\mathrm{w})\) is the original loss function. Forward processes are summarized in Algorithm 1.

\[\mathcal{L}(\mathrm{input},\mathrm{w})=\mathcal{L}_{0}(\mathrm{input},\mathrm{w})+\lambda\gamma\sum_{i}\|\mathcal{C}(\mathrm{w}_{b}^{i})-\mathrm{w}_{b}^{i}\|^{2}, \tag{7}\]

It is noticeable that we use pre-trained BNN models for the learning process to avoid diverging during the distance optimization time. In this paper, results from ReCU [33] are used and considered the baseline for comparison.
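A minimal PyTorch sketch of the distance term in Eqs. (6)-(7) is given below; the fixed center set, the STE relaxation of \(\mathrm{sign}\), and the caller-side \(\lambda\gamma\) scaling follow the text, while the function and variable names are ours.

```python
import torch

def distance_loss(w_real, centers):
    # Eq. (6)/(7): pull each binarized weight set toward its nearest
    # fixed binary center. sign() is relaxed with a straight-through
    # estimator (STE) so the term stays differentiable w.r.t. w_real.
    w = w_real.flatten(1)                       # (C_out, Cin*M*M)
    w_b = (torch.sign(w) - w).detach() + w      # STE binarization
    with torch.no_grad():                       # dictionary C(w_b^li)
        d = torch.cdist(torch.sign(w), centers, p=1)
        nearest = centers[d.argmin(dim=1)]
    return ((nearest - w_b) ** 2).sum()

# In the training loop, with hyper-parameters lam and gamma:
#   loss = loss_0 + lam * gamma * sum(distance_loss(conv.weight, C[l])
#                                     for l, conv in enumerate(bin_convs))
```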
### Streaming Acceleration

Inspired by the streaming architecture in [31], this paper proposes an entire high-speed BNN architecture for BNN models applying our compression method. Specifically, the hardware architecture of a binary convolution layer is shown in Figure 6, in which the process is divided into two steps: 1) loading the input feature map into a line buffer and 2) executing the \(\mathrm{XNOR}+\mathrm{Popcount}\) operation.

**Loading input.** Given a binary convolution layer with \(M=3\) and padding size \(=1\), firstly, \(C_{in}\) input feature maps are gradually loaded into \(C_{in}\) line buffers with the size \(2\times W_{in}+3\). When \(W_{in}+2\) registers in each line buffer are filled, \(C_{in}\times 3\times 3\) input pixels are selected from the line buffers for the convolution (\(\mathrm{XNOR}+\mathrm{Popcount}\)) operation. In the standard calculation approach, the number of \(\mathrm{XNOR}\) operations used for a binary convolution is \(C_{out}\times C_{in}\times M\times M\). These operations can be separated into \(C_{out}\) sub-operations corresponding to \(C_{out}\) output channels; each sub-operation includes \(C_{in}\times M\times M\) \(\mathrm{XNOR}\) bit-operations and requires \(C_{in}\times M\times M\) corresponding input pixels and weight values. In this proposed architecture, the difference is that the number of inputs for a particular sub-operation is lower than \(C_{in}\times M\times M\) and equal to the distance between the corresponding vertex and its parent vertex in the constructed graph, except for the sub-operation of the root vertex, which requires full calculation with \(C_{in}\times M\times M\) \(\mathrm{XNOR}\)s.

**Perform convolution.** This step executes the convolution operation with the order of calculation following the MST. As shown in Figure 6, the output of a specific PE (red arrow) is reused for the next PE to reduce computational cost. The PE for the output channel \(i^{th}\) needs \(d_{ij}\) weight values and executes \(d_{ij}\) \(\mathrm{XNOR}\) operations with the corresponding \(\mathrm{Popcount}\) operation, where \(j\) is the parent vertex of the vertex \(i^{th}\). Besides, to eliminate memory for convolution weight storage, \(\mathrm{XNOR}\) operations are simplified to a NOT gate or a straight connection. To avoid timing violations, we pipeline the \(\mathrm{Popcount}\) operation by adding register lines among the adder tree to reduce computation complexity in a clock cycle. The exact number of register lines depends on the MST depth, frequency, and hardware platform. Thus, depending on practical implementations, we adjust this number to meet the timing requirements.

Figure 6: High-speed hardware accelerator for a binary convolution layer, where the \(\mathrm{XNOR}\)s are simplified with \(\mathrm{NOT}\) or simple straight connections, while computation on each PE corresponding to each output channel is executed gradually with the MST.
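The line-buffer step can be modeled in software before RTL; the following sketch (a hypothetical Python stand-in for the C/C++ model, with zero-padding omitted for brevity) streams pixels through a shift register of length \(2\times W_{in}+3\) and emits one \(3\times 3\) window per pixel once the buffer is primed.

```python
import numpy as np

def line_buffer_windows(fmap, M=3):
    # Pixels stream in row-major order through a shift register of
    # length 2*W_in + M (= 2*W_in + 3 for M = 3); once primed, every
    # new pixel yields one M x M window (padding omitted here).
    _, W = fmap.shape
    buf = np.zeros(2 * W + M, dtype=fmap.dtype)
    for idx, px in enumerate(fmap.flatten()):
        buf[1:] = buf[:-1]                   # shift all registers
        buf[0] = px                          # newest pixel enters
        r, c = divmod(idx, W)
        if r >= M - 1 and c >= M - 1:
            # tap offsets 0..M-1, W..W+M-1, 2W..2W+M-1 select the window
            taps = np.array([[buf[k * W + j] for j in range(M)]
                             for k in range(M)])
            yield taps[::-1, ::-1]           # oldest tap at top-left

img = np.arange(25).reshape(5, 5)
windows = list(line_buffer_windows(img))     # nine 3x3 windows
```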
## 5 Experiments

In this section, we conduct experiments, including software and hardware implementations, to show the effectiveness of the proposed method for BNN compression. Training experiments are executed with the popular datasets CIFAR-10 and ImageNet via PyTorch, while the hardware implementation is performed on Xilinx FPGA platforms.

### Learning and hardware implementation

**Training implementation**: Because our proposed method is independent of other BNN accuracy optimization methods, in this paper we select training results from the ReCU approach in [33] as a case study for further compression. In doing so, the VGG-small and ResNet-18/20 models are evaluated on the CIFAR-10 dataset, while ResNet-18/34 are used for ImageNet. Regarding the training details, we keep the configuration as in [33], except for a learning rate of 0.1. In addition, the \(\lambda\) value is adjusted depending on the training models, datasets, and focused objectives, while the \(\gamma\) parameter is scheduled following the learning rate. In this paper, when training with CIFAR-10, we select 1e-6 for ResNet-18, 5e-6 for ResNet-20, and 4e-6 for VGG-small. Meanwhile, for ImageNet, we select 1e-6 for ResNet-18 and 1e3 for ResNet-34.

**Hardware implementation**: The inference hardware architecture is implemented on a Xilinx FPGA platform (xczu3eg-sbva484 part). Before programming the FPGA, the design is run with the corresponding C/C++ model and simulated via Synopsys VCS. Finally, Vivado 2018.3 is used for the design synthesis and implementation.

### Ablation Studies

In this section, experimental results and discussion related to the hyper-parameters \(\lambda\) and \(\gamma\) are described.

**Effect of hyper-parameter \(\gamma\)**: As explained in Sec. 4.1, an additional loss related to the MST distance is added for the learning. The effect of this loss depends on \(\gamma\), which changes with the epoch. To facilitate the evaluation, Figure 7 provides MST distance curves for all binary convolution layers and training-validation accuracy curves over the entire learning process. In the first half of training, \(\gamma\) is still large and the training focuses on MST distance optimization, while accuracy optimization is temporarily less emphasized. Consequently, the MST distance of all binary convolutions drops significantly, especially for the deeper layers with more channels, while the training accuracy decreases sharply and the validation accuracy goes down and fluctuates strongly during this period. In the second half, the training focuses on accuracy optimization with a lower \(\gamma\) value. Thus, the training accuracy tends to increase, and the validation accuracy rises as well, with less fluctuation. Although the MST distance escalates again on all layers, it stays below its initial value, especially on the deeper layers. This is the reason why we can save computational costs with this approach.

**Effect of hyper-parameter \(\lambda\)**: The hyper-parameter \(\lambda\) scales down the effect of the total MST distance on the new loss function. Each model has a different number of layers and channels, which directly affects the total MST distance. Therefore, depending on the model, we adjust this hyper-parameter to balance the accuracy and MST distance losses. In this section, the VGG-small model and the CIFAR-10 dataset are used to estimate how \(\lambda\) impacts the number of parameters, bit-operations, MST depth, and accuracy, with \(\lambda\) ranging from 1e-7 to 1e-5. Table 2 and Figure 8 provide the number of parameters, bit-operations, MST depth, and accuracy, as well as the per-layer parameter counts, _w.r.t._ different \(\lambda\) values. Accordingly, the number of parameters, bit-operations, and accuracy tend to be lower with higher \(\lambda\) values. However, the number of parameters/bit-Ops is considerably reduced, by \(3.3\times\), while the accuracy degradation is an acceptable \(2\)% when increasing \(\lambda\) from 1e-7 to 1e-5. Besides, the MST depth also decreases when the \(\lambda\) value increases; in particular, when increasing \(\lambda\) from 1e-7 to 5e-7, the depth is reduced by \(1.67\times\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(\lambda\) & MST-depth & \begin{tabular}{c} \#Params \\ (Mbit) \\ \end{tabular} & \begin{tabular}{c} \#Bit-Ops \\ (GOps) \\ \end{tabular} & \begin{tabular}{c} Top-1 Acc. \\ mean \(\pm\) std (\%) \\ \end{tabular} \\ \hline 1e-7 & \(97.3\) & \(1.391\) & \(0.217\) & \(92.17\pm 0.07\) \\ \hline 5e-7 & \(58.3\) & \(0.998\) & \(0.184\) & \(92.09\pm 0.06\) \\ \hline 1e-6 & \(54.0\) & \(0.864\) & \(0.165\) & \(91.99\pm 0.07\) \\ \hline 5e-6 & \(44.0\) & \(0.532\) & \(0.115\) & \(91.14\pm 0.08\) \\ \hline 1e-5 & \(42.3\) & \(0.422\) & \(0.098\) & \(90.17\pm 0.14\) \\ \hline \hline \end{tabular} \end{table} Table 2: MST depth, number of parameters, bit-Ops, and accuracy _w.r.t._ different values of \(\lambda\) on the CIFAR-10 VGG-small model.

Figure 7: MST distance and top-1 % accuracy during the training process, where the binary VGG-small model and the CIFAR-10 dataset are used for the training experiment with 600 epochs.

Figure 8: Number of parameters and MST depth on each convolution layer _w.r.t._ different \(\lambda\) values.

### Comparison with SOTA methods

To demonstrate the effectiveness of the proposed method, we perform a series of training experiments with various NN models on CIFAR-10 and ImageNet, and then compare them with state-of-the-art methods.
Specifically, for each NN model, we provide two experimental results corresponding to two cases: 1) applying both the learning optimization in Sec. 4.2 and the MST reordering to the final training result (**Ours 1**); 2) applying the MST reordering directly to pre-trained BNN models (**Ours 2**). In this paper, the pre-trained BNN models we use for fine-tuning and exploring the MST are provided by Xu _et al._ [33]. In terms of acceleration, a neural network model is fully trained on CIFAR-10 and implemented on an FPGA hardware platform to compare with recent hardware architectures.

**Software training comparison**: For CIFAR-10, we conduct experiments on three BNN models: ResNet-18/20 and VGG-small. We compare our results with RAD [8], IR-Net [24], Adabin [28], ReCU [33], DSQ [11], and SNN [32]. Specifically, as shown in Table 3, when comparing **Ours 1** with the highest-accuracy methods on all models, our proposed method reduces parameters by \(8.2\times\) and bit-Ops by \(4.9\times\) with a \(0.97\)% accuracy drop on VGG-small. Meanwhile, when directly exploring the MST on the pre-trained models, **Ours 2** obtains a \(2.8\times\) parameter and \(2.6\times\) bit-Ops reduction without accuracy degradation. On ResNet-18, **Ours 1** gives a \(13.50\times\) parameter and \(5.26\times\) bit-Ops reduction with a \(1.29\)% accuracy drop, while **Ours 2** reduces parameters by \(2.6\times\) and bit-Ops by \(2.5\times\) without compromising accuracy. Additionally, on ResNet-20 we obtain a \(2.78\times\) parameter and \(2.67\times\) bit-Ops reduction with a 1% accuracy drop with **Ours 1**, and a \(2.3\times\) parameter and \(2.35\times\) bit-Ops reduction with a \(0\%\) accuracy drop with **Ours 2**. Comparing **Ours 1** with the compression method SNN [32] on all models, the proposed method obtains a better compression ratio, with a maximum reduction of \(9.0\times\) in parameters and \(2.78\times\) in bit-Ops on ResNet-18, while yielding higher accuracy, by up to \(1.4\)% on ResNet-20. Meanwhile, with **Ours 2**, we reduce parameters by up to \(1.89\times\) on VGG-small and bit-Ops by up to \(2.35\times\) on ResNet-20, while achieving higher accuracy by up to \(2.9\%\) on ResNet-20. For ImageNet, experiments are implemented on ResNet-18/34, and the comparison is performed with BNN+ [16], Bi-Real [23], XNOR++ [4], IR-Net [24], Adabin [28], ReCU [33], and SNN [32].
According to Table 4, **Ours 1** reduces parameters by \(3.2\times\) and bit-Ops by \(2.6\times\) on ResNet-18, and parameters by \(2.23\times\) and bit-Ops by \(2.27\times\) on ResNet-34, compared to the baseline [33], while the accuracy degradation is \(4\)% and \(2.2\)% on ResNet-18 and ResNet-34, respectively. **Ours 2** achieves a \(2.27\times\) parameter and \(2.34\times\) bit-Ops reduction on ResNet-18, and a \(2.2\times\) parameter and \(2.26\times\) bit-Ops reduction on ResNet-34, without accuracy drop. Compared to SNN, which has recently reported the highest compression ratio, our proposed method with **Ours 1** saves \(2.13\times\) parameters and \(1.3\times\) bit-Ops on ResNet-18, and \(1.49\times\) parameters and \(1.09\times\) bit-Ops on ResNet-34, while the trained models give higher accuracy, by \(0.7\)% and \(1.5\)% on ResNet-18 and ResNet-34, respectively. Meanwhile, **Ours 2**, without accuracy drop, saves up to \(1.51/1.47\times\) parameters and \(1.23/1.08\times\) bit-Ops on ResNet-18/34, respectively.

**Hardware implementation comparison**: Table 5 provides the hardware performance comparison with previous works using CIFAR-10 for the training. We compare the proposed architecture with FINN [30] and FINN-R [3] from Xilinx, ReBNet [10], and the design in [31]. Accordingly, the proposed accelerator takes the leading place in both speed and area efficiency (FPS/LUTs). More specifically, it performs \(1.6\times\) faster than the fastest previous work (FINN), while obtaining \(3.7\times\) higher area efficiency and slightly higher accuracy. In addition, compared to [31], our method gains over \(1.81\times\) in area efficiency. Besides, to compare our method with the K-mean approach, we implement another design from the same training result using the K-mean method. According to Table 5, our method performs \(1.25\times\) better in terms of resources and area efficiency.

\begin{table} \begin{tabular}{c|l|c|c|c} \hline \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Method} & \#Params & \#Bit-Ops & Top-1 Acc. \\ & & (Mbit) & (GOps) & (\%) \\ \hline \multirow{8}{*}{ResNet-18} & BNN+ [16] & 10.99 & 1.677 & 53.0 \\ & Bi-Real [23] & 10.99 & 1.677 & 56.4 \\ & XNOR++ [4] & 10.99 & 1.677 & 57.1 \\ & IR-Net [24] & 10.99 & 1.677 & 58.1 \\ & Adabin [28] & 10.99 & 1.677 & 63.1 \\ & ReCU [33] & 10.99 & 1.677 & 61.0 \\ & SNN [32] & 7.32 & 0.883 & 56.3 \\ & **Ours 1\(|\)2** & 3.43\(|\)4.84 & 0.636\(|\)0.716 & 57.0\(|\)61.2\({}^{*}\) \\ \hline \multirow{6}{*}{ResNet-34} & Bi-Real [23] & 21.09 & 3.526 & 62.2 \\ & IR-Net [24] & 21.09 & 3.526 & 62.9 \\ & ReCU [33] & 21.09 & 3.526 & 65.1 \\ & Adabin [28] & 21.09 & 3.526 & 66.4 \\ & SNN [32] & 14.06 & 1.696 & 61.4 \\ & **Ours 1\(|\)2** & 9.44\(|\)9.51 & 1.550\(|\)1.558 & 62.9\(|\)65.4\({}^{*}\) \\ \hline \hline \multicolumn{5}{l}{\({}^{*}\) Accuracy after fine-tuning, available at https://github.com/z-hXu/ReCU [33].} \\ \end{tabular} \end{table} Table 4: Comparison with the state-of-the-art methods on ImageNet. The bit-width is 1 for both activation and weight.
\begin{table} \begin{tabular}{c|l|c|c|c} \hline \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Method} & \#Params & \#Bit-Ops & Top-1 Acc. \\ & & (Mbit) & (GOps) & (\%) \\ \hline \multirow{7}{*}{VGG-small} & RAD [8] & 4.571 & 0.603 & 90.0 \\ & IR-Net [24] & 4.571 & 0.603 & 90.4 \\ & RBNN [20] & 4.571 & 0.603 & 91.3 \\ & Adabin [28] & 4.571 & 0.603 & 92.3 \\ & ReCU [33] & 4.571 & 0.603 & 92.4 \\ & SNN [32] & 3.047 & 0.194 & 91.0 \\ & **Ours 1\(|\)2** & 0.556\(|\)1.611 & 0.122\(|\)0.232 & 91.5\(|\)93.3\({}^{*}\) \\ \hline \multirow{7}{*}{ResNet-18} & RAD [8] & 10.99 & 0.547 & 90.5 \\ & IR-Net [24] & 10.99 & 0.547 & 91.5 \\ & RBNN [20] & 10.99 & 0.547 & 92.2 \\ & ReCU [33] & 10.99 & 0.547 & 92.8 \\ & Adabin [28] & 10.99 & 0.547 & 93.1 \\ & SNN [32] & 7.324 & 0.289 & 91.0 \\ & **Ours 1\(|\)2** & 0.814\(|\)4.293 & 0.104\(|\)0.216 & 91.6\(|\)93.2\({}^{*}\) \\ \hline \multirow{7}{*}{ResNet-20} & DSQ [11] & 0.267 & 0.040 & 84.1 \\ & IR-Net [24] & 0.267 & 0.040 & 86.5 \\ & RBNN [20] & 0.267 & 0.040 & 86.5 \\ & ReCU [33] & 0.267 & 0.040 & 87.4 \\ & Adabin [28] & 0.267 & 0.040 & 88.2 \\ & SNN [32] & 0.178 & 0.040 & 85.1 \\ & **Ours 1\(|\)2** & 0.096\(|\)0.116 & 0.015\(|\)0.017 & 86.5\(|\)88.0\({}^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with the state-of-the-art methods on CIFAR-10. The bit-width is 1 for both activation and weight.

## 6 Conclusion

In conclusion, this paper introduces a comprehensive method for compressing BNNs, which covers the entire process from learning to hardware implementation. By utilizing the minimum spanning tree (MST), we effectively reduce the computation cost of binary convolution calculation. To optimize the MST during the training stage, a learning algorithm is proposed. Moreover, we present a hardware implementation for the inference task. Our experimental results demonstrate that the proposed method significantly reduces computation costs while maintaining high accuracy, making it promising for practical applications.

## 7 Acknowledgements

This work was supported by the NRF grant funded by the Korea government (MSIT) (No. RS-2023-00207816), and in part by the IITP Grant funded by MSIT (Artificial Intelligence Innovation Hub) under Grant 2021-0-02068, (No. RS-2022-00155911, AI Convergence Innovation Human Resources Development (Kyung Hee University)), IITP No. 2019-0-01287, Evolvable Deep Learning Model Generation Platform for Edge Computing, and the ITRC support program (IITP-2023-RS-2023-00258649).
2308.15344
Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary
Although Deep Neural Networks (DNNs), such as the convolutional neural networks (CNN) and Vision Transformers (ViTs), have been successfully applied in the field of computer vision, they are demonstrated to be vulnerable to well-sought Adversarial Examples (AEs) that can easily fool the DNNs. The research in AEs has been active, and many adversarial attacks and explanations have been proposed since they were discovered in 2014. The mystery of the AE's existence is still an open question, and many studies suggest that DNN training algorithms have blind spots. The salient objects usually do not overlap with boundaries; hence, the boundaries are not the DNN model's attention. Nevertheless, recent studies show that the boundaries can dominate the behavior of the DNN models. Hence, this study aims to look at the AEs from a different perspective and proposes an imperceptible adversarial attack that systemically attacks the input image boundary for finding the AEs. The experimental results have shown that the proposed boundary attacking method effectively attacks six CNN models and the ViT using only 32% of the input image content (from the boundaries) with an average success rate (SR) of 95.2% and an average peak signal-to-noise ratio of 41.37 dB. Correlation analyses are conducted, including the relation between the adversarial boundary's width and the SR and how the adversarial boundary changes the DNN model's attention. This paper's discoveries can potentially advance the understanding of AEs and provide a different perspective on how AEs can be constructed.
Fahad Alrasheedi, Xin Zhong
2023-08-29T14:41:05Z
http://arxiv.org/abs/2308.15344v1
# Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary ###### Abstract Although Deep Neural Networks (DNNs), such as the convolutional neural networks (CNN) and Vision Transformers (ViTs), have been successfully applied in the field of computer vision, they are demonstrated to be vulnerable to well-sought Adversarial Examples (AEs) that can easily fool the DNNs. The research in AEs has been active, and many adversarial attacks and explanations have been proposed since they were discovered in 2014. The mystery of the AE's existence is still an open question, and many studies suggest that DNN training algorithms have blind spots. The salient objects usually do not overlap with boundaries; hence, the boundaries are not the DNN model's attention. Nevertheless, recent studies show that the boundaries can dominate the behavior of the DNN models. Hence, this study aims to look at the AEs from a different perspective and proposes an imperceptible adversarial attack that systemically attacks the input image boundary for finding the AEs. The experimental results have shown that the proposed boundary attacking method effectively attacks six CNN models and the ViT using only 32% of the input image content (from the boundaries) with an average success rate (SR) of 95.2% and an average peak signal-to-noise ratio of 41.37 dB. Correlation analyses are conducted, including the relation between the adversarial boundary's width and the SR and how the adversarial boundary changes the DNN model's attention. This paper's discoveries can potentially advance the understanding of AEs and provide a different perspective on how AEs can be constructed. Adversarial Attack Deep Neural Networks Image Boundary ## 1 Introduction Deep Neural Networks (DNNs) have successfully advanced many applications of computer vision such as image classification [1, 2], object detection [3, 4, 5, 6], saliency detection [7, 8, 9], facial recognition [10, 11, 12], and many others. However, such DNNs are shown to be vulnerable to adversarial attacks that can convert Clean Examples, correctly classified by a DNN model, to Adversarial Examples [13]. The conversion process is a mathematical process that tweaks the Clean Example in well-chosen directions in the feature space with some energy, usually called _epsilon_, until the DNN model produces a wrong output, and the total of the tweaks is called adversarial perturbation [14, 15]. The adversarial perturbations can be categorized into two categories (explained in Section 2): imperceptible to the human eye and perceptible; this study focuses on the first category. To deceive the human vision, the invisibility of the adversarial perturbations is one of the fundamental considerations for the imperceptible adversarial perturbations. Hence, humans should barely notice the difference between a Clean Example and its Adversarial Example. For instance, the imperceptibility of the adversarial perturbations produced by the Fast Gradient Sign Methods [14] depends on the value of _epsilon_ where higher values of epsilon lead to conspicuous differences between a Clean Example and its Adversarial Example. Hence, much research addressed the imperceptibility by introducing algorithms that iteratively increased the adversarial perturbation constrained on flipping the output of the DNN model [16, 15]. Such studies changed the whole features in the input space; however, some features were seen as needless to contribute to the conversion of the DNN model's output. 
Consequently, a line of research investigated the possibility of finding Adversarial Examples by attacking only the important features in the input space. Hence, various adversarial attacks were proposed in which a saliency map was employed to find and attack the salient features in the input space, such as the Jacobian-based Saliency Map Attack (JSMA) [19], the Maximal Jacobian-based Saliency map attack (MJSMT) [20], and the Probabilistic Jacobian-based Saliency map attack (PJSMT) [21]. Moreover, Qian _et al._ [22] proposed an adversarial attack that used an attention model to find a small continuous area, called a contributing feature region (CFR), in the input to attack; they called that perturbation an _imperceptible adversarial patch_. Such studies produced imperceptible perturbations; however, their arguments for successfully finding the Adversarial Examples were based on finding and attacking important features in the input space while ignoring the other features that were seen as unimportant to the adversarial attacks.

Unimportant features could still be critical in causing Adversarial Examples. Hence, it is possible to modify only a small area that includes unimportant features in the input space to change a Clean Example into an Adversarial Example. Here, the modifications usually came as adversarial patches [23, 17, 24] or adversarial frames [18] that covered an unimportant area and made that area contribute to finding the Adversarial Examples. Also, such patches or frames could partially cover the important features in the input space and make the DNN model classify the input adversarially [25, 23]. Moreover, Su _et al._ [26] introduced the application of Differential Evolution (DE) to find Adversarial Examples by changing a small number of features, ranging from one to five. Usually, the values of the DE-selected features contrast strongly with the neighboring features; the DE algorithm usually selected important features, but it could also choose features that did not overlap with the main object in the input. These studies usually lead to salient noise that can easily be noticed by human vision.

This study is motivated to investigate the possibility of attacking DNN models using only unimportant features in the input space while still keeping the adversarial perturbation imperceptible to human vision, as shown in Figure 1. Hence, we propose an adversarial attack that systemically attacks the input image boundaries and increases the width of the adversarial boundaries until the Adversarial Example is found. We choose the input boundaries for two reasons: first, human vision tends to focus on the center of images and ignore the boundary, which is known as the center bias [27, 28, 29]; second, although the input image boundaries usually do not overlap with the salient objects, they can be utilized to (i) improve the model performance, as discussed in some special padding techniques of DNN models [30, 31, 32], and (ii) encode absolute position information in semantic representation learning [33, 34]. Thus, image boundaries can have a desirable property for an imperceptible adversarial attack: they can be unimportant to human vision, yet dominate a deep learning model. Moreover, this study aims at studying the weakness of DNN models from different perspectives, such as correlating the boundaries to the Adversarial Examples regardless of what adversarial label the model produces.
Henceforth, our experiments focus on the un-targeted attack and the white-box setup, where the attacker has full access to the model. Our findings have the potential to help advance the understanding of Adversarial Examples and provide insights into the possible reasons behind their existence. Our contributions are threefold:

* Proposing a novel adversarial attack that systemically attacks the input image from the boundaries, where the width of the attacked boundaries increases until the Adversarial Example is found. Our adversarial attack was effective, with an average success rate of 95.2% when attacking six CNN models and the ViT while modifying less than 32% of the input image content.
* Attacking only the input image boundaries, thereby improving the imperceptibility of the adversarial perturbations, which is supported by an average peak signal-to-noise ratio (\(PSNR\)) of \(41.37\) in the experiments.
* Correlating the Adversarial Examples with the unimportant features (_i.e._, image boundaries) to provide a different perspective for understanding Adversarial Examples. We show the boundary width required to achieve a desired success rate, and how a model's attention changes when attacked by the proposed boundary adversarial examples.

Figure 1: Three Adversarial Examples crafted from the same Clean Example by three different adversarial attacks: A) Patch of [17], B) 5-pixel Frame of [18], and C) 5-pixel Boundary of our attack. The true label is golf ball, while the three attacks agree on the Adversarial Label (parachute).

The remainder of this paper is organized as follows. In Section 2, we review and categorize the related work. Section 3 discusses our adversarial attack, followed by evaluation results in Section 4. Finally, Section 5 concludes with a discussion of the evaluation and highlights some future work in this area.

## 2 Related Work

This section briefly reviews the literature by dividing the adversarial perturbations into two main categories: Imperceptible Perturbations and Perceptible Perturbations.

### 2.1 Imperceptible Perturbations

At the early stage of the field, Adversarial Examples [13] were introduced to be imperceptible to human vision. Hence, the addition of an adversarial perturbation to the Clean Example barely has a visible effect on the Clean Example. Based on their coverage, there are two types of imperceptible adversarial perturbations: fully coverage based and partially coverage based.

#### 2.1.1 Fully Coverage Based

In this group, the adversarial perturbation is made to be of the same size as the Clean Example. Hence, each feature in the Clean Example will be affected by the corresponding feature in the adversarial perturbation. Szegedy _et al._ [13] introduced an expensive adversarial attack that was based on L-BFGS and guaranteed to find the Adversarial Example. Then, Goodfellow _et al._ [14] proposed a cheap and effective adversarial attack called the Fast Gradient Sign Method (FGSM). The sign function of the gradients in the FGSM helps determine the modification directions for the input's features. The FGSM used an _epsilon_ as the quantity by which all features are equally changed along their corresponding directions; however, the FGSM did not guarantee to produce the Adversarial Example, and the attack depended on the value of _epsilon_.
That is why Kurakin _et al._ [16] proposed an iterative FGSM, where the input kept being perturbed by a small _epsilon_ until the Adversarial Example was found. Moreover, Moosavi-Dezfooli _et al._ [15] proposed an adversarial attack, DeepFool, which improved the imperceptibility of the adversarial perturbations by finding the smallest adversarial perturbation that reliably converted a Clean Example to its Adversarial Example.

#### 2.1.2 Partially Coverage Based

In this group, the adversarial perturbations change a subset of features in the input space to find the Adversarial Example. For instance, Papernot _et al._ [19] proposed the Jacobian-based Saliency map Attack (JSMA), which could find the salient features in the input that, if attacked, lead to the Adversarial Example. Different studies followed the JSMA in using the Jacobian-based saliency map in their adversarial attacks, such as the Maximal Jacobian-based Saliency map attack (MJSMT) [20] and the Probabilistic Jacobian-based Saliency map attack (PJSMT) [21]. Also, Qian _et al._ [22] proposed an adversarial attack that used network explanations to find a small, suitable semantic region in the input for adversarial modifications. Moreover, Wu _et al._ [35] proposed an adversarial attack that increased the transferability of the Adversarial Examples in the transfer-based black-box setting. They claim that Adversarial Examples crafted by a source model might not fool another model (the target model), due to over-fitting to the source model. Hence, they proposed a procedure that used an attention mechanism to extract the important features to attack.

### 2.2 Perceptible Perturbations

In this category, the perceptible perturbations occlude small areas in the Clean Example; hence, they are visually conspicuous to human vision but usually ignorable. Also, they come in different sizes and shapes. For example, Brown _et al._ [23] proposed an adversarial attack that could produce an adversarial patch which, when partially covering the Clean Example, would fool the classifier. Karmon _et al._ [17] addressed the problem of the adversarial patch's size by introducing a small noise that can be localized in the Clean Example in a way that does not overlap with the salient object in that example and still fools the DNN model. Evtimov _et al._ [36] proposed the Robust Physical World Attack (RP2), which could produce small adversarial stickers in the shape of road vandalism (such as camouflage art and graffiti). Hence, when such an adversarial sticker was physically attached to a road sign, the classifier would misclassify the sign. Sharif _et al._ [25] introduced a glasses frame that could fool a face identification model. Finally, Zajac _et al._ [18] proposed an adversarial frame that could be placed on the edges of a Clean Example and would fool both an image classifier and an object detector.

## 3 The Proposed Boundary Attack

Algorithm 1 produces an Adversarial Example by attacking the input's boundaries while keeping the remaining part intact. We explain the algorithm by decomposing it into two loops: an outer loop and an inner loop. The former increases the boundary's width, while the latter attacks the boundary through the iterative FGSM (I-FGSM). Sections 3.1 and 3.2 detail the two loops, respectively.
```
Require: m, l_m, f_θ, ε, minimum — where m is the Clean Example (assuming a
         channel-last format), l_m is the true label, f_θ is the model, ε is
         the initial epsilon, and minimum is the lower bound for the epsilon.
Ensure:  m′, which is the Adversarial Example.
 1: w ← 1                          /* initial width for edges */
 2: while w < 40 do                /* the outer loop */
 3:   m′ ← m                       /* initial Adversarial Example */
 4:   cnt ← 0
 5:   while cnt < 15 do            /* 15 is replaced with 50 for other
                                      attacks, as explained in Section 4.3.2 */
 6:     m′ ← Boundary_fgsm(m′, w, l_m, f_θ, ε)
 7:     l_m′ ← f_θ(m′)
 8:     if l_m ≠ l_m′ then
 9:       if PSNR > 40 OR ε == minimum then
10:         break outer loop
11:       end if
12:       break inner loop
13:     end if
14:     cnt ← cnt + 1
15:   end while
16:   w ← w + 1
17:   ε ← ε × 0.75
18: end while
19: return m′
```
**Algorithm 1** Boundary Attack

### 3.1 The Outer Loop

The outer loop increases the boundary width to be attacked in each iteration. It starts with a one-pixel boundary (step \(1\) in Algorithm 1) and keeps the remaining area untouched. If the current boundary width does not succeed in finding the Adversarial Example, the width will increase by one pixel in the next iteration (step \(16\)). Figure 2 visually illustrates how the boundary width increases in each iteration of the outer loop: the width is one pixel in the \(1^{st}\) iteration, two pixels in the \(2^{nd}\) iteration, and three pixels in the \(3^{rd}\) iteration. The outer loop keeps iterating until the inner loop signals a break for the outer loop (explained in Section 3.2). For imperceptibility, the outer loop starts with an epsilon value, \(\epsilon\), that magnifies the signs of the boundaries' gradients (explained in Section 3.2). Then, \(\epsilon\) is decreased by a factor of \(0.75\) in every iteration, as shown in Figure 2; however, the decrement process stops when reaching the lower bound for \(\epsilon\), called \(minimum\) in Algorithm 1 (and not shown in Figure 2).

Figure 2: An example of an input with a small shape of (8,8,3) to illustrate how the outer loop of Algorithm 1 increases the width of the borders. For example, in the \(1^{st}\) iteration, the width of the border is one pixel (in green); it becomes two pixels in the \(2^{nd}\) iteration and three pixels in the \(3^{rd}\) iteration. Also, the epsilon decreases by a factor of 0.75 in each iteration.

### 3.2 The Inner Loop

The inner loop comprises steps \(5\) to \(15\) in Algorithm 1. It perturbs \(m^{\prime}\) with the current \(\epsilon\) value using the \(Boundary_{fgsm}\) function (as shown in step \(6\)), which is a version of the \(FGSM\) attacking only the input's boundary. The details of the \(Boundary_{fgsm}\) function are given in Algorithm 2, where step \(1\) computes the model's gradients with respect to the input \(m^{\prime}\), given that the true label is \(l_{m}\); step \(2\) masks the middle part of the gradients \(g\) by reassigning them zeroes and keeps the boundary's gradients untouched; and step \(3\) forms the adversarial perturbation, where the sign is multiplied by the \(\epsilon\) value. Lastly, step \(4\) adds the adversarial perturbation to \(m^{\prime}\), which gives an output that aims at maximizing the distance between the model's output class and the true label \(l_{m}\).
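For illustration, the following is a minimal PyTorch sketch of one \(Boundary_{fgsm}\) step as described above. It is a hedged sketch rather than the authors' code: the names (`boundary_fgsm`, `m_adv`) are ours, the tensor is assumed channel-first (PyTorch's convention, unlike the channel-last format of Algorithm 1), and clamping to the valid pixel range is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def boundary_fgsm(m_adv, w, label, model, eps):
    """One Boundary_fgsm step (Algorithm 2): perturb only a w-pixel-wide
    boundary of the image in the FGSM ascent direction.
    m_adv: (1, C, H, W) float tensor; label: (1,) long tensor."""
    m_adv = m_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(m_adv), label)
    g = torch.autograd.grad(loss, m_adv)[0]   # step 1: gradients w.r.t. input
    mask = torch.ones_like(g)                 # step 2: zero the interior
    mask[..., w:-w, w:-w] = 0.0
    perturbation = eps * (g * mask).sign()    # step 3: signed boundary step
    return (m_adv + perturbation).detach()    # step 4: apply and return
```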
The upper bound of the inner loop is set to \(15\) (step \(5\) in Algorithm 1), which empirically keeps the perturbation imperceptible. The inner loop keeps perturbing the boundary using the \(Boundary_{fgsm}\) function until it finds the Adversarial Example, and it then breaks; otherwise, it reaches its upper bound with no success, which in turn makes the outer loop go to the next iteration. Moreover, when the Adversarial Example is found, the inner loop will break the outer loop if either of the two following conditions is satisfied (as shown in step \(9\) in Algorithm 1):

* When the \(PSNR\) is higher than a threshold. The threshold is set to \(40\) and chosen heuristically.
* When \(\epsilon\) reaches its lower bound, \(minimum\); this is because increasing the attacked area while \(\epsilon\) stays static never gives a higher \(PSNR\) than attacking smaller boundaries with the same \(\epsilon\).

These two conditions make the algorithm efficient at finding an Adversarial Example with the smallest perturbed area and the highest \(PSNR\).

## 4 Experiments and Results

This section discusses and analyzes our experimental setup, evaluation metrics, and quantitative results.

### 4.1 Experimental Setup

To test the proposed method, we attack the vision transformer (ViT) [37] and six widely applied CNN models, including VGG16 and VGG19 [2], ResNet50 and ResNet101 [1], and EfficientNetB1 and EfficientNetB2 [38]. As for the CNN models, we train and evaluate the models using the Imagenette dataset [39], which is a subset of the ImageNet dataset [40] and has ten classes: tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, and parachute. Imagenette has a training dataset with \(9,469\) images and a validation dataset with \(3,925\) images; the size of the images is \((224,224,3)\), and they are also upsampled to \((240,240,3)\) and \((260,260,3)\) for EfficientNetB1 and EfficientNetB2, respectively. We apply both Imagenette and Tiny ImageNet [41] to the ViT; we use ViT(1) and ViT(2) to represent the ViT trained and tested on Imagenette and Tiny ImageNet, respectively. Tiny ImageNet is also a subset of ImageNet; it has \(200\) classes, a training dataset with \(100,000\) images, and a validation dataset with \(10,000\) images. The size of images in Tiny ImageNet is \((64,64,3)\), and we upsample them to \((224,224,3)\). We replace the last 1000-output layer with a 10-output layer in the CNN models and ViT(1), and with a 200-output layer in ViT(2). We use twenty epochs to fine-tune the output layer in the CNN models, while fine-tuning the last transformer encoder and the output layer in the ViTs. The values of the input's pixels are in the ranges [0,255] and [0,1] for the CNN models and the ViTs, respectively. Hence, we heuristically choose the \(\epsilon\) value and its \(minimum\) to be \(10\) and \(3\), respectively, when attacking the CNN models, and \(0.02\) and \(0.01\) when attacking the ViTs. Table 1 shows the accuracy of the models on the validation dataset. To ensure that the misclassifications are caused by the attack itself and not the model, we only attack the samples from the validation dataset that are successfully classified.

### 4.2 The Evaluation Metrics

Four evaluation metrics are used in the experiments: Success Rate (\(SR\)), Mean Squared Error (\(MSE\)), Mean Absolute Error (\(MAE\)), and the \(PSNR\).
The \(SR\) can be defined as follows:

\[SR(f_{\theta},M)=\frac{1}{T}\sum_{m\in M}1_{\big{\{}f_{\theta}(m^{\prime})\neq l_{m}\big{\}}}, \tag{1}\]

where \(f\) is a model parameterized by \(\theta\), \(M\) represents all the images in the validation dataset that are correctly classified by \(f\), \(T\) is the number of images in \(M\), \(l_{m}\) is the true label for the Clean Example \(m\), and \(m^{\prime}\) is the perturbed example crafted from \(m\). Moreover, the computations of the \(MSE\), \(MAE\), and \(PSNR\) only consider \(N\subseteq M\), where \(N\) represents the images in \(M\) that are both correctly classified by \(f_{\theta}\) and successfully attacked by our adversarial attack. The \(MSE\), \(MAE\), and \(PSNR\) can be respectively defined by the following equations:

\[MSE(N)=\frac{1}{T}\sum_{n\in N}mean((n-n^{\prime})^{2}), \tag{2}\]

\[MAE(N)=\frac{1}{T}\sum_{n\in N}mean(|n-n^{\prime}|), \tag{3}\]

\[PSNR(N)=\frac{1}{T}\sum_{n\in N}20\log\left(\frac{max}{\sqrt{mean((n-n^{\prime})^{2})}}\right), \tag{4}\]

where \(T\) is the number of images in \(N\), \(n\) and \(n^{\prime}\) are the Clean Example and its Adversarial Example, \(max\) is the highest possible pixel value in \(N\), and the \(mean\) function is the reduced mean. The purpose of using the \(MSE\) and \(MAE\) is to show the difference between the Clean Example and its Adversarial Example from two different evaluation perspectives. Besides the \(MSE\) and \(MAE\), the \(PSNR\) is used to show the ratio between the highest possible power of a signal and the power of the noise that corrupts it. Finally, we investigate the model's attention using Grad-CAM [42] for the Clean Example and its Adversarial Example, and how the adversarial perturbations can change the model's attention significantly. Hence, this experiment can correlate the unimportant features (_i.e._, input image boundaries) with the Adversarial Examples; this correlation hopefully helps advance our understanding of the reasons behind the existence of the Adversarial Examples.
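Before turning to the results, a minimal NumPy sketch of Eqs. (2)–(4) may help make the metric computations concrete. It is illustrative only, with names of our choosing, and it assumes 8-bit images so that \(max=255\).

```python
import numpy as np

def psnr(n, n_adv, max_val=255.0):
    """Per-image PSNR term of Eq. (4); assumes the attack changed at least
    one pixel, so the MSE is nonzero."""
    mse = np.mean((n.astype(np.float64) - n_adv.astype(np.float64)) ** 2)
    return 20.0 * np.log10(max_val / np.sqrt(mse))

def dataset_metrics(pairs, max_val=255.0):
    """Average MSE, MAE, and PSNR (Eqs. (2)-(4)) over (clean, adversarial)
    pairs from N, i.e., images both correctly classified and attacked."""
    mses, maes, psnrs = [], [], []
    for n, n_adv in pairs:
        diff = n.astype(np.float64) - n_adv.astype(np.float64)
        mses.append(np.mean(diff ** 2))
        maes.append(np.mean(np.abs(diff)))
        psnrs.append(psnr(n, n_adv, max_val))
    return np.mean(mses), np.mean(maes), np.mean(psnrs)
```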
### 4.3 The Quantitative Results

#### 4.3.1 Correlating the \(SR\) with the boundary's width

Figure 3 shows the comparison of our adversarial attack against the six CNN models and the two ViTs in terms of the \(SR\) and the boundary's width. ViT(2) is the most vulnerable model: the attack starts with a high \(SR\) and achieves around 99.0% with less than a 5-pixel boundary attack on all the \(M\) dataset's images. This highest \(SR\) is attributed to the large number of classes in Tiny ImageNet, which makes it easier for our adversarial attack to confuse the ViT model. Then, the ResNet50 model shows the second highest vulnerability; ViT(1) becomes comparable to ResNet50 when attacked with a 25-pixel boundary, and both achieve an average \(SR\) of \(99.9\%\) at the 40-pixel boundary. Also, the attack's \(SR\)s against EfficientNetB1 and EfficientNetB2 converge with the other highest \(SR\)s at boundaries of 30-pixel width. Lastly, ResNet101, VGG19, and VGG16 are comparable to each other at the 20-pixel boundary, and they also achieve an average \(SR\) of \(96.9\%\) at the 40-pixel boundary.

Moreover, Figure 4 shows the relation between different widths of the adversarial boundaries and the \(SR\). The purpose of this analysis is to show how likely the attacker is to produce an Adversarial Example for a given perturbation amount. For example, the attacker only needs a 5-pixel boundary (\(8\%\) of the image content) to achieve a success rate of \(55.8\%\), a 10-pixel boundary (\(17\%\) of the image content) to achieve an average \(SR\) of \(81.6\%\), a 20-pixel boundary (\(32\%\) of the image content) to achieve an average \(SR\) of \(95.2\%\), a 30-pixel boundary (\(46\%\) of the image content) to achieve an average \(SR\) of \(97.5\%\), and a 40-pixel boundary (\(58\%\) of the image content) to achieve an average \(SR\) of \(98.7\%\). It is evident that as we increase the width of the boundary, the success rate increases. Hence, the attacker can increase the width of the boundary to increase the possibility of getting an Adversarial Example.

\begin{table} \begin{tabular}{|l|l|} \hline **Model** & **Accuracy** \\ \hline VGG16 & 0.9156 \\ \hline VGG19 & 0.9177 \\ \hline ResNet50 & 0.957 \\ \hline ResNet101 & 0.96 \\ \hline EfficientNetB1 & 0.977 \\ \hline EfficientNetB2 & 0.99 \\ \hline ViT(1) & 0.986 \\ \hline ViT(2) & 0.74 \\ \hline \end{tabular} \end{table} Table 1: The accuracy of the six CNN models on the validation dataset of Imagenette. The accuracies of ViT(1) and ViT(2) are on the validation datasets of Imagenette and Tiny ImageNet, respectively.

Figure 3: Our attack's \(SR\)s when attacking six different models and two ViTs, where ViT(1) is trained on Imagenette while ViT(2) is the same transformer trained on Tiny ImageNet.

Figure 4: The relationship between the boundary's width and the Success Rate (\(SR\)). The \(SR\) in each bar is an average of the \(SR\)s over all six models and the two ViTs.

#### 4.3.2 Comparative Study

Since our adversarial attack integrates the two concepts of attacking unimportant features (_e.g._, boundaries) and imperceptibility, it is hard to directly compare our work with the related work in Section 2. However, we modify Algorithm 1 to re-implement three different attacks: the adversarial patch of [17], the adversarial frame of [18], and attacking the whole example by I-FGSM [16] (henceforth, we call it the \(Whole\) attack). Then, we compare our attack with these three attacks using the evaluation metrics mentioned in Subsection 4.2. The modifications are as follows. First, at the beginning of Algorithm 1, we set the patch's size to \((50,50,3)\) and place it at the top-left corner of the Clean Example \(m\); the frame's width is set to 5 pixels and occludes the original boundaries; the patch and frame are initialized randomly. Secondly, the outer loop is removed, and only the inner loop of Algorithm 1 is kept. The upper bound for the inner loop is set to 50 (more than three times our attack's upper bound); the \(\epsilon\) value is set to be static (_i.e._, equal to the lower bound used in our attack; for example, \(3\) and \(0.01\) for the CNN models and ViTs, respectively). Lastly, step \(2\) in Algorithm 2 masks the gradients that are outside the patch and frame areas, while no mask is used in the \(Whole\) attack, since I-FGSM attacks the whole image.

Table 2 shows the comparison of our attack with the three attacks (adversarial patch, adversarial frame, and the \(Whole\) attack) when attacking the six CNN models. Also, Table 3 shows the same comparison when attacking the ViTs. In both tables, we can observe that the perturbations (differences between the Clean Examples and Adversarial Examples in terms of the \(MSE\) and \(MAE\)) in our attack are significantly lower than in all the other attacks against the six models and the ViTs.
Moreover, the \(PSNR\)s are shown to be higher for our attack compared to the other attacks against the CNN models and ViTs, except that the Frame attack on the VGGs yields higher \(PSNR\)s than ours; however, the average \(PSNR\) for our attack is \(41.37\), which is higher than that of all the other attacks. Also, the \(SR\)s show that our attack is comparable to the \(Whole\) attack and much higher than the other two attacks (adversarial patch and frame). Notably, for a fair comparison, we set the upper bound of the inner loop in Algorithm 1 for the three attacks (\(Whole\) attack, adversarial patch, and frame) to more than three times that of our attack (_i.e._, \(50\) instead of \(15\)); the \(SR\)s for the three attacks would increase further if we raised their upper bounds above \(50\). Finally, if we compare the \(SR\)s for all the attacks in Table 2 and Table 3 against the six CNN models and the ViTs, respectively, we can observe that the ViTs are more vulnerable than the CNN models for all the attacks.

The goal of showing our attack with various boundary widths for each sample is to investigate whether the model's attention for a successful attack changes significantly as the boundary's width increases. For example, the second row in Figure 6 represents a successful five-pixel boundary attack; the model's attention for the unsuccessful attack in the \(2^{nd}\) column (one-pixel boundary attack) still signifies features from the main object in the example (church), similar to the model's attention with no attack in the \(1^{st}\) column, with a small change in the size and shape of the attention. However, the five-pixel boundary attack on the same sample, as in the \(3^{rd}\) column, is successful and makes the model's attention totally different, with only small features from the main object included in the attention; the attention also changes as the boundary's width increases, but the model still gives the same adversarial label for the different boundary-width attacks against this specific sample. Similarly, for the sample in the \(3^{rd}\) row of Figure 6, the model's attention does not significantly change when our attacks (i.e., one-pixel and five-pixel boundary) are not successful; however, our attack against this sample becomes successful when using a ten-pixel boundary (as in the \(4^{th}\) column), and the attention changes significantly. Also, increasing the width of the boundary for this sample after a successful attack changes the model's attention, and the model gives different adversarial labels (i.e., the \(5^{th}\) and \(6^{th}\) columns in the last row of Figure 6). Lastly, we show the model's attentions for these three samples when attacked by the adversarial patch and frame, as in the \(7^{th}\) and \(8^{th}\) columns of Figure 6. It is evident that in the second and third samples, the patches and frames become the model's attention, and the adversarial labels are different from those of our attack. The patch and frame used to attack the first sample are also successful, and the model gives the same adversarial label as our attack; they change the model's attention, but the patch and frame themselves are not the model's attention. Interestingly, we observe that if the sample does not need a strong perturbation, as in the case of the \(1^{st}\) sample, all the attacks agree on the model's attention and the adversarial label.
But, if the sample needs a strong adversarial perturbation, as in the \(2^{nd}\) and \(3^{rd}\) samples, the patch and frame become the attention of the model and do not change the image context, while our boundary attacks can turn on some areas that are not on the boundary and make them significant. We conjecture that our attack perturbs the image context and changes the relationships/contrast between different parts of the image, thereby indirectly forcing the model's attention to certain areas. We will further investigate this finding in future work.

Figure 5: Three samples of successful attacks. The \(1^{st}\) row is a sample that can be attacked by a one-pixel boundary attack, as shown in column (C), while attacks on the same sample by the patch and frame are shown in columns (A) and (B) of the same row, respectively. Similarly, the \(2^{nd}\) and \(3^{rd}\) rows represent the five- and ten-pixel boundary attacks, respectively. The labels are explained in the model's attentions in Figure 6.

## 5 Conclusion and Future Research Directions

This paper proposes an imperceptible adversarial attack from the input image boundaries. The proposed adversarial attack is shown to be effective when attacking six different CNN models and the Vision Transformer (ViT), with an average success rate of 95.2% when modifying only 32% of the content, mainly from the input boundaries, which are usually ignored by human vision and do not overlap with the salient objects in the input. Our experimental results of attacking from the image boundaries align with related discoveries in the literature showing that the performance of CNN models can be dominated by the boundaries. We show that cutting-edge ViT models are also vulnerable to our proposed boundary adversarial attacks. We find that the ViT model is sensitive to the adversarial perturbations; it is more vulnerable to our attack than the CNN models. We provide some correlation analysis, such as showing how much boundary width is required to achieve a desired success rate and how the model's attention changes when attacked by our adversarial attack with different boundary widths. Our findings can potentially advance the understanding of Adversarial Examples and provide a different perspective on how Adversarial Examples can be constructed. Lastly, one research direction to further this study is to determine which boundary of the input image is more dominant in the attacks; we question whether equal adversarial perturbations are needed on all the input boundaries. One possible solution for such an attack is to combine the saliency maps or Jacobian values of the boundaries in the attacks.
2303.07432
End-to-end Deformable Attention Graph Neural Network for Single-view Liver Mesh Reconstruction
Intensity modulated radiotherapy (IMRT) is one of the most common modalities for treating cancer patients. One of the biggest challenges is precise treatment delivery that accounts for varying motion patterns originating from free-breathing. Currently, image-guided solutions for IMRT are limited to 2D guidance due to the complexity of 3D tracking solutions. We propose a novel end-to-end attention graph neural network model that generates in real-time a triangular shape of the liver based on a reference segmentation obtained at the preoperative phase and a 2D MRI coronal slice taken during the treatment. Graph neural networks work directly with graph data and can capture hidden patterns in non-Euclidean domains. Furthermore, contrary to existing methods, it produces the shape entirely in a mesh structure and correctly infers mesh shape and position based on a surrogate image. We define two on-the-fly approaches to establish the correspondence of liver mesh vertices with 2D images obtained during treatment. Furthermore, we introduce a novel task-specific identity loss to constrain the deformation of the liver in the graph neural network to limit phenomena such as flying vertices or mesh holes. The proposed method achieves results with an average error of 3.06 +- 0.7 mm and a Chamfer distance with L2 norm of 63.14 +- 27.28.
Matej Gazda, Peter Drotar, Liset Vazquez Romaguera, Samuel Kadoury
2023-03-13T19:15:49Z
http://arxiv.org/abs/2303.07432v1
# End-to-End Deformable Attention Graph Neural Network for Single-View Liver Mesh Reconstruction

###### Abstract

Intensity modulated radiotherapy (IMRT) is one of the most common modalities for treating cancer patients. One of the biggest challenges is precise treatment delivery that accounts for varying motion patterns originating from free-breathing. Currently, image-guided solutions for IMRT are limited to 2D guidance due to the complexity of 3D tracking solutions. We propose a novel end-to-end attention graph neural network model that generates in real-time a triangular shape of the liver based on a reference segmentation obtained at the pre-operative phase and a 2D MRI coronal slice taken during the treatment. Graph neural networks work directly with graph data and can capture hidden patterns in non-Euclidean domains. Furthermore, contrary to existing methods, it produces the shape entirely in a mesh structure and correctly infers mesh shape and position based on a surrogate image. We define two on-the-fly approaches to establish the correspondence of liver mesh vertices with 2D images obtained during treatment. Furthermore, we introduce a novel task-specific identity loss to constrain the deformation of the liver in the graph neural network to limit phenomena such as flying vertices or mesh holes. The proposed method achieves results with an average error of \(3.06\pm 0.7\) mm and a Chamfer distance with L2 norm of \(63.14\pm 27.28\).

Matej Gazda

Footnote: \({}^{\star}\) Matej Gazda performed the work while at Ecole Polytechnique de Montreal, Montreal, QC H3C 3A7, Canada. \({}^{\dagger}\) Polytechnique Montreal, Montreal, QC H3C 3A7, Canada. \({}^{\dagger}\) Intelligent Information Systems Laboratory, Technical University of Kosice, Kosice 040 12, Slovakia.

_Keywords_: Motion modeling, 3D mesh inference, Attention Graph Neural Network, Liver cancer radiotherapy

## 1 Introduction

One of the most commonly used radiotherapy treatments is intensity modulated radiotherapy (IMRT), which consists of delivering tightly targeted radiation beams from outside the body. However, it faces complex challenges in the presence of large motion displacements, posing a great risk of administering dose to healthy tissue, such as in the liver [1]. Consequently, respiratory motion compensation is an important part of radiotherapy and other non-invasive interventions [2]. To avoid unnecessary damage due to organ displacement caused by respiratory motion, the treated organ must be located and imaged at all times. Unfortunately, image acquisition during treatment with IMRT is limited to 2D cine slices due to time constraints, resulting in a lack of out-of-plane information for tumor targeting. Real-time 3D motion tracking of organs would provide the necessary tools to accurately follow tumor targets.

Several methods based on convolutional neural networks (CNNs) have been proposed to tackle the problem of modeling 3D data based on 2D signals. Mezheritsky et al. [3] proposed a method that warps the reference volume with the output of a convolutional autoencoder, thus recovering 3D deformation fields with only a pre-treatment volume and a single live 2D image. Cerrolaza et al. [4] proposed a 3D ultrasound fetal skull reconstruction method based on standard 2D ultrasound views of the head using a reconstructive conditional variational autoencoder. Girdhar et al. [5] investigated a number of tasks, including voxel prediction from 2D images and 3D model retrieval.
However, methods based on CNNs achieved only partial success, since they rely on fixed-size inputs. The volumes might have different sizes across scans, body types, and/or machines, and the requirement of reshaping the volumes might result in information loss. Recent advances in graph neural networks sparked progress in many domains, including medical imaging [6]. Lu et al. [7] leveraged a dynamic spatio-temporal graph neural network for cardiac motion analysis. Graph neural networks operate on graph objects, which contrasts with representations obtained by CNNs that might lose important surface details. Meshes have more desirable properties for many applications because they are lightweight, carry more shape-modeling detail, and are better suited for simulating deformations [6].

In this work, we propose a Deformable Attention Graph Neural Network - Single View Liver Reconstruction (DAGNN-SVLR) method, which is an end-to-end trainable model that infers a 3D liver mesh structure at any time during treatment, using a reference triangulated mesh obtained from the segmentation of a baseline 3D MRI volume and a single 2D MRI slice captured in real-time. Moreover, we present a new identity loss tailored for this specific task and empirically show that its combination with the Chamfer distance favors good mesh properties and improves the performance of the predictive model. A high-level representation of the proposed approach is shown in Fig. 1.

## 2 DAGNN-SVLR

The DAGNN-SVLR model leverages a triangulated liver mesh segmented from the pre-operative T2-w MRI volume and a 2D cine-MR slice taken in real-time during treatment. The model learns a function \(f(.)\) that predicts the deformation of a reference mesh \(M_{r}\) based on the surrogate 2D MRI slice at time \(t\), denoted \(I_{t}\), calculating the mesh at time \(t\) as \(M_{t}=f(M_{r},I_{t})\).

A major component of the DAGNN-SVLR model is determining correspondences between the reference mesh and a surrogate image. Since GNNs work directly on graphs and we cannot simply merge the image with the graph, we explored two solutions, as illustrated in Fig. 1.

Figure 1: High-level representation of the DAGNN-SVLR model. The Attention Graph Neural Network deforms the reference mesh from the preoperative phase based on features extracted from supplied 2D surrogate images. We proposed two approaches to feature extraction: without feature pooling and with feature pooling.

(1) As a first option, without feature pooling, we utilize a residual convolutional network as a feature extractor. The output is a latent representation of the image in the form of a one-dimensional vector of size \(dim=128\). The ResNet18 [8] network was selected as a compromise between accuracy and speed.

(2) As a second option, we propose a feature pooling approach. We use a ResNet18 network with added padding, so that each subsequent layer of the network does not downsample the feature maps. Consequently, the feature maps produced by this ResNet have a shape identical to the input image. In parallel to the feature extraction, the index coordinates of the reference mesh are calculated, so that each vertex of the mesh can be associated with its particular position in the image. Afterward, direct \(3\times 3\) neighborhoods are extracted from each feature map, thereby yielding nine features per feature map for each node in the reference mesh.

Once feature extraction is complete, the reference mesh features, concatenated with the features extracted from the image, are passed through an attention graph neural network to produce the predicted mesh surface.
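To make the feature pooling step concrete, the following is a minimal PyTorch sketch of the \(3\times 3\) neighborhood extraction. It is an illustrative sketch, not the authors' implementation: the names (`pool_vertex_features`, `vertex_ij`) are ours, and the projected vertex positions are assumed to be given as integer pixel indices.

```python
import torch
import torch.nn.functional as F

def pool_vertex_features(feature_maps, vertex_ij):
    """Gather a 3x3 neighborhood around each projected vertex from every
    feature map. feature_maps: (C, H, W); vertex_ij: (N, 2) long tensor of
    row/col indices of each mesh vertex in image space. Returns (N, C*9)."""
    C, H, W = feature_maps.shape
    # Pad by 1 so 3x3 windows are defined at the image border.
    fm = F.pad(feature_maps, (1, 1, 1, 1))            # (C, H+2, W+2)
    patches = fm.unfold(1, 3, 1).unfold(2, 3, 1)      # (C, H, W, 3, 3)
    i, j = vertex_ij[:, 0], vertex_ij[:, 1]
    return patches[:, i, j].permute(1, 0, 2, 3).reshape(len(vertex_ij), -1)
```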
### 2.1 Graph Convolutional Neural Network

A Graph Convolutional Neural Network (GCNN) is a multilayer neural network that operates directly on graphs. It induces vertex embeddings based on the properties of their local neighborhoods and of the vertices themselves. Let \(G=(V,E)\) be a triangulated surface of the liver volume, where \(V=\{1,2,\ldots,N\}\) and \(E\subseteq|V|\times|V|\) represent the set of vertices and edges, respectively. Let \(X\in\mathbb{R}^{N\times m}\) be a matrix containing the \(m\) features of all \(N\) vertices. We denote \(\mathcal{N}_{i}=\{j:(i,j)\in E\}\cup\{i\}\) as the neighbor set of vertex \(i\). Then, \(H=\{\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{N}\}\) is a set of input vertex features, where the feature vector \(\mathbf{h}_{i}\) is associated with a vertex \(i\in V\).

Kipf et al. [9] proposed a spatial graph convolution layer (GCN), where the features from neighboring vertices are aggregated with fixed weights, and one vertex embedding is calculated as:

\[h_{u}=\Phi\Big(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}c_{uv}\,\psi(\mathbf{x}_{v})\Big), \tag{1}\]

where \(\Phi\) and \(\psi\) are learnable functions, \(\mathbf{x}_{u}\) and \(\mathbf{x}_{v}\) are the representations of vertices \(u\) and \(v\), \(c_{uv}\) specifies the importance of vertex \(v\) to vertex \(u\)'s representation, \(\mathcal{N}_{u}\) is a local neighborhood, and \(\bigoplus\) is an aggregation function, such as a summation or mean. Graph Attention Layers (GAT) [10] extend this work by incorporating a self-attention mechanism that computes the importance coefficients \(a_{uv}\):

\[h_{u}=\Phi\Big(\mathbf{x}_{u},\bigoplus_{v\in\mathcal{N}_{u}}a(\mathbf{x}_{u},\mathbf{x}_{v})\,\psi(\mathbf{x}_{v})\Big). \tag{2}\]

The attention mechanism \(a\) is a single-layer feed-forward neural network parameterized by a weight vector \(\mathbf{a}\in\mathbb{R}^{2F^{\prime}}\), where, following [10]:

\[a_{ij}=\operatorname*{softmax}_{j}\big(\mathrm{LeakyReLU}(\mathbf{a}^{T}[\mathbf{W}\mathbf{x}_{i}\,\|\,\mathbf{W}\mathbf{x}_{j}])\big), \tag{3}\]

with \(\mathbf{W}\) a shared learnable weight matrix and \(\|\) denoting concatenation.

We employ a neural network consisting of seven alternating GAT and normalization layers. Each GAT layer receives \(128\) input features and uses two heads, whose outputs are summed. Following the proposal in [11], we aggregate the features with a combination of two aggregation functions: summation and mean.

### 2.2 Loss functions

We define three different loss functions to constrain the properties of the output liver meshes. Among the most critical properties of the final meshes are smoothness, closedness, and the absence of so-called flying vertices.

#### Chamfer distance

The Chamfer distance measures the distance of each vertex to the other set, and it serves as a constraint on the location of the mesh vertices. The function is continuous and piecewise smooth, and is defined as:

\[\mathcal{L}_{CD}(P,Q)=\sum_{p}\min_{q}||p-q||_{2}^{2}+\sum_{q}\min_{p}||p-q||_{2}^{2}, \tag{4}\]

where \(P\) and \(Q\) are the predicted and ground truth liver meshes and \(p\) and \(q\) are single points. Contrary to its name, it is not a true distance function, since it does not satisfy the triangle inequality.
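As a quick illustration, here is a minimal PyTorch sketch of Eq. (4) on point sets taken from the two meshes; the function name `chamfer_l2` is ours, and the meshes are assumed to be given as \((N,3)\) vertex tensors.

```python
import torch

def chamfer_l2(p, q):
    """Symmetric Chamfer distance of Eq. (4) between two point sets
    p: (N, 3) and q: (M, 3), using squared L2 distances."""
    d2 = torch.cdist(p, q) ** 2            # (N, M) pairwise squared distances
    return d2.min(dim=1).values.sum() + d2.min(dim=0).values.sum()
```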
[12] introduced a training objective that operates on a local surface defined by vertices sampled by a differentiable sampling procedure. Given a facet defined by 3 vertices \(\{v_{1},v_{2},v_{3}\}\in\mathbb{R}^{3}\), uniform sampling is achieved by:

\[s=(1-\sqrt{r_{1}})v_{1}+(1-r_{2})\sqrt{r_{1}}v_{2}+\sqrt{r_{1}}r_{2}v_{3} \tag{5}\]

where \(s\) is a point inside the surface defined by the facet and \(r_{1},r_{2}\sim U[0,1]\). The loss function is defined as:

\[\mathcal{L}_{SCD}(P,Q)=\sum_{p\in\hat{S}}\min_{f\in Q}dist(p,f)+\sum_{q\in S}\min_{\hat{f}\in P}dist(q,\hat{f}) \tag{6}\]

where \(P\) is the predicted mesh, \(Q\) is the ground truth mesh, \(\hat{S}\) and \(S\) are the sampled points of the predicted mesh and ground truth mesh, \(f\) and \(\hat{f}\) are facets of \(Q\) and \(P\), respectively, and \(dist\) is a function computing the distance between a point and a triangular facet.

#### Identity loss

The identity loss penalizes substantial changes in the vertex positions if the surrogate image represents the actual state of the mesh. Given a mesh \(M_{t}\), the surrogate signal \(I_{t}\) at time \(t\), and a model that infers the current mesh \(\hat{M}_{t}\) from the surrogate image and a reference mesh \(M_{r}\), \(\hat{M}_{t}=f(M_{r},I_{t})\), we define the identity loss as:

\[\mathcal{L}_{I}=\mathcal{L}_{CD}(M_{t},\hat{M}_{t}) \tag{7}\]

where \(\mathcal{L}_{CD}\) is the Chamfer loss or the sampled Chamfer loss. Finally, DAGNN calculates the final loss as \(\mathcal{L}=\mathcal{L}_{SCD}+\alpha\mathcal{L}_{I}\), where \(\alpha\) is a hyperparameter.

## 3 Results

We evaluated the proposed approach using a 4D-MRI liver dataset acquired from \(25\) volunteers [13]. The volume dimensions were \(176\times 176\times 32\) with a pixel spacing of \(1.7\times 1.7\) mm\({}^{2}\) and a slice thickness of \(3.5\) mm. Reference meshes were created as a closed surface of the liver segmentation from each subject's inhale phase. Ground truth meshes of the other temporal sequences were then acquired using the deformation field calculated by Elastix deformable registration [14]. Volumes were resized to \(64\times 64\times 32\) for the registration due to computational complexity. The number of vertices per mesh ranged from \(1300\) to \(2000\). When the sampling loss was used, an empirically determined value of \(1000\) sampled points was chosen. As preprocessing, we centered the meshes and normalized their scale to the interval \((-1,1)\). For validation purposes, we performed a 10-fold cross-validation. The model was trained with a batch size of one, gradient accumulation over five steps, and the Adam optimizer with a learning rate of \(10^{-5}\). We set the weight \(\alpha\) of the identity loss \(\mathcal{L}_{I}\) to \(0.05\).

Visualizations of sample predictions and their ground truth can be seen in Fig. 2.

Figure 2: Visualization of the signed distance (in mm) from the predicted to the ground truth mesh for two subjects.

It is clear that the inferred shapes comply with the desired properties, exhibiting complete and smooth surfaces. Additionally, in Fig. 3 we show ground truth and predicted delineations obtained from the inferred mesh in the axial, sagittal, and coronal planes. The average error and average Chamfer distance over the entire test sequence are presented in Table 1. We measured performance using the Chamfer distance with the L2 norm and errors calculated as the unsigned distance between the two meshes. The results of our model are divided into two main categories, with and without feature pooling. To compare our method with a state-of-the-art approach, we chose the Node2Vec [15] method, which supports working with graph data.
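For reference, the plain Chamfer term of Equation 4 admits a very small implementation. The sketch below is illustrative only, assuming dense \((N,3)\) vertex tensors; it is not the authors' code.

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance of Eq. (4) between two vertex sets.

    p: (Np, 3) predicted vertices; q: (Nq, 3) ground-truth vertices.
    A dense O(Np*Nq) sketch; large meshes would need batching or a KD-tree.
    """
    d = torch.cdist(p, q) ** 2  # pairwise squared L2 distances, shape (Np, Nq)
    # Nearest neighbor in each direction, then sum both terms of Eq. (4).
    return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()
```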
Node2Vec was trained for \(100\) epochs and the resulting embeddings were concatenated with the extracted features, as in our model. Feature pooling, as used in this study, slightly outperforms the utilization of a single feature vector with respect to average error in all cases. This confirms our hypothesis that leveraging the sampling loss improves performance for both feature extraction methods. The average error for feature pooling decreased from \(3.44\pm 0.91\) mm to \(3.23\pm 0.64\) mm, and for the approach without FP from \(3.46\pm 0.70\) mm to \(3.26\pm 0.78\) mm. Adding the identity loss during training improves the results from \(3.23\pm 0.64\) mm and \(3.26\pm 0.78\) mm to \(3.05\pm 0.75\) mm and \(3.06\pm 0.7\) mm, respectively, representing performance improvements of \(5.6\)% and \(6.2\)%. Interestingly, the test results for the Chamfer distance are higher when the sampling loss was used without the identity loss for feature pooling. We hypothesize that this is because the model was trained using a loss that samples points on the surface, whereas the conventional Chamfer loss was used for testing. This error in Chamfer distance diminishes when the identity loss is added.

Next, we present the average error in millimeters for three subjects over time in Fig. 4. The plot of the subject in the first row has spikes that introduce errors exceeding \(5\) mm. The other two subjects have very similar errors over time, oscillating around \(2.5\) mm without any outliers. The model repeatedly achieves errors as low as \(1.8\) mm.

## 4 Conclusion

We presented a novel approach for single-view liver surface reconstruction from a surrogate 2D signal based on the combination of an attention graph neural network with a fully convolutional neural network. Our model was successful in generating full meshes with proper topology and position, with an inference time of 0.002 seconds. We have shown that the model benefits considerably from the proposed identity loss combined with the feature pooling process. Several potential extensions will be addressed in future work, such as liver motion prediction in the graph domain or using a sequence of surrogate images for liver reconstruction instead of a single view.

\begin{table} \begin{tabular}{|c|c c|} \hline **Method** & **Chamfer distance with L2 norm** & **Avg. error (mm)** \\ \hline Node2Vec [15] & \(79.45\pm 48.67\) & \(5.0\pm 3.7\) \\ FP + \(\mathcal{L}_{CD}\) & \(62.85\pm 25.18\) & \(3.44\pm 0.91\) \\ FP + \(\mathcal{L}_{SCD}\) & \(82.25\pm 24.40\) & \(3.23\pm 0.64\) \\ FP + \(\mathcal{L}_{SCD}\) + \(\mathcal{L}_{I}\) & \(63.14\pm 27.28\) & \(\mathbf{3.05\pm 0.75}\) \\ No FP + \(\mathcal{L}_{CD}\) & \(67.226\pm 5.57\) & \(3.46\pm 0.70\) \\ No FP + \(\mathcal{L}_{SCD}\) & \(66.117\pm 26.48\) & \(3.26\pm 0.78\) \\ No FP + \(\mathcal{L}_{SCD}\) + \(\mathcal{L}_{I}\) & \(\mathbf{61.34\pm 25.63}\) & \(3.06\pm 0.7\) \\ \hline \end{tabular} \end{table} Table 1: Prediction results for different loss functions, with and without feature pooling (FP).

Figure 4: Average error (in mm) over free-breathing sequences for three selected subjects.

Figure 3: Comparison of ground truth (green) and predicted (red) segmentations. Each row depicts different volunteer acquisitions.

## 5 Compliance with Ethical Standards

This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the local Institutional Review Board.

## 6 Acknowledgments

This research has been funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
The authors have no relevant financial or non-financial interests to disclose.
2303.11536
Indeterminate Probability Neural Network
We propose a new general model called IPNN - Indeterminate Probability Neural Network - which combines neural networks and probability theory. In classical probability theory, the calculation of probability is based on the occurrence of events, which is hardly used in current neural networks. In this paper, we propose a new general probability theory, which is an extension of classical probability theory and makes classical probability theory a special case of our theory. Besides, for our proposed neural network framework, the outputs of the neural network are defined as probability events, and based on the statistical analysis of these events, the inference model for the classification task is deduced. IPNN shows a new property: it can perform unsupervised clustering while doing classification. Besides, IPNN is capable of very large-scale classification with a very small neural network; e.g., a model with 100 output nodes can classify 10 billion categories. Theoretical advantages are reflected in the experimental results.
Tao Yang, Chuang Liu, Xiaofeng Ma, Weijia Lu, Ning Wu, Bingyang Li, Zhifei Yang, Peng Liu, Lin Sun, Xiaodong Zhang, Can Zhang
2023-03-21T01:57:40Z
http://arxiv.org/abs/2303.11536v1
# Indeterminate Probability Neural Network

###### Abstract

We propose a new general model called IPNN - Indeterminate Probability Neural Network - which combines neural networks and probability theory. In classical probability theory, the calculation of probability is based on the occurrence of events, which is hardly used in current neural networks. In this paper, we propose a new general probability theory, which is an extension of classical probability theory and makes classical probability theory a special case of our theory. Besides, for our proposed neural network framework, the outputs of the neural network are defined as probability events, and based on the statistical analysis of these events, the inference model for the classification task is deduced. IPNN shows a new property: it can perform unsupervised clustering while doing classification. Besides, IPNN is capable of very large-scale classification with a very small neural network; e.g., a model with 100 output nodes can classify 10 billion categories. Theoretical advantages are reflected in the experimental results. (Source code: [https://github.com/Starfruit007/ipnn](https://github.com/Starfruit007/ipnn))

## 1 Introduction

Humans can distinguish at least 30,000 basic object categories (Biederman, 1987). Classifying all of these would face two challenges: it requires a huge number of well-labeled images, and a model with softmax over such large-scale datasets is computationally expensive. Zero-Shot Learning - ZSL (Lampert et al., 2009; Fu et al., 2018) provides an idea for solving the first problem; it is an attribute-based classification method. ZSL performs object detection based on a human-specified high-level description of the target object instead of training images, using attributes like shape, color or even geographic information. But labeling attributes still requires great effort and expert experience. Hierarchical softmax can solve the computational-cost problem, but its performance degrades as the number of classes increases (Mohammed and Umaashankar, 2018).

Probability theory has achieved great successes not only in the classical area, such as the Naive Bayesian method (Cao, 2010), but also in deep neural networks (VAE (Kingma and Welling, 2014), ZSL, etc.) over the last years. However, both have their shortcomings: classical probability cannot extract features from samples, and for neural networks the extracted features are usually abstract and cannot be directly used for numerical probability calculation. What if we combine them? There are already some combinations of neural networks and Bayesian approaches, such as probability distribution recognition (Su and Chou, 2006; Kocadagli and Askgil, 2014) and Bayesian approaches used to improve the accuracy of neural modeling (Morales and Yu, 2021). However, current combinations do not take advantage of the ZSL method. We propose an approach to solve the mentioned problems, and our contributions are summarized as follows:

* We propose a new general probability theory - indeterminate probability theory - which is an extension of classical probability theory and makes classical probability theory a special case of our theory.
* We interpret the output neurons of a neural network as events of discrete random variables, and indeterminate probability is defined to describe the uncertainty of the probability event state.
* We propose a novel unified combination of (indeterminate) probability theory and deep neural network.
The neural network is used to extract attributes which are defined as discrete random variables, and the inference model for the classification task is derived. Besides, these attributes do not need to be labeled in advance.

The rest of this paper is organized as follows: In Section 2, we first introduce a coin toss game as an example of human cognition to explain the core idea of IPNN. In Section 3, the model architecture and the indeterminate probability are derived. Section 4 discusses the training strategy and related hyper-parameters. In Section 5, we evaluate IPNN and analyze the impact of its hyper-parameters. Finally, we put forward some future research ideas and conclude the paper in Section 6.

## 2 Background

Let's first introduce a small game - coin toss: a child and an adult observe the outcomes of each coin toss and record the results independently (heads or tails). The child cannot always record the results correctly, while the adult records them correctly; in addition, the records of the child are also observed by the adult. After several coin tosses, the question is: suppose the adult is not allowed to watch the next coin toss, what is the probability of the outcome he infers for the next coin toss via the child's record?

As shown in Figure 1, the random variable X represents the random experiment itself, and \(X=x_{k}\) denotes the \(k^{th}\) random experiment. Y and A are defined to represent the adult's record and the child's record, respectively, and \(hd,tl\) stand for heads and tails. For example, after 10 coin tosses, the records are shown in Table 1. We formulate X compactly with the ground truth, as shown in Table 2 and Table 3. Through the adult's record Y and the child's record A, we can calculate the conditional probability of Y given A, as shown in Table 4. We define this process as the observation phase. For the next coin toss (\(X=x_{11}\)), the question of this game is formulated as the calculation of the probability \(P^{A}(Y|X)\), where the superscript A indicates that Y is inferred via record A, not directly observed by the adult.

\begin{table} \begin{tabular}{c c c c} \hline \hline Experiment & Truth & A & Y \\ \(X=x_{1}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{2}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{3}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{4}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{5}\) & \(hd\) & \(\mathbf{A=tl}\) & \(Y=hd\) \\ \(X=x_{6}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{7}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{8}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{9}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{10}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{11}\) & \(hd\) & A=? & Y=? \\ \hline \hline \end{tabular} \end{table} Table 1: Example of 10 coin toss outcomes

\begin{table} \begin{tabular}{c c c} \hline \hline \(\frac{\#(Y,X)}{\#(X)}\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(X=hd\) & 5/5 & 0 \\ \(X=tl\) & 0 & 5/5 \\ \hline \(\frac{\#(A,X)}{\#(X)}\) & \(A=hd\) & \(A=tl\) \\ \hline \(X=hd\) & 4/5 & 1/5 \\ \(X=tl\) & 0 & 5/5 \\ \hline \hline \end{tabular} \end{table} Table 3: The adult’s and child’s records: \(P(Y|X)\) and \(P(A|X)\)

Figure 1: Example of coin toss game.

\begin{table} \begin{tabular}{c c c} \hline \hline \(\frac{\#(Y,A)}{\#(A)}\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(A=hd\) & 4/4 & 0 \\ \(A=tl\) & 1/6 & 5/6 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of observation phase: \(P(Y|A)\)

For example, given the next coin toss \(X=hd=x_{11}\), the
child's record then has two situations: \(P(A=hd|X=hd=x_{11})=4/5\) and \(P(A=tl|X=hd=x_{11})=1/5\). With the adult's observation of the child's records, we have \(P(Y=hd|A=hd)=4/4\) and \(P(Y=hd|A=tl)=1/6\). Therefore, given the next coin toss \(X=hd=x_{11}\), \(P^{A}(Y=hd|X=hd=x_{11})\) is the summation over these two situations: \(\frac{4}{5}\cdot\frac{4}{4}+\frac{1}{5}\cdot\frac{1}{6}\). Table 5 answers the above-mentioned question.

Let's go one step further: we can find that even if the child's record is written in an unknown language (e.g. \(A\in\{ZHENG,FAN\}\)), Table 4 and Table 5 can still be calculated by the adult. The same is true if the child's record is written from the perspective of attributes, such as color, shape, etc. Hence, if we substitute the child with a neural network and regard the adult's record as the sample labels, although the representation of the model outputs is unknown, the labels of input samples can still be inferred from these outputs. This is the core idea of IPNN.

## 3 IPNN

### Model Architecture

Let \(X\in\{x_{1},x_{2},\ldots,x_{n}\}\) be the training samples (\(X=x_{k}\) is understood as the \(k^{th}\) random experiment - selecting one training sample) and let \(Y\in\{y_{1},y_{2},\ldots,y_{m}\}\) consist of \(m\) discrete labels (or classes); \(P(y_{l}|x_{k})=y_{l}(k)\in\{0,1\}\) describes the label of sample \(x_{k}\). For prediction, we calculate the posterior of the label for a given new input sample \(x_{n+1}\), formulated as \(P^{\text{A}}\left(y_{l}\mid x_{n+1}\right)\), where the superscript A stands for the medium - the model outputs - via which we can infer label \(y_{l},\;\;l=1,2,\ldots,m\). After \(P^{\text{A}}\left(y_{l}\mid x_{n+1}\right)\) is calculated, the \(y_{l}\) with the maximum posterior is the predicted label.

Figure 2 shows the IPNN model architecture: the output neurons of a general neural network (FFN, CNN, ResNet (He et al., 2016), Transformer (Vaswani et al., 2017), pretrained models (Devlin et al., 2019), etc.) are split into N equal or unequal parts; the split shape is denoted as in Equation (1), and hence the number of output neurons is the summation of the split shape, see Equation (2). Next, each split part is passed to a 'softmax', so the output neurons can be defined as discrete random variables \(A^{j}\in\left\{a^{j}_{1},a^{j}_{2},\ldots,a^{j}_{M_{j}}\right\},j=1,2,\ldots,N\), and each neuron in \(A^{j}\) is regarded as an event. After that, all the random variables together form the N-dimensional joint sample space, denoted \(\mathbb{A}=(A^{1},A^{2},\ldots,A^{N})\), and all the joint sample points are fully connected with all labels \(Y\in\{y_{1},y_{2},\ldots,y_{m}\}\) via the conditional probability \(P\left(Y=y_{l}|A^{1}=a^{1}_{i_{1}},A^{2}=a^{2}_{i_{2}},\ldots,A^{N}=a^{N}_{i_{N}}\right)\), or more compactly written as \(P\left(y_{l}|a^{1}_{i_{1}},a^{2}_{i_{2}},\ldots,a^{N}_{i_{N}}\right)\).1 2

Footnote 1: All probabilities are formulated compactly in this paper.

Footnote 2: For reading symbols, see Appendix E.

\[\text{Split shape}:=\{M_{1},M_{2},\ldots,M_{N}\} \tag{1}\]

\[\text{Number of model output neurons}:=\sum_{j=1}^{N}M_{j} \tag{2}\]

\[\text{Number of joint sample points}:=\prod_{j=1}^{N}M_{j} \tag{3}\]

### Indeterminate Probability Theory

In classical probability theory, given a sample \(x_{k}\) (i.e., performing an experiment), an event or joint event has only two states: happened or not happened.
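Before formalizing the indeterminate case, the classical bookkeeping of Section 2 can be checked numerically. The short script below reproduces Table 4 and the first row of Table 5 from the records in Table 1; the 0/1 encoding of \(hd/tl\) is our own convention, not from the paper.

```python
import numpy as np

# Table 1 records for the 10 tosses, encoded as hd=0, tl=1.
# Toss 5 is the child's single mistake (truth hd, child wrote tl).
A = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])  # child's record
Y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # adult's record (ground truth)

# Observation phase: P(Y|A), reproducing Table 4 by counting.
P_Y_given_A = np.array([[np.mean(Y[A == a] == y) for y in (0, 1)]
                        for a in (0, 1)])  # rows: A=hd, A=tl

# Inference phase for X = hd = x_11 (first row of Table 5): the child
# reports hd with prob 4/5 and tl with prob 1/5, so marginalize over A:
# P^A(Y | X=hd) = sum_A P(A | X=hd) * P(Y | A).
P_A_given_X = np.array([4 / 5, 1 / 5])
print(P_A_given_X @ P_Y_given_A)  # [0.8333, 0.1667], matching Table 5
```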
However, for IPNN, the model only outputs the probability of an event, and its state is indeterminate; that is why this paper is called IPNN. This difference makes the calculation of probability (especially joint probability) also different. Equation (4) and Equation (5) will later formulate this difference.

\begin{table} \begin{tabular}{c c c} \hline \hline \(\sum_{A}\left(\frac{\#(A,X)}{\#X}\cdot\frac{\#(Y,A)}{\#A}\right)\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(X=hd=x_{11}\) & \(\frac{4}{5}\cdot\frac{4}{4}+\frac{1}{5}\cdot\frac{1}{6}\) & \(\frac{4}{5}\cdot 0+\frac{1}{5}\cdot\frac{5}{6}\) \\ \(X=tl=x_{11}\) & \(0\cdot\frac{4}{4}+\frac{5}{5}\cdot\frac{1}{6}\) & \(0\cdot 0+\frac{5}{5}\cdot\frac{5}{6}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Results of inference phase: \(P^{A}(Y|X)\)

Figure 2: IPNN – model architecture. As the number of random variables increases, the number of joint sample points grows exponentially (see Equation (3)), but \(P\left(y_{l}|a^{1}_{i_{1}},a^{2}_{i_{2}},\ldots,a^{N}_{i_{N}}\right)\) is calculated statistically; it is not a model weight.

Given an input sample \(x_{k}\), using Assumption 3.1 the model outputs can be formulated as:

\[P\left(a_{i_{j}}^{j}\mid x_{k}\right)=\alpha_{i_{j}}^{j}(k) \tag{4}\]

**Assumption 3.1**.: Given an input sample \(X=x_{k}\), **IF** \(\sum_{i_{j}=1}^{M_{j}}\alpha_{i_{j}}^{j}(k)=1\) and \(\alpha_{i_{j}}^{j}(k)\in[0,1],k=1,2,\ldots,n\), **THEN** \(\left\{a_{1}^{j},a_{2}^{j},\ldots,a_{M_{j}}^{j}\right\}\) can be regarded as a collectively exhaustive and mutually exclusive event set; they partition the sample space of random variable \(A^{j},j=1,2,\ldots,N\).

In the classical probability situation, \(\alpha_{i_{j}}^{j}(k)\in\{0,1\}\), which indicates that the state of the event is 0 or 1. For a joint event, given \(x_{k}\), using Assumption 3.2 and Equation (4), the joint probability is formulated as:

\[P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)=\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k) \tag{5}\]

**Assumption 3.2**.: Given an input sample \(X=x_{k}\), \(A^{1},A^{2},\ldots,A^{N}\) are mutually independent.

It can easily be proved that

\[\sum_{\mathbb{A}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)=1,\quad k=1,2,\ldots,n. \tag{6}\]

In the classical probability situation, \(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\in\{0,1\}\), which indicates that the state of the joint event is 0 or 1. Equation (4) and Equation (5) describe the uncertainty of the state of the event \(\left(A^{j}=a_{i_{j}}^{j}\right)\) and the joint event \(\left(A^{1}=a_{i_{1}}^{1},A^{2}=a_{i_{2}}^{2},\ldots,A^{N}=a_{i_{N}}^{N}\right)\).

### Observation Phase

In the observation phase, the relationship between all random variables \(A^{1},A^{2},\ldots,A^{N}\) and \(Y\) is established after all observations; it is formulated as:

\[P\left(y_{l}\mid a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\frac{P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)}{P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)} \tag{7}\]

Because the state of a joint event is indeterminate in IPNN, we cannot count its occurrences as in classical probability.
Hence, the joint probability is calculated according to the total probability theorem over all samples \(X=(x_{1},x_{2},\ldots,x_{n})\), and with Equation (5) we have:

\[P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\sum_{k=1}^{n}\left(P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)\cdot P(x_{k})\right)=\sum_{k=1}^{n}\left(\prod_{j=1}^{N}P\left(a_{i_{j}}^{j}\mid x_{k}\right)\cdot P(x_{k})\right)=\frac{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{n} \tag{8}\]

Because \(Y=y_{l}\) is the sample label and \(A^{j}=a_{i_{j}}^{j}\) comes from the model, \(A^{j}\) and Y come from different observers, so we can make Assumption 3.3 (see Figure 3).

**Assumption 3.3**.: Given an input sample \(X=x_{k}\), \(A^{j}\) and Y are mutually independent in the observation phase, \(j=1,2,\ldots,N\).

Therefore, according to the total probability theorem, Equation (5) and the above assumption, we derive:

\[P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\sum_{k=1}^{n}\left(P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)\cdot P(x_{k})\right)=\sum_{k=1}^{n}\left(P\left(y_{l}\mid x_{k}\right)\cdot\prod_{j=1}^{N}P\left(a_{i_{j}}^{j}\mid x_{k}\right)\cdot P(x_{k})\right)=\frac{\sum_{k=1}^{n}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{n} \tag{9}\]

Substituting Equation (8) and Equation (9) into Equation (7), we have:

\[P\left(y_{l}|a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\frac{\sum_{k=1}^{n}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)} \tag{10}\]

where it can be proved that

\[\sum_{l=1}^{m}P\left(y_{l}\mid a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=1 \tag{11}\]

Figure 3: Independence illustration of the observation phase with a Bayesian network

### Inference Phase

Given \(A^{j}\), label \(y_{l}\) can be inferred with Equation (10) (past experience); this inferred \(y_{l}\) does not point to any specific sample \(x_{k}\), including the new input sample \(x_{n+1}\), see Figure 4. So we can make the following assumption:

**Assumption 3.4**.: Given \(A^{j}\), \(X\) and \(Y\) are mutually independent in the inference phase, \(j=1,2,\ldots,N\).

Therefore, given a new input sample \(X=x_{n+1}\), according to the total probability theorem over the joint sample space \(\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)\in\mathbb{A}\), with Assumption 3.4, Equation (5) and Equation (10), we have:

\[P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right)=\sum_{\mathbb{A}}\left(P\left(y_{l},a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\mid x_{n+1}\right)\right)=\sum_{\mathbb{A}}\left(P\left(y_{l}\mid a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\right)P\left(a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\mid x_{n+1}\right)\right)=\sum_{\mathbb{A}}\left(\frac{\sum_{k=1}^{n}\left(y_{l}(k)\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(n+1)\right) \tag{12}\]

And the maximum posterior gives the predicted label of an input sample:

\[\hat{y}:=\operatorname*{arg\,max}_{l\in\{1,2,\ldots,m\}}P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right) \tag{13}\]

### Discussion

Our proposed theory is derived from our three proposed conditional mutual independence assumptions; see Assumption 3.2, Assumption 3.3 and Assumption 3.4.
However, in our opinion, these assumptions can neither be proved nor falsified, and we have found no exceptions so far. Since this theory cannot be mathematically proved, we can only validate it through experiments. Finally, our proposed indeterminate probability theory is an extension of classical probability theory, and classical probability theory is a special case of our theory. For more details on our theory, see Appendix A.

## 4 Training

### Training Strategy

Given an input sample \(x_{t}\) from a mini-batch, with a minor modification of Equation (12):

\[P^{\mathbb{A}}\left(y_{l}\mid x_{t}\right)=\sum_{\mathbb{A}}\left(\frac{\sum_{k=b\cdot t_{0}+1}^{b\cdot t_{1}}\left(y_{l}(k)\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=b\cdot t_{0}+1}^{b\cdot t_{1}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)\right)\approx\sum_{\mathbb{A}}\left(\frac{\max(H+h(t_{1}),\epsilon)}{\max(G+g(t_{1}),\epsilon)}\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)\right) \tag{14}\]

where \(b\) is the batch size, \(t_{0}=\max(0,t_{1}-T)\), and \(t_{1}=\left\lceil\frac{t}{b}\right\rceil,t=1,2,\ldots,n\). The hyper-parameter T is used for forgetting; i.e., \(H\) and \(G\) are calculated from the recent T batches. The hyper-parameter T is introduced because at the beginning of the training phase the result calculated with Equation (10) is not yet good. The \(\epsilon\) in the denominator avoids division by zero, and the \(\epsilon\) in the numerator gives an initial value of 1. Besides,

\[h(t_{1})=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right) \tag{15}\]

\[g(t_{1})=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right) \tag{16}\]

\[H=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}h(k),\text{ for }t_{1}=2,3,\ldots \tag{17}\]

\[G=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}g(k),\text{ for }t_{1}=2,3,\ldots \tag{18}\]

Here \(H\) and \(G\) do not require gradient updates during back-propagation. We use cross entropy as the loss function:

\[\mathcal{L}=-\sum_{l=1}^{m}\left(y_{l}(t)\cdot\log P^{\mathbb{A}}\left(y_{l}\mid x_{t}\right)\right) \tag{19}\]

The detailed algorithm implementation is shown in Algorithm 1. With Equation (14) we can see that \(P^{\mathbb{A}}\left(y_{l}\mid x_{1}\right)=1\) for the first input sample if \(y_{l}\) is the ground truth and the batch size is 1. Therefore, for IPNN the loss may increase at the beginning and fall back again during training.

Figure 4: Independence illustration of the inference phase with a Bayesian network

### Multi-degree Classification (Optional)

In IPNN, the model outputs N different random variables \(A^{1},A^{2},\ldots,A^{N}\); if we use part of them to form sub-joint sample spaces, we are able to perform sub-classification tasks. The sub-joint spaces are defined as \(\Lambda^{1}\subset\mathbb{A},\Lambda^{2}\subset\mathbb{A},\ldots\) The number of sub-joint sample spaces is:

\[\sum_{j=1}^{N}{N\choose j}=\sum_{j=1}^{N}\left(\frac{N!}{j!(N-j)!}\right) \tag{20}\]

If the input samples are additionally labeled for part of the sub-joint sample spaces3, defined as \(Y^{\tau}\in\{y_{1}^{\tau},y_{2}^{\tau},\ldots,y_{m^{\tau}}^{\tau}\}\), the sub-classification tasks can be represented as \(\left\langle X,\Lambda^{1},Y^{1}\right\rangle,\left\langle X,\Lambda^{2},Y^{2}\right\rangle,\ldots\)

Footnote 3: This refers to the labeling of input samples, not sub-joint sample points.

With Equation (19) we have,
\[\mathcal{L}^{\tau}=-\sum_{l=1}^{m^{\tau}}\left(y_{l}^{\tau}(t)\cdot\log P^{\Lambda^{\tau}}\left(y_{l}^{\tau}\mid x_{t}\right)\right),\tau=1,2,\ldots \tag{21}\]

Together with the main loss, the overall loss is \(\mathcal{L}+\mathcal{L}^{1}+\mathcal{L}^{2}+\dots\) In this way, we can perform the multi-degree classification task. The additional labels can guide the convergence of the joint sample spaces and speed up the training process, as discussed later in Section 5.2.

### Multi-degree Unsupervised Clustering

If there are no additional labels for the sub-joint sample spaces, the model actually performs unsupervised clustering while training. Every sub-joint sample space describes one kind of clustering result, and in total we have the number of clustering situations given by Equation (20).

### Designation of Joint Sample Space

As proved in Appendix B, we have the following proposition:

**Proposition 4.1**.: _IPNN converges to the global minimum only when \(P\left(y_{l}|a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=1,\) for \(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)>0,i_{j}=1,2,\ldots,M_{j}\). In other words, each joint sample point corresponds to a unique category. However, a category can correspond to one or more joint sample points._

**Corollary 4.2**.: _A necessary condition for achieving the global minimum is that the split shape defined in Equation (1) satisfies \(\prod_{j=1}^{N}M_{j}\geq m\), where \(m\) is the number of classes. That is, for a classification task, the number of joint sample points must be no smaller than the number of classes._

Besides, the unsupervised clustering (Section 4.3) depends on the input sample distribution, so the split shape should not conflict with the multi-degree clustering. For example, if the main attribute of a dataset shows three different colors and the split shape is \(\{M_{1}=2,M_{2}=2,\dots\}\), this will hinder the unsupervised clustering; in such a case, the shape of one random variable is better set to 3. As also analyzed in Appendix C, there are two local-minimum situations, and an improper split shape will drive IPNN into a local minimum. In addition, the latter part of Proposition 4.1 also implies that IPNN may be capable of further unsupervised classification tasks; this is beyond the scope of this discussion.

## 5 Experiments and Results

To evaluate the effectiveness of the proposed approach, we conducted experiments on MNIST (Deng, 2012) and a self-designed toy dataset.

### Unsupervised Clustering

As discussed in Section 4.3, IPNN is able to perform unsupervised clustering; we evaluate this on MNIST. The split shape is set to \(\{M_{1}=2,M_{2}=10\}\): we have two random variables, and the first random variable is used to divide the MNIST labels \(0,1,\ldots,9\) into two clusters. The clustering results are shown in Figure 5. We find that only when \(\epsilon\) in Equation (14) is set to a relatively high value does IPNN prefer to put the digits 1, 4, 7, 9 into one cluster and the rest into another cluster; otherwise, the clustering result differs for each training round. The reason is unknown; our intuition is that a high \(\epsilon\) makes it harder for each category to capture a free joint sample point, so categories with similar attributes are more likely to capture one together.
\[\frac{1}{round}\cdot\sum_{i=1}^{round}\big(\text{number of samples with label }l\text{ in one cluster at the }i^{th}\text{ round}\big) \tag{22}\]

### Avoiding Local Minimum with Multi-degree Classification

We designed another experiment to check the performance of multi-degree classification (see Section 4.2): classifying a binary vector into its decimal value. The binary vectors are the model inputs from '000000000000' to '111111111111', labeled from 0 to 4095. The split shape is set to \(\{M_{1}=2,M_{2}=2,\ldots,M_{12}=2\}\), which is exactly enough for a full classification. Besides, the model weights are initialized from a uniform distribution on \([-0.3,0.3]\), as discussed in Appendix C. The result is shown in Figure 6: IPNN without multi-degree classification goes to a local minimum with only \(69.5\%\) train accuracy. With additional labels for only 12 sub-joint spaces, IPNN reaches the global minimum with \(100\%\) train accuracy. Therefore, with only \(\sum_{1}^{12}2=24\) output nodes, IPNN can classify 4096 categories. Theoretically, if a model with 100 output nodes is split into 10 equal parts, it can classify 10 billion categories. Hence, compared with a classification model with only one 'softmax' function, IPNN avoids the computational-cost problem (see Section 1).

### Hyper-parameter Analysis

IPNN has two important hyper-parameters: the split shape and the forget number T. In this section, we analyze them on MNIST; the batch size is set to 64 and \(\epsilon=10^{-6}\). As shown in Figure 7, if the number of joint sample points (see Equation (3)) is smaller than 10, IPNN is not able to make a full classification and its test accuracy is proportional to the number of joint sample points; as the number of joint sample points increases beyond 10, IPNN reaches the global minimum in all 3 cases, and this result is consistent with our analysis. However, there are exceptions: the accuracy for the split shapes \(\{M_{1}=2,M_{2}=5\}\) and \(\{M_{1}=2,M_{2}=6\}\) is not high. From Figure 5 we know that for the first random variable, IPNN sometimes tends to put the digits 1, 4, 7, 9 into one cluster and the rest into another cluster, so this clustering result requires the split shape to be at minimum \(\{M_{1}=2,M_{2}\geq 6\}\) in order to have enough free joint sample points. That is why the accuracy for the split shape \(\{M_{1}=2,M_{2}=5\}\) is not high. (In the \(\{M_{1}=2,M_{2}=6\}\) case, only three digits are in one cluster.) Another test in Figure 8 shows that IPNN goes to a local minimum as the forget number T increases and cannot reach the global minimum without further actions; hence, a relatively small forget number T should be found by trial and error.

## 6 Conclusion

For a classification task, we proposed an approach to extract the attributes of input samples as random variables, and these variables are used to form a large joint sample space. After IPNN converges to the global minimum, each joint sample point will correspond to a unique category, as discussed in Proposition 4.1.

Figure 5: Unsupervised clustering results on MNIST: \(\epsilon=2\), batch size \(b=64\), forget number \(T=5\), 5 epochs per round. The test was repeated for 876 rounds with the same configuration (different random seeds) in order to check the stability of the clustering performance; each round's clustering result is aligned using the Jaccard similarity (Raff and Nicholas, 2017), and the percentage is calculated with Equation (22).

As the joint sample space increases
exponentially, the classification capability of IPNN will increase accordingly.

Figure 6: Loss of the multi-degree classification of ‘binary to decimal’ on the train dataset. Input samples are additionally labeled with \(Y^{i}\in\{0,1\}\) according to whether the \(i^{th}\) bit is 0 or 1. \(Y^{i}\) corresponds to the sub-joint sample space \(\Lambda^{i}\) with split shape \(\{M_{i}=2\},i=1,2,\ldots,12\). The batch size is 4096, the forget number \(T=5\), and \(\epsilon=10^{-6}\).

We can then use the advantages of classical probability theory: for example, for a very large joint sample space, we can use the Bayesian network approach or mutual independence among variables (see Appendix D) to simplify the model and improve the inference efficiency; in this way, a more complex Bayesian network could be built for more complex reasoning tasks.

## Acknowledgment

Thanks to Mr. Su Jianlin for his good introduction to the VAE model4, which motivated the implementation of this idea.

Footnote 4: Website: [https://kexue.fm/archives/5253](https://kexue.fm/archives/5253)
2310.16375
DyExplainer: Explainable Dynamic Graph Neural Networks
Graph Neural Networks (GNNs) resurge as a trending research subject owing to their impressive ability to capture representations from graph-structured data. However, the black-box nature of GNNs presents a significant challenge in terms of comprehending and trusting these models, thereby limiting their practical applications in mission-critical scenarios. Although there has been substantial progress in the field of explaining GNNs in recent years, the majority of these studies are centered on static graphs, leaving the explanation of dynamic GNNs largely unexplored. Dynamic GNNs, with their ever-evolving graph structures, pose a unique challenge and require additional efforts to effectively capture temporal dependencies and structural relationships. To address this challenge, we present DyExplainer, a novel approach to explaining dynamic GNNs on the fly. DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot, while simultaneously exploring structural relationships and temporal dependencies through a sparse attention technique. To preserve the desired properties of the explanation, such as structural consistency and temporal continuity, we augment our approach with contrastive learning techniques to provide priori-guided regularization. To model longer-term temporal dependencies, we develop a buffer-based live-updating scheme for training. The results of our extensive experiments on various datasets demonstrate the superiority of DyExplainer, not only providing faithful explainability of the model predictions but also significantly improving the model prediction accuracy, as evidenced in the link prediction task.
Tianchun Wang, Dongsheng Luo, Wei Cheng, Haifeng Chen, Xiang Zhang
2023-10-25T05:26:33Z
http://arxiv.org/abs/2310.16375v1
# DyExplainer: Explainable Dynamic Graph Neural Networks

###### Abstract.

Graph Neural Networks (GNNs) resurge as a trending research subject owing to their impressive ability to capture representations from graph-structured data. However, the black-box nature of GNNs presents a significant challenge in terms of comprehending and trusting these models, thereby limiting their practical applications in mission-critical scenarios. Although there has been substantial progress in the field of explaining GNNs in recent years, the majority of these studies are centered on static graphs, leaving the explanation of dynamic GNNs largely unexplored. Dynamic GNNs, with their ever-evolving graph structures, pose a unique challenge and require additional efforts to effectively capture temporal dependencies and structural relationships. To address this challenge, we present DyExplainer, a novel approach to explaining dynamic GNNs on the fly. DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot, while simultaneously exploring structural relationships and temporal dependencies through a sparse attention technique. To preserve the desired properties of the explanation, such as structural consistency and temporal continuity, we augment our approach with contrastive learning techniques to provide priori-guided regularization. To model longer-term temporal dependencies, we develop a buffer-based live-updating scheme for training. The results of our extensive experiments on various datasets demonstrate the superiority of DyExplainer, not only providing faithful explainability of the model predictions but also significantly improving the model prediction accuracy, as evidenced in the link prediction task.

## 1. Introduction

The advent of Graph Neural Networks (GNNs) has caused a veritable revolution in the field, and has been embraced with great enthusiasm due to their demonstrated efficacy in a variety of applications, ranging from node classification (K at each snapshot to inform the node representations, while the latter accounts for the temporal dependencies between representations generated by long-term snapshots. Despite the proliferation of dynamic GNNs [32, 46], most of these approaches deduce the representation at each snapshot \(t\) solely based on the embedding at the previous snapshot \((t-1)\), thereby making it challenging to fully comprehend the intricate, long-term temporal dependencies between snapshots in real-world applications.
For instance, social network connections between individuals, groups, and communities may be subject to change over time, influenced by a multitude of factors, such as geographical distance, personal life stage, and evolving interests. To encompass longer-term temporal dependencies, we have devised a buffer-based, live-updating scheme with temporal attention. Specifically, the temporal attention aggregates the node embeddings from the preceding \(B\) snapshots stored in a buffer. In this regard, prior works on dynamic GNNs are reduced to a special case of DyExplainer, where \(B=1\). The proposed framework constitutes a generalization that encompasses all recent graph learning methods for dynamic graphs. For example, by relinquishing explainability and incorporating the Markov chain property for temporal evolution, our framework degenerates to ROLAND [46]. Additionally, by restricting temporal aggregations to only the final layer, our framework degenerates to a common approach in which a sequence model, such as GRU [2], is placed on top of GNNs [30, 40, 47].

The dual sparse attention components of DyExplainer serve a trifold function. Firstly, they impart incisive explanations for the model's predictions in downstream tasks. Secondly, they serve as a sparse regularizer, enhancing the learning process. As research in the field of Rashomon set theory [33, 43] demonstrates, sparse and interpretable models possess a higher degree of robustness and better generalization performance, as outlined in Sec. 4.3. Lastly, the attention components allow for the augmentation of the approach with priori-guided regularization, preserving the desirable properties of the explanation, such as structural consistency and temporal continuity. Structural consistency pertains to the consistency of node embeddings between connected nodes in a single snapshot, while temporal continuity enforces smoothness constraints on the temporal attention between closely spaced snapshots, guided by pre-established human priors. To achieve more lucid explanations, we employ contrastive learning techniques, treating connected pairs as positive examples and unconnected pairs as negative examples for consistency regularization, and recent snapshots as positive examples and distant historical snapshots as negative examples for continuity regularization. Overall, our contributions are summarized as follows:

* We put forward the problem of explainable dynamic GNNs and propose a general DyExplainer to tackle it. As far as we know, it is the first work to solve this problem.
* DyExplainer seamlessly integrates the modeling of both structural relationships and long-term dependencies via sparse attentions. As a result, it is capable of providing real-time explanations for both the structural and temporal factors that influence the model's predictions. This innovative explaining module has been designed to be encoder-agnostic, thereby affording flexibility to a range of dynamic GNN backbones. Its implementation requires minimal overhead, as it only entails adding a lightweight parameterization to the encoder for its explanation modules. This makes DyExplainer highly efficient even for large backbone networks.
* We propose two contrastive regularizations to provide consistency and continuity explanations. Our approach to augmenting desired properties in the explanation is a fresh contribution to the field and may be of independent interest.
* Extensive experiments on various datasets demonstrate the superiority of DyExplainer, not only providing faithful explainability of the model predictions but also significantly improving the model prediction accuracy, as evidenced in the link prediction task.

Figure 1. DyExplainer for dynamic graphs. \(H^{(t)}\) is the node representation for snapshot \(G^{(t)}\) after the backbone module. \(H^{\prime(t)}\) is the updated node representation after the explainable module. \(\widehat{A}^{(t)}\) is the structural attention for snapshot \(G^{(t)}\).

## 2. Problem Definition

We formulate a dynamic graph as a series of \(T\) attributed graphs \(\mathcal{G}=(G^{(1)},...,G^{(T)})\), where each graph \(G^{(t)}=(\mathcal{V},\mathcal{E}^{(t)},X^{(t)})\) is a snapshot at time step \(t\). \(\mathcal{V}=\{v_{1},v_{2},...,v_{N}\}\) is the set of \(N\) nodes shared by all snapshots and \(\mathcal{E}^{(t)}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges of graph \(G^{(t)}\). \(X^{(t)}\in\mathbb{R}^{N\times D}\) is the node feature matrix at snapshot \(t\), where \(D\) is the number of dimensions of the node features. The topology and node features of each snapshot change dynamically over time. We aim to learn an explainable dynamic GNN model \(f\) from the long-term snapshots, defined as:

\[\widehat{\mathbf{A}}^{(t-B+1)},...,\widehat{\mathbf{A}}^{(t)},\mathbf{H}^{\prime(t+1)}\gets f\left(G^{(t-B+1)},...,G^{(t)}\right), \tag{1}\]

where \(\widehat{\mathbf{A}}^{(t-B+1)},...,\widehat{\mathbf{A}}^{(t)}\) are the explanations we are interested in and \(\mathbf{H}^{\prime(t+1)}\) is the future node representation predicted for the downstream tasks. In our study, we examine its application to link prediction. It is worth noting that our framework can also easily support other downstream tasks.

## 3. Method

### Encoder Backbone

There are different approaches to designing dynamic GNNs. As a plug-and-play framework, the proposed DyExplainer is flexible in the choice of the dynamic GNN used as the backbone. In this work, we adopt a state-of-the-art method, ROLAND (Roland, 2017), as the backbone due to its powerful expressiveness and impressive empirical performance on real-life datasets. Specifically, it hierarchically updates the node embedding to obtain \(\mathbf{H}^{(t)}\) at snapshot \(t\). With ROLAND as the backbone, the architecture of the encoder is shown in Figure 2. We put the node embedding inferred by the backbone into a buffer of size \(B\) for the learning of our explainer module. Formally, we denote the node embeddings in the buffer as \(\{\mathbf{H}^{(t-B+1)},...,\mathbf{H}^{(t)}\}\).

### Explainable Aggregations

The node embedding at snapshot \(t\) is denoted by \(\mathbf{H}^{(t)}=\{\mathbf{h}_{1},...,\mathbf{h}_{N}\}\), \(\mathbf{h}_{i}\in\mathbb{R}^{F}\), where \(N\) is the number of nodes and \(F\) is the feature dimension of the node embedding.

#### 3.2.1. Structural Aggregation

On dynamic graphs, each snapshot has its own unique topology information. An effective explanation, therefore, should highlight the crucial structural components at a given snapshot that significantly contribute to the model's prediction. To this end, inspired by the idea in (Srivastava et al., 2017), we propose structural attention to aggregate the weighted neighborhoods at each time step.
Formally, we have

\[\omega_{ij}^{(s,t)}=\text{LeakyReLU}\Big(\mathbf{a}^{(s)T}\Big[\mathbf{W}^{(s)}\mathbf{h}_{i}^{(t)}||\mathbf{W}^{(s)}\mathbf{h}_{j}^{(t)}\Big]\Big), \tag{2}\]

\[\widehat{\mathbf{A}}_{ij}^{(t)}=\text{softmax}_{j}\Big(\omega_{ij}^{(s,t)}\Big)=\frac{\exp\Big(\omega_{ij}^{(s,t)}\Big)}{\sum_{k\in\mathcal{N}_{i}^{(t)}}\exp\Big(\omega_{ik}^{(s,t)}\Big)}, \tag{3}\]

where \(\widehat{\mathbf{A}}^{(t)}\) is the structural attention at time step \(t\), \(\mathbf{W}^{(s)}\) is a weight matrix in a shared linear transformation, \(\mathbf{a}^{(s)}\) is the weight vector in the single-layer network, \(\mathcal{N}_{i}^{(t)}\) is the set of neighbors of node \(i\) at time step \(t\), and \(||\) indicates the concatenation operation.

However, the utilization of weights in traditional attention models may pose a challenge in complex dynamic graph environments, particularly with regard to explanation. Explanations in these settings are often derived by imposing a threshold and disregarding insignificant attention weights. This approach, however, fails to account for the cumulative impact of numerous small but non-zero weights, which can be substantial. Moreover, the non-exclusivity of attention weights raises questions about their accuracy in reflecting the true underlying importance (Srivastava et al., 2017). To address this issue and equip the structural attention with better explainability, we design a hard attention mechanism to alleviate the effects of small attention coefficients. The basic idea uses prior work on differentiable sampling (Golovolov et al., 2015), which states that the random variable

\[\widehat{e}=\sigma\Big(\big(\log\epsilon-\log(1-\epsilon)+\omega\big)/\tau\Big)\quad\text{where}\quad\epsilon\sim\text{Uniform}(0,1), \tag{4}\]

where \(\sigma(\cdot)\) is the sigmoid function and \(\tau\) is the temperature controlling the approximation, follows a distribution that converges to a Bernoulli distribution with success probability \(p=(1+e^{-\omega})^{-1}\) as \(\tau>0\) tends to zero. Hence, if we parameterize \(\omega\) and specify that the presence of an edge between a pair of nodes has probability \(p\), then using \(\widehat{e}\) computed from Equation 4 to fill the corresponding entry of \(\widehat{\mathbf{A}}\) will produce a matrix \(\widehat{\mathbf{A}}\) that is close to binary. We use this matrix as the hard attention in the hope of obtaining a better explanation due to the dropping of small attention weights. Moreover, because Equation 4 is differentiable with respect to \(\omega\), we can train the parameters of \(\omega\) as in usual gradient-based training.

In the structural attention, we have the parameterized \(\omega_{ij}^{(s,t)}\) from Equation 2, thus we get

\[\widehat{e}_{ij}^{(s,t)}=\sigma\Big(\big(\log\epsilon-\log(1-\epsilon)+\omega_{ij}^{(s,t)}\big)/\tau\Big)\quad\text{where}\quad\epsilon\sim\text{Uniform}(0,1),\]

which returns an approximate Bernoulli sample for the edge \((i,j)\). When \(\tau\) is not sufficiently close to zero, this sample may not be close enough to binary, and in particular, it is strictly nonzero. The rationale for such an approximation is that with temperature \(\tau>0\), the gradient \(\frac{\partial\widehat{e}_{ij}^{(s,t)}}{\partial\omega_{ij}^{(s,t)}}\) is well-defined. The output of the binary concrete distribution is in the range of (0,1).
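As a rough sketch of how the relaxation in Equation 4 is typically realized (assuming a PyTorch setting; the names and the default temperature are ours, not the paper's):

```python
import torch

def binary_concrete_sample(omega: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Differentiable approximate Bernoulli sample of Eq. (4).

    omega: edge logits, so the success probability is sigmoid(omega).
    Returns values in (0, 1) that approach {0, 1} as tau -> 0, while
    keeping a well-defined gradient w.r.t. omega for tau > 0.
    """
    eps = torch.rand_like(omega)                      # eps ~ Uniform(0, 1)
    logistic = torch.log(eps) - torch.log(1.0 - eps)  # logistic noise
    return torch.sigmoid((logistic + omega) / tau)
```

The stretching-and-clipping refinement described next simply rescales these samples and clips them back to the unit interval, so that small values become exactly zero.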
To further alleviate the effects of small values by encouraging them to be exactly zero, we propose a "stretching and clipping" technique in the hard attention mechanism. To explicitly zero out an edge, we follow (Koland, 2017) and introduce two parameters, \(\gamma<0\) and \(\xi>1\), to remove small values of \(\widehat{e}_{ij}^{(s,t)}\) given by

\[\widehat{\mathbf{A}}_{ij}^{(t)}=\min\Big(1,\max\big(e_{ij}^{(s,t)},0\big)\Big)\quad\text{where}\quad e_{ij}^{(s,t)}=\widehat{e}_{ij}^{(s,t)}(\xi-\gamma)+\gamma.\]

Figure 2. Backbone encoder architecture.

The structural attention \(\widehat{\mathbf{A}}^{(t)}\) does not insert new edges into the graph (i.e., when \((i,j)\notin\mathcal{E}^{(t)}\), \(\widehat{\mathbf{A}}_{ij}^{(t)}=0\)), but only removes (denoises) some edges \((i,j)\) originally in \(\mathcal{E}^{(t)}\). Then, we obtain the node embedding \(\widetilde{\mathbf{H}}^{(t)}\in\mathbb{R}^{N\times F}\) at time step \(t\) after the structural aggregation, with each row given by

\[\widetilde{\mathbf{h}}_{i}^{(t)}=\sigma\Big(\sum_{j\in\mathcal{N}_{i}^{(t)}}\widehat{\mathbf{A}}_{ij}^{(t)}\mathbf{W}^{(s)}\mathbf{h}_{j}^{(t)}\Big). \tag{5}\]

#### 3.2.2. Temporal Aggregation

In light of the dynamic and evolving nature of node features and inter-node relationships over time, temporal dependency holds paramount importance in the modeling of dynamic graphs. Existing methods either adopt RNN architectures, such as GRU (He et al., 2017) and LSTM (Hochreiter et al., 2015), or assume the Markov chain property (Srivastava et al., 2017) to capture the temporal dependencies in dynamic graphs. As shown in (Srivastava et al., 2017), despite their utility, these methods are insufficient in capturing long-range dependencies, thereby hindering their ability to generalize and model previously unseen graphs. To overcome this limitation, we propose a solution that leverages an attention-based temporal aggregation mechanism to adaptively integrate node embeddings from distant snapshots. This is achieved through the utilization of a buffer-dependent temporal mask, which serves as a temporal topology to guide the aggregation process.

In Equation 5, the structural attention provides the node embedding \(\widetilde{\mathbf{H}}^{(t)}\in\mathbb{R}^{N\times F}\) for each snapshot \(t\). Therefore, we have a set of node embeddings \(\{\widetilde{\mathbf{H}}^{(t-B+1)},...,\widetilde{\mathbf{H}}^{(t)}\}\). Concatenating them into a 3-dimensional tensor and taking the transpose, for each node \(i\) we have a buffer-dependent node embedding \(\widetilde{\mathbf{H}}^{(i)}\in\mathbb{R}^{B\times F}\), \(\widetilde{\mathbf{H}}^{(i)}=\{\widetilde{\mathbf{h}}_{t-B+1}^{(i)},...,\widetilde{\mathbf{h}}_{t}^{(i)}\}\), \(\widetilde{\mathbf{h}}_{t}^{(i)}\in\mathbb{R}^{F}\). We propose the temporal attention given by

\[\widetilde{\mathbf{A}}_{t_{k},t_{j}}^{(i)}=\text{softmax}_{t_{j}}\Big(\omega_{t_{k},t_{j}}^{(i)}\Big)=\frac{\exp\Big(\omega_{t_{k},t_{j}}^{(i)}\Big)}{\sum_{t_{p}\in\mathcal{M}_{k}}\exp\Big(\omega_{t_{k},t_{p}}^{(i)}\Big)}, \tag{6}\]

\[\omega_{t_{k},t_{j}}^{(i)}=\text{LeakyReLU}\Big(\mathbf{a}^{T}\Big[\mathbf{W}\widetilde{\mathbf{h}}_{t_{k}}^{(i)}||\mathbf{W}\widetilde{\mathbf{h}}_{t_{j}}^{(i)}\Big]\Big), \tag{7}\]

where \(\widetilde{\mathbf{A}}^{(i)}\) is the temporal attention for node \(i\), \(\mathbf{W}\) is a weight matrix for a linear transformation, \(\mathbf{a}\) is a weight vector for a single-layer network, and \(\mathcal{M}_{k}\) is the set of time steps that have element 1 in the temporal mask. The values in \(\widetilde{\mathbf{A}}^{(i)}\in\mathbb{R}^{B\times B}\) indicate the importance of the relations between the embeddings of node \(i\) at the past snapshots.
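A compact per-node sketch of Equations 6-8 follows (our naming; the activation \(\sigma\) is assumed to be a sigmoid, and masked-out entries receive \(-\infty\) before the softmax):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeTemporalAttention(nn.Module):
    """Illustrative temporal attention for one node over B buffered
    snapshots (Eqs. 6-8); a sketch, not the authors' implementation."""

    def __init__(self, f_in: int, f_out: int):
        super().__init__()
        self.W = nn.Linear(f_in, f_out, bias=False)    # shared transform W
        self.a = nn.Parameter(torch.randn(2 * f_out))  # attention vector a

    def forward(self, h: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # h: (B, F) embeddings of one node; mask: (B, B) binary temporal
        # mask (each row is assumed to attend to at least one step).
        wh = self.W(h)                                            # (B, F')
        b = wh.size(0)
        pair = torch.cat([wh.unsqueeze(1).expand(b, b, -1),
                          wh.unsqueeze(0).expand(b, b, -1)], dim=-1)
        omega = F.leaky_relu(pair @ self.a)                       # Eq. (7)
        omega = omega.masked_fill(mask == 0, float("-inf"))
        att = torch.softmax(omega, dim=-1)                        # Eq. (6)
        return torch.sigmoid(att @ wh)                            # Eq. (8)
```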
We compute Equations 6 and 7 in batches for acceleration, since the graphs usually have large numbers of nodes. Then, after the temporal attention, we obtain the node embedding \(\mathbf{H}^{{}^{\prime}(i)}\in\mathbb{R}^{B\times K}\) for each node \(i\) over all the B snapshots in the buffer, with each row given by

\[\mathbf{h}_{t_{k}}^{{}^{\prime}(i)}=\sigma\Big(\sum_{t_{j}\in\mathcal{M}_{k}}\widetilde{\mathbf{A}}_{t_{k},t_{j}}^{(i)}\mathbf{W}\widetilde{\mathbf{h}}_{t_{j}}^{(i)}\Big). \tag{8}\]

Therefore, the embedding of node \(i\) at time \(t\) is \(\mathbf{h}_{t}^{{}^{\prime}(i)}\in\mathbb{R}^{K}\), and the embeddings of all nodes form \(\mathbf{H}^{{}^{\prime}(t)}\in\mathbb{R}^{N\times K}\), \(\mathbf{H}^{{}^{\prime}(t)}=\{\mathbf{h}_{t}^{{}^{\prime}(1)},...,\mathbf{h}_{t}^{{}^{\prime}(N)}\}\).

### Regularizations

The framework of DyExplainer is flexible with respect to various regularization terms that preserve desired properties of the explanation. Inspired by graph contrastive learning, which makes node representations more discriminative in order to capture different types of node-level similarity, we propose a consistency regularization and a continuity regularization. We now discuss these regularization terms as well as their principles.

#### 3.3.1. Consistency Regularization

Inspired by the homophily nature of graph-structured data (Kipf and Welling, 2017), we propose a topology-wise regularization to encourage consistent explanations of connected nodes in a graph. Specifically, on the graph \(G^{(t)}=\{\mathcal{V},\mathcal{E}^{(t)}\}\), for a node \(i\in\mathcal{V}\), we sample a positive pair \((i,p)\) such that the edge \(e_{i,p}\in\mathcal{E}^{(t)}\). We sample unconnected pairs \((i,j)\) such that \(e_{i,j}\notin\mathcal{E}^{(t)}\) to form a set of negative samples \(\widetilde{\mathbf{N}}_{i}\); then we propose the consistency regularization for the structural attention \(\widehat{\mathbf{A}}^{(t)}\) as

\[\mathcal{L}_{cons}=-\log\frac{\exp\left(\text{sim}\Big(\widehat{\mathbf{A}}_{i}^{(t)},\widehat{\mathbf{A}}_{p}^{(t)}\Big)\right)}{\exp\left(\text{sim}\Big(\widehat{\mathbf{A}}_{i}^{(t)},\widehat{\mathbf{A}}_{p}^{(t)}\Big)\right)+\sum_{j\in\widetilde{\mathbf{N}}_{i}}\exp\left(\text{sim}\Big(\widehat{\mathbf{A}}_{i}^{(t)},\widehat{\mathbf{A}}_{j}^{(t)}\Big)\right)}. \tag{9}\]

Note that the computation of Equation 9 is very time-consuming for graphs with large numbers of nodes and edges. In practice, we select some anchor nodes to compute \(\mathcal{L}_{cons}\).

#### 3.3.2. Continuity Regularization

As suggested in (Kipf and Welling, 2017), preserving continuity ensures the robustness of explanations: small variations applied to the input, for which the model prediction is nearly unchanged, should not lead to large differences in the explanation. In addition, continuity benefits generalizability beyond a particular input instance. Based on this practical principle, in dynamic graphs we aim to maintain a consistent explanation across snapshots even as the graph structure evolves. Inspired by the idea in (Srivastava et al., 2017) that two close subsequences are considered a positive pair while ones with large distances are negatives, we propose a continuity regularization. For a snapshot \(G^{t}\), a positive pair \((G^{t},G^{p})\) is sampled from the sliding window in the temporal mask. The set of negative samples \(\tilde{\mathbf{N}}_{t}\) consists of historical snapshots that are not in the buffer.
The continuity regularization for each node \(i\) is then given by \[\mathcal{L}_{cont}=-\log\frac{\exp\left(\text{sim}\Big{(}\widetilde{\mathbf{A}}_{i}^{(t)},\widetilde{\mathbf{A}}_{i}^{(p)}\Big{)}\right)}{\exp\left(\text{sim}\Big{(}\widetilde{\mathbf{A}}_{i}^{(t)},\widetilde{\mathbf{A}}_{i}^{(p)}\Big{)}\right)+\sum_{k\in\tilde{\mathbf{N}}_{t}}\exp\left(\text{sim}\Big{(}\widetilde{\mathbf{A}}_{i}^{(t)},\widetilde{\mathbf{A}}_{i}^{(k)}\Big{)}\right)}, \tag{10}\] where \(\widetilde{\mathbf{A}}_{i}^{(t)}\in\mathbb{R}^{B\times B}\) is the temporal attention of node \(i\) on \(G^{t}\). To compute Equation 10 for all of the nodes instead of one by one, we form a block diagonal matrix with each diagonal block being the attention corresponding to one node, i.e., \(\widetilde{\mathbf{A}}_{\text{block}}^{(t)}=\text{diag}\Big{(}\widetilde{\mathbf{A}}_{0}^{(t)},...,\widetilde{\mathbf{A}}_{N-1}^{(t)}\Big{)}\).

### Buffer-based live-update

After we obtain the node embedding \(\mathbf{H}^{{}^{\prime}(t)}\in\mathbb{R}^{N\times K}\) for the current snapshot, DyExplainer uses an MLP to predict the probability of a future edge from node \(i\) to node \(j\). We compute a cross-entropy loss between the predictions and the edge labels at the future snapshot. Altogether, we have the objective function \[\mathcal{L}=(1-\alpha-\beta)\mathcal{L}_{ce}+\alpha\mathcal{L}_{cons}+\beta\mathcal{L}_{cont}. \tag{11}\] Inspired by ROLAND (Roland, 2018), we develop a buffer-based live-update algorithm to train the model. The key idea is to balance efficiency against the aggregation of historical embeddings. Specifically, we update the backbone in a live fashion and fine-tune the attention modules with the node embeddings from the buffer. Note that the backbone is updated based on the cross-entropy loss alone because, without the attention modules, the terms \(\mathcal{L}_{cons}\) and \(\mathcal{L}_{cont}\) are zero. We provide the details in Algorithm 1.

```
1:  Input: Dynamic graphs \(\mathcal{G}=(G^{(1)},...,G^{(T)})\), link prediction labels \(y_{1},...,y_{T}\), number of snapshots \(T\), maximum fine-tuning epoch \(E\), maximum buffer size \(B\), trade-off parameters \(\alpha\) and \(\beta\), the DyExplainer model.
2:  Initialize the node state \(H^{(0)}\) and an empty queue as a buffer of size 0;
3:  for \(t=1,...,T-1\) do
4:    Train and update the backbone module based on \(y_{t}\) with early stopping and get \(H^{(t)}\);
5:    if Buffer size \(<B\) then
6:      Insert \(\mathbf{H}^{(t)}\) into the buffer;
7:    else
8:      Delete \(\mathbf{H}^{(t-B)}\) and insert \(\mathbf{H}^{(t)}\);
9:    endif
10:   for \(e=1,...,E\) do
11:     Input the node embeddings stored in the buffer to the structural attention and get \(\widetilde{\mathbf{H}}^{(t)}\), as described in Section 3.2.1;
12:     Compute \(\mathcal{L}_{cons}\) in Equation 9;
13:     Input the structural node embeddings to the temporal attention and get \(\mathbf{H}^{{}^{\prime}(t)}\), as described in Section 3.2.2;
14:     Compute \(\mathcal{L}_{cont}\) in Equation 10;
15:     Compute \(\mathcal{L}\) in Equation 11;
16:     Train and update the attention modules based on \(y_{t}\).
17:   endfor
18: endfor
```
**Algorithm 1** Buffer-based live-update algorithm.

## 4. Experiments

To evaluate the performance of DyExplainer, we compare it against state-of-the-art baselines. Our findings demonstrate its effectiveness in model generalization for link prediction. Furthermore, we quantitatively validate the accuracy of its explanations. We also conduct ablation studies and case studies, offering a deeper understanding of the proposed method.
### Experimental Setup

**Datasets.** The experiments are performed on the following six widely used data sets. (1) AS-733 is an autonomous systems dataset of traffic flows among the routers comprising the Internet (Kang et al., 2017). (2) Reddit-Title and (3) Reddit-Body are networks of subreddit-to-subreddit hyperlinks extracted from posts. The posts contain hyperlinks that connect one subreddit to another, and the edge label shows whether the source post expresses negativity towards the target post (Kang et al., 2017). (4) UCI-Message is composed of private communications exchanged on an online social network system among students (Kang et al., 2017). The data sets (5) Bitcoin-OTC and (6) Bitcoin-Alpha consist of "who-trusts-whom" networks of individuals who engage in trading on these platforms (Kang et al., 2017; Kang et al., 2018).

**Baselines.** We compare DyExplainer with both link prediction methods and explainable methods. For link prediction, we use 7 state-of-the-art dynamic GNNs. The (1) EvolveGCN-H and (2) EvolveGCN-O models employ an RNN to dynamically adapt the weights of internal GNNs, enabling the GNN to change during testing (Kang et al., 2017). (3) T-GCN integrates a GNN into the GRU cell by replacing the linear transformations in GRU with graph convolution operators (Zhu et al., 2017). The (4) GCRN-GRU and (5) GCRN-LSTM methods are widely adopted baselines that are generalized to capture temporal information by incorporating either a GRU or an LSTM layer. GCRN uses a ChebNet (Chen et al., 2017) for spatial information and separate GNNs to compute different gates of RNNs. The (6) GCRN-Baseline first builds node features using a Chebyshev spectral graph convolution layer to capture spatial information, then feeds these features into an LSTM cell to extract temporal information (Zhu et al., 2017). The (7) ROLAND method views the node embeddings at different layers of the GNN as hierarchical node states, which it updates recurrently over time. It integrates advanced design features from static GNNs and enables live updating. Throughout our experiments, we use the GRU-based ROLAND, which is shown to perform better than the others (Roland, 2018). To evaluate the effectiveness of explainability, we compare DyExplainer with GNNExplainer (Zhu et al., 2017) and a gradient-based method (Grad). (1) GNNExplainer is a post-hoc state-of-the-art method providing explanations for every single instance. (2) Grad learns weights of edges by computing gradients of the model's objective function w.r.t. the adjacency matrix.

**Metrics.** For measuring link prediction performance, we use the standard mean reciprocal rank (MRR). For evaluating the faithfulness of the explanations, we mainly use the Fidelity score of probability (Zhu et al., 2017). We let \(\mathcal{G}^{t}=\{\mathcal{V}_{t},\mathcal{E}_{t}\}\) be the graph at snapshot \(t\), with \(\mathcal{V}_{t}\) a set of vertices and \(\mathcal{E}_{t}\) a set of edges. For snapshot \(t\), there is a set of edges \(\mathcal{E}^{\prime}_{t}\) for which predictions are needed. After training DyExplainer, we obtain an explanation mask \(\mathbf{M}_{t}\in\{0,1\}^{n\times n}\) for the snapshot \(\mathcal{G}^{t}\), with each element 0 or 1 indicating whether the corresponding edge is identified as important. According to \(\mathbf{M}_{t}\), we obtain the important edges in \(\mathcal{G}^{t}\) to create a new graph \(\hat{\mathcal{G}}^{t}\).
The Fidelity score is computed as \[Fidelity=\frac{1}{\left|\mathcal{E}^{\prime}_{t}\right|}\sum_{i=1}^{\left|\mathcal{E}^{\prime}_{t}\right|}\left|f(\mathcal{G}^{t})_{e^{\prime}_{i}}-f(\hat{\mathcal{G}}^{t})_{e^{\prime}_{i}}\right|, \tag{12}\] where \(\left|\mathcal{E}^{\prime}_{t}\right|\) is the number of edges to be predicted and \(f(\mathcal{G}^{t})_{e^{\prime}_{i}}\) denotes the predicted probability of edge \(e^{\prime}_{i}\in\mathcal{E}^{\prime}_{t}\). We follow (Zhu et al., 2017; Wang et al., 2019) and compute the Fidelity scores at different sparsity levels, given by \[Sparsity=1-\frac{\left|\mathbf{M}_{t}\right|}{\left|\mathcal{E}_{t}\right|}, \tag{13}\] where \(\left|\mathbf{M}_{t}\right|\) is the number of important edges identified in \(\mathbf{M}_{t}\) and \(\left|\mathcal{E}_{t}\right|\) is the number of edges in \(\mathcal{G}_{t}\).

**Implementation details.** To ensure a fair comparison, all methods were trained through live updating as outlined in (Roland, 2018). The evaluation is supposed to happen at each snapshot; however, models trained on streaming data with early stopping usually do not converge in the early epochs. Therefore, we report the average MRR of the most recent 60% of snapshots. The hyper-parameter space of the DyExplainer model is similar to that of the underlying static GNN layers. For all methods, the node state hidden dimensions are set to 128, with GNN layers featuring skip connections, sum aggregation, and batch normalization. The DyExplainer is trained for a maximum of 100 epochs at each time step before early stopping, and its explainable module is fine-tuned for several additional epochs. For the attention modules, the slope in LeakyReLU is set to 0.2. We follow the practice in (Chen et al., 2017) and adopt the exponential decay strategy, starting the training with a high temperature of 1.0 and annealing to a small value of 0.1. For each dataset, we follow (Wang et al., 2018) for the parameter settings of the backbone. We use grid search to tune the hyperparameters of DyExplainer. Specifically, we tune: (1) the hidden dimension for the structural and temporal attention modules (8, 16); (2) the learning rate for live-updating the backbone (0.001 to 0.02); (3) the learning rate for fine-tuning the explainable module (0.001 to 0.1); (4) the buffer size of the explainable module (3 to 20); (5) the trade-off parameters \(\alpha\) for the consistency regularization and \(\beta\) for the continuity regularization (0 to 1).

**Computing environment.** We implemented all models using PyTorch (Krizhevsky et al., 2015), PyTorch Geometric (Abadi et al., 2015), and Scikit-learn (Krizhevsky et al., 2015). All data sets used in the experiments are obtained from (Krizhevsky et al., 2015). We conduct the experiments on a server with four NVIDIA RTX A6000 GPUs (48GB memory each).

### Link Prediction Performance

Table 1 showcases the MRR results for all compared methods on all datasets, all trained through live updating. The results are obtained by averaging the MRR of the most recent 60% of snapshots on the test datasets. The buffer size for DyExplainer was set to 5 across all datasets, and the attention module was fine-tuned for 4 epochs. The ROLAND method outperforms the other baselines on most datasets. DyExplainer surpasses the best baseline on 4 datasets, showing a 7.89% improvement on Bitcoin-Alpha, and performs similarly to the best baseline on the remaining datasets.
These results are consistent with the Rashomon set theory (Rashomon et al., 2017; Rashomon et al., 2017) and highlight the benefit of seeking a simple and understandable model, which can result in improved robustness and better overall performance.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & AS-733 & Reddit-Title & Reddit-Body & UCI-Message & Bitcoin-OTC & Bitcoin-Alpha \\ \hline EvolveGCN-H & 0.263 \(\pm\) 0.098 & 0.156 \(\pm\) 0.121 & 0.072 \(\pm\) 0.010 & 0.055 \(\pm\) 0.011 & 0.081 \(\pm\) 0.025 & 0.054 \(\pm\) 0.019 \\ EvolveGCN-O & 0.180 \(\pm\) 0.104 & 0.015 \(\pm\) 0.019 & 0.093 \(\pm\) 0.022 & 0.028 \(\pm\) 0.005 & 0.018 \(\pm\) 0.008 & 0.005 \(\pm\) 0.006 \\ GCRN-GRU & **0.337 \(\pm\) 0.001** & 0.328 \(\pm\) 0.005 & 0.204 \(\pm\) 0.005 & 0.095 \(\pm\) 0.013 & 0.163 \(\pm\) 0.005 & 0.143 \(\pm\) 0.004 \\ GCRN-LSTM & 0.335 \(\pm\) 0.001 & 0.343 \(\pm\) 0.006 & 0.209 \(\pm\) 0.003 & **0.107 \(\pm\) 0.004** & 0.172 \(\pm\) 0.013 & 0.146 \(\pm\) 0.008 \\ GCRN-Baseline & 0.321 \(\pm\) 0.002 & 0.342 \(\pm\) 0.004 & 0.202 \(\pm\) 0.002 & 0.090 \(\pm\) 0.011 & 0.176 \(\pm\) 0.005 & **0.152 \(\pm\) 0.005** \\ TGCN & 0.335 \(\pm\) 0.001 & 0.382 \(\pm\) 0.005 & 0.234 \(\pm\) 0.004 & 0.080 \(\pm\) 0.015 & 0.080 \(\pm\) 0.006 & 0.060 \(\pm\) 0.014 \\ ROLAND & 0.330 \(\pm\) 0.004 & **0.384 \(\pm\) 0.013** & **0.342 \(\pm\) 0.008** & 0.090 \(\pm\) 0.010 & **0.189 \(\pm\) 0.008** & 0.147 \(\pm\) 0.006 \\ \hline DyExplainer & **0.341 \(\pm\) 0.000** & **0.383 \(\pm\) 0.002** & **0.335 \(\pm\) 0.010** & **0.109 \(\pm\) 0.004** & **0.194 \(\pm\) 0.002** & **0.164 \(\pm\) 0.002** \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison of MRR for DyExplainer and the baselines. Standard deviations are obtained by repeating each model training 5 times. For each data set, the two best cases are boldface.

### Ablation Study

To provide deeper insights into the proposed method, we conducted multiple ablation studies on the Bitcoin-Alpha dataset to empirically verify the effectiveness of the proposed explainable aggregations and the contrastive regularizations. Specifically, we compare DyExplainer with the following variants. (1) w/o structural attention and w/o temporal attention are DyExplainer without one of the attention modules in the explainable aggregations. (2) w/o consistency regularization and w/o continuity regularization refer to DyExplainer without one of the contrastive regularization terms in the loss. Results are shown in Figure 3. From the figure, we observe that: (1) the performance of w/o structural attention is much worse than that of w/o temporal attention; (2) the performance of w/o consistency regularization is much worse than that of w/o continuity regularization. It is evident from both (1) and (2) that the topological information within a single snapshot holds greater sway over the prediction outcome than the temporal dependencies between snapshots. (3) The results of the ablation studies on regularization and attention further reinforce the superiority of our approach over these alternatives.

Figure 3. Ablation studies on Bitcoin-Alpha.

### Explanations Performance

In order to demonstrate the reliability of the explanations provided by DyExplainer, we conduct quantitative evaluations that compare our approach with various baselines across three datasets characterized by a limited number of edges: UCI-Message, Bitcoin-OTC, and Bitcoin-Alpha. Specifically, we adopt the Fidelity vs. Sparsity
metrics for our evaluation, in accordance with the methodology described in (Srivastava et al., 2017; Wang et al., 2018). The Fidelity metric assesses the accuracy with which the explanations reflect the significance of various factors to the model's predictions, while the Sparsity metric quantifies the proportion of structures that are deemed critical by the explanation techniques. For a fair comparison, for all three methods we use a trained dynamic GNN (Wang et al., 2018) as the base model to calculate the predicted probability. The baseline method, GNNExplainer, was not originally developed for dynamic graph settings. Nevertheless, for a fair comparison, we assess the Fidelity of both DyExplainer and GNNExplainer using only the graph at the final snapshot, despite the fact that DyExplainer provides explanations for all snapshots stored in a buffer. GNNExplainer identifies the most influential nodes within a k-hop neighborhood and generates a mask to highlight these nodes for a given prediction node. As our DyExplainer provides a global attention mechanism for the entire graph, for a fair comparison, we calculate the average Fidelity of each mask produced by GNNExplainer with respect to all of the edges. To obtain the Fidelity and Sparsity metrics for each mask, we select a subset of the top-ranked edges based on the weights from the explanation mask and use this subset to create a new graph. The Fidelity of an edge is defined as the difference between the predicted probability of that edge on the new graph and on the global graph. For DyExplainer, we directly select the top-ranked edges based on the attention weights to form a new graph, as there is only one global attention mechanism that provides the explanation. The comparison of Fidelity and Sparsity is presented in Figure 5. The evaluation of the various methods is based on their Fidelity scores under comparable levels of Sparsity, as the Sparsity level cannot be precisely controlled. From the figure, we see that for all three datasets, the Fidelity of these methods increases as Sparsity increases. This is because Fidelity is calculated as the difference between the predicted probability of the model on the reduced graph and on the original graph, leading to higher Fidelity values at greater Sparsity. Furthermore, GNNExplainer demonstrates superior Fidelity compared to Grad on Bitcoin-OTC and Bitcoin-Alpha, while performing similarly on UCI-Message. Additionally, DyExplainer outperforms both methods on all three datasets at varying Sparsity levels, revealing that it provides more accurate explanations.

### Case Studies

To show the evolution and continuity of the underlying patterns that DyExplainer detects from the dynamic graph, we visualize the attention values of some edges at different snapshots on the Bitcoin-Alpha data set in Figure 4. From this figure, we observe that the edges \((192,62)\), \((193,295)\), \((203,297)\) and \((232,192)\) have similar importance on snapshots 1-3. This indicates that temporal continuity of edge importance exists in dynamic graphs, which supports our intuition. The edges \((175,100)\), \((192,62)\), and \((468,462)\) are less important at snapshots 1-3 while becoming more important at snapshot 7. This suggests that the importance of an edge evolves along with the time steps. We further visualize the temporal attention to show which historical snapshots have more influence on the current one. The importance of snapshots provided by the temporal attention is shown in Figure 6.
From the figure, we see that the most important snapshot to the current one is snapshot 32. To see how the patterns take effect, we randomly pick an edge \((623,26)\) at snapshot 35 and investigate the local structure of this edge at snapshot 32. The visualization of nodes 623 and 26 is shown in Figure 7. Note that these two nodes are unconnected at snapshot 32. The widths of the edges are set according to the weights of the structural attention; thicker edges indicate larger weights. From the figure, we can find three clusters of substructures with larger weights, which can be viewed as important patterns that DyExplainer detects.

Figure 4. Heat map of edge importance over time. We sample some edges shared by continuous snapshots and show their attention values. 0.0 means this edge does not appear in this snapshot.

## 5. Related Work

The goal of explainability in GNNs is to provide transparency and accountability in the predictions of the models, especially when they are used in critical applications such as detecting fraud (Krizhevsky et al., 2017) or medical diagnosis (Krizhevsky et al., 2017). Recently, many works have been proposed to explain GNN predictions, with a focus on diverse facets of the models from various perspectives. According to the type of explanation they provide, the methodologies can be categorized into two main classes: instance-level and model-level methods (Zhu et al., 2018). Instance-level methods explain GNN models by identifying the most influential features of the graph for a given prediction. Among them, some methods employ gradients or feature values to indicate the importance of features. SA (Beng et al., 2016) computes the gradient value as the importance score for each input feature. While it is easy to compute by back-propagation, the results cannot accurately capture the importance of each feature, because the output may change minimally with respect to input changes and a gradient value hardly reflects the input contribution. CAM (Srivastava et al., 2017) maps the node features in the final layer to the input space to identify important nodes. However, the representation from the final layer of a GNN may not reflect the node contribution, because the feature distribution may change after mapping by a neural network. Another kind of method aims to provide instance-level explanations via a surrogate model that approximates the predictions of the original GNN model. GraphLime (Chen et al., 2017) provides a model-agnostic local explanation framework for the node classification task. It adopts the node features and predicted labels in the K-hop neighbors of the predicted node and trains an HSIC Lasso. GNNExplainer (GNNExplainer, 2017) takes a trained GNN and its predictions as inputs to provide explanations for a given instance, e.g., a node or a graph. The explanation includes a compact subgraph structure and a small subset of node features that are crucial to the GNN's prediction for the target instance. However, the explanation provided by GNNExplainer is limited to a single instance, making it difficult to apply GNNExplainer in the inductive setting, because the explanations are hard to generalize to other unexplained nodes. PGExplainer (Zhou et al., 2019) is proposed to provide a global understanding of predictions made by GNNs. It models the underlying structure as edge distributions from which the explanatory graph is sampled.
To explain the predictions of multiple instances, the generation process in PGExplainer is parameterized by a neural network. Similar to PGExplainer, GraphMask (Girshick et al., 2018) trains a classifier to predict whether an edge can be dropped without affecting the original predictions. As a post-hoc method, GraphMask obtains an edge mask for each GNN layer. SubgraphX (Zhou et al., 2019) explains GNN predictions via an efficient exploration of different subgraphs with Monte Carlo tree search. It adopts Shapley values as a measure of subgraph importance that also captures the interactions among different subgraphs. All of these methods provide explanations with respect to the GNN predictions. Different from them, this work provides a high-level understanding to explain the GNN models. The model-level methods, which study what graph patterns can lead to a certain GNN behavior, e.g., an improvement in performance, are particularly related to us. There are fewer studies in this field. The method XGNN (Zhou et al., 2019) aims to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. The explanations from XGNN are general and provide a global understanding of the trained GNNs. Current explainers for GNNs are limited to static graphs, hindering their application in dynamic scenarios. The explanation of dynamic GNNs is an under-studied area. Indeed, a recent work (Zhou et al., 2019) attempted to provide explanations on dynamic graphs by exploring backward relevance, but it is limited to a specific model, TGCN. Besides, the work in (Beng et al., 2019) considers explaining time-series predictions by temporal GNNs. The work (Beng et al., 2019) learns attention weights for a linear combination of node representations in dynamic graphs, but it explains the model by detecting node importance. There are distinct challenges in explaining dynamic GNNs in general. Firstly, existing explanation methods mainly focus on identifying the important parts of the data in relation to the GNN predictions. However, dynamic GNNs make predictions for each snapshot, making it difficult to provide explanations for a particular prediction as it depends on all previous snapshots. Secondly, it is challenging to provide a generalizable solution for interpreting various types of dynamic GNNs. DyExplainer, on the other hand, provides model-level explanations for all dynamic GNNs, overcoming these limitations.

## 6. Conclusion

We present DyExplainer, a pioneering approach to explaining dynamic GNNs in real time. DyExplainer leverages a dynamic GNN backbone to extract representations at each snapshot, concurrently exploring structural relationships and temporal dependencies through a sparse attention mechanism. To ensure structural consistency and temporal continuity in the explanation, our approach incorporates contrastive learning techniques and a buffer-based live-updating scheme. The results of our experiments showcase the superiority of DyExplainer, providing a faithful explanation of the model predictions while concurrently improving the accuracy of the model, as evidenced by the link prediction task.

Figure 5. The quantitative studies for different explanation methods on UCI-Message, Bitcoin-OTC, and Bitcoin-Alpha.

Figure 6. Heat map of temporal importance over time. The current snapshot is at 35. The buffer size is set to 10. This means we consider the importance of the previous snapshots from 27 to 34.

Figure 7.
Local structure of nodes 26 and 623 (red nodes) on snapshot 32. Orange points are their 3-hop neighborhoods. The green edges are obtained from the structural attention weights at snapshot 32.
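As a supplement to the evaluation protocol of Section 4.1, the following is a minimal sketch of the Fidelity (Eq. 12) and Sparsity (Eq. 13) computations; `model` is assumed to be a callable mapping a graph and an edge list to predicted probabilities, and all names are illustrative.

```python
import torch

def fidelity_and_sparsity(model, graph, masked_graph, eval_edges,
                          n_edges_total, n_edges_kept):
    """Fidelity (Eq. 12) and Sparsity (Eq. 13) for one snapshot.

    model:        callable mapping (graph, edge list) to predicted probabilities
    graph:        the full snapshot G^t
    masked_graph: the reduced snapshot built from the important edges in M_t
    eval_edges:   the edges E'_t on which predictions are required
    """
    p_full = model(graph, eval_edges)            # probabilities on G^t
    p_masked = model(masked_graph, eval_edges)   # probabilities on the reduced graph
    fidelity = (p_full - p_masked).abs().mean().item()
    sparsity = 1.0 - n_edges_kept / n_edges_total
    return fidelity, sparsity
```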
2308.13721
Robust Machine Learning Modeling for Predictive Control Using Lipschitz-Constrained Neural Networks
Neural networks (NNs) have emerged as a state-of-the-art method for modeling nonlinear systems in model predictive control (MPC). However, the robustness of NNs, in terms of sensitivity to small input perturbations, remains a critical challenge for practical applications. To address this, we develop Lipschitz-Constrained Neural Networks (LCNNs) for modeling nonlinear systems and derive rigorous theoretical results to demonstrate their effectiveness in approximating Lipschitz functions, reducing input sensitivity, and preventing over-fitting. Specifically, we first prove a universal approximation theorem to show that LCNNs using SpectralDense layers can approximate any 1-Lipschitz target function. Then, we prove a probabilistic generalization error bound for LCNNs using SpectralDense layers by using their empirical Rademacher complexity. Finally, the LCNNs are incorporated into the MPC scheme, and a chemical process example is utilized to show that LCNN-based MPC outperforms MPC using conventional feedforward NNs in the presence of training data noise.
Wallace Tan Gian Yion, Zhe Wu
2023-08-26T01:14:47Z
http://arxiv.org/abs/2308.13721v1
# Robust Machine Learning Modeling for Predictive Control Using Lipschitz-Constrained Neural Networks

###### Abstract

Neural networks (NNs) have emerged as a state-of-the-art method for modeling nonlinear systems in model predictive control (MPC). However, the robustness of NNs, in terms of sensitivity to small input perturbations, remains a critical challenge for practical applications. To address this, we develop Lipschitz-Constrained Neural Networks (LCNNs) for modeling nonlinear systems and derive rigorous theoretical results to demonstrate their effectiveness in approximating Lipschitz functions, reducing input sensitivity, and preventing over-fitting. Specifically, we first prove a universal approximation theorem to show that LCNNs using SpectralDense layers can approximate any 1-Lipschitz target function. Then, we prove a probabilistic generalization error bound for LCNNs using SpectralDense layers by using their empirical Rademacher complexity. Finally, the LCNNs are incorporated into the MPC scheme, and a chemical process example is utilized to show that LCNN-based MPC outperforms MPC using conventional feedforward NNs in the presence of training data noise.

keywords: Lipschitz-Constrained Neural Networks; Robust Machine Learning Model; Generalization Error; Model Predictive Control; Neural Network Sensitivity; Over-fitting

## 1 Introduction

Model Predictive Control (MPC) is an advanced optimization-based control strategy for various chemical engineering processes, such as batch crystallization processes (Kwon et al. (2013, 2014)) and continuous stirred tank reactors (CSTRs) (Chen et al. (1995); Wu (2001)). Machine learning (ML) techniques such as artificial neural networks (ANNs) have been utilized to develop prediction models that are incorporated into the design of MPC. Among various ML-based MPC schemes, an accurate and robust process model for prediction has always been one of the key components that ensure the desired closed-loop performance. Despite the success of ANNs in modeling complex nonlinear systems, one prominent issue is that they can be sensitive to small perturbations in input features. Such sensitivity issues arise naturally when a small change in the input (e.g., due to data noise, perturbation, or artificially generated adversarial inputs) results in a drastic change in the output. For example, in classification problems, adversarial input perturbations have been shown to lead to large variations in NN output, thereby leading to misclassified results (Szegedy et al. (2013)). Additionally, Balda et al. (2019) proposed a novel approach for constructing adversarial examples for regression problems using perturbation analysis of the underlying learning algorithm of the neural network. Since the lack of robustness of NNs could adversely affect the prediction accuracy in ML-based MPC, it is important to address the sensitivity issue for the implementation of NNs in performance-critical applications. To mitigate this issue, adversarial training has been adopted as one of the most effective approaches to train NNs against adversarial examples (Szegedy et al. (2013)). For example, Shaham et al. (2015) proposed a robust optimization framework with a modified loss function that searches for adversarial examples in the vicinity of each training data point. This approach has been empirically shown to be effective against adversarial attacks by both Shaham et al.
(2015) and Madry et al. (2017). However, there is a lack of a provable performance guarantee on the input sensitivity of neural networks for this approach. Therefore, a type of neural network with a fixed Lipschitz constant, termed Lipschitz-Constrained Neural Networks (LCNNs), has received a growing amount of interest in recent years (Baldi and Sadowski (2013); Anil et al. (2019)). One immediate way to control the Lipschitz constant of a neural network is to bound the norms of the weight matrices and use activation functions with bounded derivatives (e.g., the ReLU activation function). However, it is demonstrated in Anil et al. (2019) that this method substantially limits the model capacity of the NNs if component-wise activation functions are used. A recent breakthrough using a special activation function termed GroupSort (Anil et al. (2019)) significantly increases the expressive power of the NNs. Specifically, the GroupSort activation function enables NNs with norm-constrained GroupSort architectures to serve as universal approximators of 1-Lipschitz functions. Using the restricted Stone-Weierstrass theorem, Anil et al. (2019) proved the universal approximation theorem for GroupSort feedforward neural networks with appropriate assumptions on the norms of the weight matrices. However, since the proof in Anil et al. (2019) is based on the \(\infty\)-norm of matrices, the way to demonstrate the universal approximation property of LCNNs using the spectral norm (i.e., the \(\ell_{2}\) matrix norm) remains an open question. The second prominent issue in the development of ANNs is over-fitting, where the networks perform very well on the training data but fail to predict accurately on the test data, which results in a high generalization error. One possible reason for over-fitting is that the training data contains noise that negatively impacts learning performance (Ying (2019)). Additionally, over-fitting occurs when there is insufficient training data, or when there is high hypothesis complexity, in terms of large weights, a large number of neurons, and extremely deep architectures. Therefore, designing neural network architectures that are less prone to over-fitting is a pertinent issue in supervised machine learning. Sabiri et al. (2022) provide an overview of popular solutions to prevent over-fitting. For example, one of the most common solutions to over-fitting is regularization, such as \(\ell_{1}\) or \(\ell_{2}\) regularization (e.g., Moore and DeNero (2011); Cortes et al. (2012)), which imposes the size of the weights as a soft constraint. Other popular solutions include dropout (e.g., Baldi and Sadowski (2014); Srivastava et al. (2014)), where certain neurons are dropped out during training with a specified probability, and early stopping, where the training is stopped using a predefined predicate, usually when the validation error reaches a minimum (e.g., Baldi and Sadowski (2013)). For example, in our previous work Wu et al. (2021), the Monte Carlo dropout technique was utilized in the development of NNs to mitigate the impact of data noise and reduce over-fitting. In addition to the above solutions, LCNNs have been demonstrated to be able to efficiently avoid over-fitting by constraining the Lipschitz constant of a network. However, at this stage, a fundamental understanding of the capability of LCNNs to reduce over-fitting, in terms of their generalization ability over the underlying data distribution, is still missing.
Motivated by the above considerations, in this work, we incorporate LCNNs using SpectralDense layers in MPC and demonstrate that the LCNNs can effectively resolve the two aforementioned issues: sensitivity to input perturbations and over-fitting in the presence of noise. Rigorous theoretical results are developed to demonstrate that LCNNs are provably robust against input perturbations because of their low Lipschitz constant, and provably robust against over-fitting due to their lowered hypothesis complexity, i.e., low Rademacher complexity. The rest of this article is organized as follows. In Section 2, the nonlinear systems that are considered and the application of FNNs in MPC are first presented. In Section 3, we present the formulation of LCNNs using SpectralDense layers, followed by a discussion of their improved robustness against input perturbations. In Section 4, we prove the universal approximation theorem for 1-Lipschitz continuous functions for LCNNs using SpectralDense layers. In Section 5, we develop a probabilistic generalization error bound for LCNNs to show that LCNNs using SpectralDense layers can effectively prevent over-fitting. This is done by computing an upper bound on the empirical Rademacher complexity (ERC) of the function class represented by LCNNs using SpectralDense layers. Finally, in Section 6, we carry out a simulation study of a benchmark chemical reactor example, where we exhibit the superiority of LCNNs over conventional FNNs with dense layers in the presence of noisy training data.

## 2 Preliminaries

### Notations

\(\|W\|_{F}\) and \(\|W\|_{2}\) denote the Frobenius norm and the spectral norm of a matrix \(W\in\mathbb{R}^{n\times m}\), respectively. \(\mathbb{R}^{\geq 0}\) denotes the set of all nonnegative real numbers. A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is continuously differentiable if and only if it is differentiable and the Jacobian of \(f\), denoted by \(J_{f}\), is continuous. Given a vector \(x\in\mathbb{R}^{n}\), \(\|x\|\) denotes the Euclidean norm of \(x\). A metric space is a set \(X\) equipped with a metric function \(d_{X}:X\times X\rightarrow\mathbb{R}^{\geq 0}\) such that 1) \(d_{X}(x,y)=0\) if and only if \(x=y\), 2) \(d_{X}(x,y)=d_{X}(y,x)\) for all \(x,y\in X\), and 3) for all \(x,y,z\in X\), the triangle inequality \(d_{X}(x,z)\leq d_{X}(x,y)+d_{X}(y,z)\) holds. We denote a metric space as an ordered pair \((X,d_{X})\). Given an event \(A\), we denote \(\mathbb{P}(A)\) to be its probability. Given a random variable \(X\), we denote \(\mathbb{E}[X]\) to be its expectation.

### Class of Systems

The nonlinear systems considered in this article can be represented by the following ordinary differential equation (ODE): \[\dot{x}=F(x,u):=f(x)+g(x)u \tag{1}\] Here \(x\in\mathbb{R}^{n}\) is the current state vector, \(u\in\mathbb{R}^{m}\) is the control vector, and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\) are continuously differentiable vector-valued and matrix-valued functions, respectively. We also assume that \(f(0)=0\), so that the origin \((x,u)=(0,0)\) is an equilibrium point. We further assume that there is a continuously differentiable Lyapunov function \(V:D\rightarrow\mathbb{R}^{\geq 0}\) equipped with a controller \(\Phi:D\to U\), such that the origin \((x,u)=(0,0)\) is an exponentially (closed-loop) stable equilibrium point. Here \(D\) and \(U\) are compact subsets of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), respectively, that contain an open set surrounding the origin.
The stability region for this controller \(\Phi(x)\) is then taken to be a sublevel set of \(V\), i.e., \(\Omega_{\rho}:=\{x\mid V(x)\leq\rho\}\) with \(\rho\) positive and \(\Omega_{\rho}\subset D\). Additionally, since \(f,g\) are continuously differentiable, the following inequalities can be readily derived for any \(x,x^{\prime}\in D\), \(u\in U\), and some constants \(K_{F}\), \(L_{x}\): \[\|F(x,u)\|\leq K_{F} \tag{2a}\] \[\|F(x^{\prime},u)-F(x,u)\|\leq L_{x}\|x^{\prime}-x\| \tag{2b}\]

### Feedforward Neural Network (FNN)

This subsection gives a short summary of the development of FNNs for the nonlinear systems represented by Eq. 1. Specifically, since the FNN is developed as the prediction model for model predictive controllers, we consider FNNs that are built to capture the nonlinear dynamics of Eq. 1, whereby the control actions are applied using the sample-and-hold method. This means that given an initial state \(x_{0}\), the control action \(u\) applied is constant throughout the entire sampling period \(\Delta>0\). Suppose that the system state is currently at \(x(0)=x_{0}\). From Eq. 2b and the Picard-Lindelof Theorem for ODEs, there exists a unique state trajectory \(x(t)\) such that \[x(t)=x_{0}+\int_{0}^{t}F(x(s),u)ds \tag{3}\] We can then define \(\tilde{F}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) by \(\tilde{F}(x_{0},u)=x(\Delta)\), which is the function that takes the current state and the control action as inputs and predicts the state \(\Delta\) time later. This function can then be approximated by an FNN, such as an LCNN, and the approximation is denoted by \(\tilde{F}_{nn}\). In order to develop the neural networks, open-loop simulations of the ODE system represented by Eq. 1 using varying control actions will be conducted to generate the required dataset. Specifically, we perform a sweep of all possible initial states \(x_{0}\in\Omega_{\rho}\) and control actions \(u\in U\), and use the forward Euler method with integration time step \(h_{c}\ll\Delta\) to deduce the value of \(\tilde{F}(x_{0},u)\), that is, the state at \(t=\Delta\). The training dataset consists of all such possible pairs \((x_{0},u)\) (FNN inputs) and \(\tilde{F}(x_{0},u)\) (FNN outputs), where \(\tilde{F}(x_{0},u)\) is the actual future state. Therefore, the two functions \(x_{t+1}:=\tilde{F}(x_{t},u_{t})\) and \(x_{t+1}:=\tilde{F}_{nn}(x_{t},u_{t})\) define two distinct nonlinear discrete-time systems, respectively, where \(x_{t+1}\) denotes the state at \(t+\Delta\). Since \(\tilde{F}_{nn}\) will be the function used in the MPC optimization algorithm, ensuring that \(\tilde{F}_{nn}\) is an accurate approximation of \(\tilde{F}\) is necessary so that the neural network captures the nonlinear dynamics well. In general, the FNN model \(\tilde{F}_{nn}\) should be developed with sufficient training data and an appropriate architecture, in terms of the number of neurons and layers, in order to achieve the desired prediction accuracy on both the training and test sets. However, in the presence of insufficient training data or noisy data, over-fitting might occur, leading to a model that performs poorly on the test dataset and generalizes poorly.
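For concreteness, the open-loop data-generation procedure described above can be sketched as follows; the callable `F`, the grids of initial states and inputs, and the step sizes are illustrative assumptions rather than the exact settings of our simulation study.

```python
import numpy as np

def generate_dataset(F, x0_samples, u_samples, Delta=0.01, hc=1e-4):
    """Open-loop simulation with sample-and-hold inputs and forward Euler.

    F:          callable F(x, u) returning dx/dt (Eq. 1)
    x0_samples: iterable of initial states sampled from the stability region
    u_samples:  iterable of constant control actions sampled from U
    Delta:      sampling period (the FNN prediction horizon)
    hc:         Euler integration step, hc << Delta
    """
    inputs, outputs = [], []
    n_steps = int(round(Delta / hc))
    for x0 in x0_samples:
        for u in u_samples:
            x = np.array(x0, dtype=float)
            for _ in range(n_steps):      # u is held constant over [0, Delta)
                x = x + hc * F(x, u)
            inputs.append(np.concatenate([x0, np.atleast_1d(u)]))
            outputs.append(x)             # x(Delta), i.e., F_tilde(x0, u)
    return np.array(inputs), np.array(outputs)
```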
Also, the developed FNN model \(\tilde{F}_{nn}\) should not be overly sensitive to input perturbations (that is, it should not have overly large gradients) in order to ensure that \(\tilde{F}_{nn}\) can generalize well to data points outside the training dataset but within the desired domain \(D\times U\). Therefore, to address the issues of over-fitting and sensitivity, we will develop LCNNs for the nonlinear system of Eq. 1 in this work and show that LCNNs can overcome the limitations of conventional FNNs with dense layers by lowering sensitivity and preventing over-fitting.

## 3 Lipschitz-Constrained Neural Network Models Using SpectralDense Layers

In this section, the architecture of LCNNs using SpectralDense layers will first be introduced, followed by a discussion of the reduced sensitivity of LCNNs to input perturbations as compared to conventional FNNs. First, we begin with an important definition.

_Definition 1_.: A function \(f:X\to Y\) where \(X\subset\mathbb{R}^{n}\) and \(Y\subset\mathbb{R}^{m}\) is **Lipschitz continuous** with Lipschitz constant \(L\) (or \(L\)-Lipschitz) if \(\forall x,y\in X\), one has \[\|f(x)-f(y)\|\leq L\cdot\|x-y\|\]

It is readily shown that if \(f\) is \(L\)-Lipschitz continuous, then given a small perturbation to the input, the output \(f(x)\) changes by at most \(L\) times the magnitude of that perturbation. As a result, as long as the Lipschitz constant of a neural network is constrained to be a small value, the network is less sensitive to input perturbations. In the next subsection, we demonstrate that LCNNs using SpectralDense layers have a constrained small Lipschitz constant, where each of the SpectralDense layers has a Lipschitz constant of 1.

### SpectralDense layers

The mathematical definition of the SpectralDense layers used to construct an LCNN is first presented. Recall that by the singular value decomposition (SVD), for any \(W\in\mathbb{R}^{m\times n}\), there exist orthogonal matrices \(U\in\mathbb{R}^{m\times m}\), \(V\in\mathbb{R}^{n\times n}\), and a rectangular diagonal matrix \(D\in\mathbb{R}^{m\times n}\) with nonnegative entries such that \(W=UDV^{T}\). First, we recall the definition of a conventional dense layer:

_Definition 2_.: A **dense layer** is a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) of the form \[f:x\to\sigma(Wx+b)\] where \(\sigma:\mathbb{R}^{m}\to\mathbb{R}^{m}\) is an activation function, \(W\in\mathbb{R}^{m\times n}\) is a weight matrix, and \(b\in\mathbb{R}^{m}\) is a bias term.

A dense layer is a layer that is fully connected with its preceding layer (i.e., the neurons of the layer are connected to every neuron of the preceding layer). It should be noted that in conventional dense layers, the activation function \(\sigma\) is applied component-wise, that is, \(\sigma(x_{1},x_{2},\ldots,x_{m})=(\sigma^{\prime}(x_{1}),\sigma^{\prime}(x_{2}),\ldots,\sigma^{\prime}(x_{m}))\) where \(\sigma^{\prime}:\mathbb{R}\to\mathbb{R}\) is a real-valued function, such as ReLU or \(\tanh\). However, in SpectralDense layers, the following GroupSort function is used as the activation function \(\sigma\):

_Definition 3_.: (Anil et al.
(2019)) The **GroupSort function** (of group size 2) is a function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) defined as follows: If \(m\) is even, then \[\sigma([x_{1},x_{2},\cdots,x_{m-1},x_{m}]^{T})=[\max(x_{1},x_{2}),\min(x_{1},x_{2}),\cdots,\max(x_{m-1},x_{m}),\min(x_{m-1},x_{m})]^{T}\] (4a) else, if \(m\) is odd, \[\sigma([x_{1},x_{2},\cdots,x_{m-2},x_{m-1},x_{m}]^{T})=[\max(x_{1},x_{2}),\min(x_{1},x_{2}),\cdots,\max(x_{m-2},x_{m-1}),\min(x_{m-2},x_{m-1}),x_{m}]^{T} \tag{4b}\] For example, in the case where the output layer has dimension \(m=4\), we have \[\sigma([0,3,4,2]^{T})=[3,0,4,2]^{T}\quad\text{and}\quad\sigma([5,3,2,4]^{T})=[5,3,4,2]^{T} \tag{4c}\] SpectralDense layers can now be defined as follows:

_Definition 4_.: (Serrurier et al. (2021)) **SpectralDense layers** are dense layers such that 1) the largest singular value of \(W\) is 1, and 2) the activation function \(\sigma\) is the GroupSort function.

Therefore, SpectralDense layers are similar to dense layers in terms of their structure, except that the activation function does not act component-wise and the weight matrices have a spectral norm of 1. The spectral norm of a matrix is equal to the largest singular value in its SVD. Since the largest singular value of the weight matrix \(W\) is 1, the spectral norm \(\|W\|_{2}\) is also 1. The GroupSort function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) is also 1-Lipschitz continuous (with respect to the Euclidean norm), since its Jacobian has spectral norm 1 almost everywhere (everywhere except a set of measure 0) (see Theorem 3.1.6 in Federer (2014)). We therefore conclude that every SpectralDense layer is 1-Lipschitz continuous. Next, the definition of the class of LCNNs is given as follows.

_Definition 5_.: Let \(\mathcal{LN}_{n}^{m}\) be the class of Lipschitz-constrained neural networks (LCNNs) defined as follows: \[\begin{array}{l}\mathcal{LN}_{n}^{m}:=\{\,f\ |\ f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\,,\ \exists j\in\mathbb{N}\text{ such that }f=W_{j+1}f_{j}\circ f_{j-1}\circ...\circ f_{2}\circ f_{1},\\ \text{ where }f_{i}=\sigma(W_{i}x+b_{i}),\text{ and }\|W_{i}\|_{2}=1,i=1,...,j\,\}\end{array} \tag{5}\] where \(\sigma\) is the GroupSort activation with group size 2.

Thus, each LCNN in \(\mathcal{LN}_{n}^{m}\) consists of many SpectralDense layers (i.e., \(W_{i}\), \(i=1,...,j\)) composed together, with a final weight matrix \(W_{j+1}\) at the end. The spectral norm constraint is imposed on all weight matrices except the final weight matrix \(W_{j+1}\). Since the Lipschitz constant of each of the functions \(f_{i}\), \(i=1,...,j\), is bounded by \(1\), it is readily shown that for each neural network in \(\mathcal{LN}_{n}^{m}\), the Lipschitz constant is bounded by the spectral norm of the final weight matrix \(W_{j+1}\). This implies that we can control the Lipschitz constant of an LCNN by manipulating the spectral norm of the final weight matrix \(W_{j+1}\), and this can be done in a variety of ways, such as imposing the constraint on the final weight matrix as a whole, or restricting the absolute value of each entry in the final weight matrix during the training process. In the simulation study in Section 6, we impose the absolute value constraint on each entry of the final weight matrix to control the Lipschitz constant of the LCNN.

_Remark 1_.: The SpectralDense layers adopted in this work differ from those used in Anil et al. (2019), since the matrix norm used in this work is the spectral norm, whereas the norm used by Anil et al.
is the \(\infty\)-norm. Although all matrix norms give rise to equivalent topologies (the same open sets), we use the spectral norm as it is directly related to the Jacobian of the function. Specifically, a well-known theorem by Rademacher (see Lemma 3.1.7 of Federer (2014) for a proof) states that if \(X\subset\mathbb{R}^{n}\) is open and \(f:X\to\mathbb{R}^{m}\) is \(L\)-Lipschitz continuous, then \(f\) is almost everywhere differentiable, and we have \[L=\sup_{x\in X}\|J_{f}(x)\|_{2} \tag{6}\] Therefore, the Lipschitz constant using the Euclidean norm on the input space \(\mathbb{R}^{n}\) and output space \(\mathbb{R}^{m}\) is actually the supremum of the spectral norm of the Jacobian matrix of \(f\). If the \(\infty\)-norm were to be used, to the best of our knowledge, no such essential relationship involving the Lipschitz constant has been proven to date.

### Robustness of LCNNs

We now discuss how LCNNs using SpectralDense layers can resolve the issue of sensitivity to input perturbations. Let \(f:X\to\mathbb{R}^{m}\) be a neural network that has been trained using a training algorithm, and let \(X\subset\mathbb{R}^{n}\) be an open subset such that \(f\) is almost everywhere differentiable with Jacobian \(J_{f}\). Given a set of training data, at each point \(x\in X\), one plausible way to maximize the impact of an input perturbation is to traverse along the direction corresponding to the largest singular value (the spectral norm) of \(J_{f}(x)\) in its SVD, since this is the direction that leads to the largest variation in the output (Szegedy et al. (2013); Goodfellow et al. (2014)). For any \(f\in\mathcal{LN}_{n}^{m}\), Eq. 5 shows that the Lipschitz constant of \(f\) is bounded by the spectral norm of the final weight matrix. If the spectral norm of the final matrix is small, the corresponding LCNN will have a small Lipschitz constant, making it difficult to perturb, even if we travel along the direction corresponding to the largest singular value of \(J_{f}\). Specifically, if the input perturbation is of size \(\delta\), the output change is at most \(L\times\delta\), where \(L\) is constrained to be a small value. Therefore, one plausible method to reduce the sensitivity of neural networks to input perturbations is to constrain the Lipschitz constant of the networks. However, since the Lipschitz constant affects the network capacity, controlling the upper bound of the Lipschitz constant in LCNNs could result in a reduced network capacity. To address this issue, we will demonstrate in the next section that the function class \(\mathcal{LN}_{n}^{m}\) is a universal approximator for any Lipschitz continuous target function. Additionally, a pertinent question that arises is whether, in practice, the Lipschitz constants of conventional FNNs (e.g., FNNs using conventional dense layers and ReLU activation functions) are indeed much larger than those of LCNNs. In the special case of FNNs with ReLU activation functions, Bhowmick et al. (2021) have designed a provably correct approximation algorithm known as Lipschitz Branch and Bound (LipBaB), which obtains the Lipschitz constant of such networks on a compact rectangular domain. In Section 6.7, we will demonstrate empirically that with noisy training data, FNNs with dense layers can have a Lipschitz constant several orders of magnitude higher than that of LCNNs.
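To illustrate the components discussed in this section, the following is a minimal NumPy sketch of a GroupSort activation (group size 2), a spectral-norm-1 weight projection, and an empirical check of the resulting 1-Lipschitz bound; it is a sketch under these assumptions, not our training implementation, which constrains the weights during optimization.

```python
import numpy as np

def group_sort(z):
    """GroupSort of group size 2: sort each consecutive pair in descending
    order; an odd trailing element is left unchanged (Definition 3)."""
    z = z.copy()
    n = len(z) // 2 * 2
    pairs = z[:n].reshape(-1, 2)
    z[:n] = np.sort(pairs, axis=1)[:, ::-1].reshape(-1)
    return z

def project_spectral(W):
    """Rescale W so that its largest singular value (spectral norm) is 1."""
    return W / np.linalg.norm(W, ord=2)

rng = np.random.default_rng(0)
W1 = project_spectral(rng.normal(size=(6, 4)))
W2 = project_spectral(rng.normal(size=(1, 6)))   # final weight matrix
lcnn = lambda x: W2 @ group_sort(W1 @ x)

# Empirical check: the network is 1-Lipschitz, so the output change is
# bounded by the input change.
x = rng.normal(size=4)
dx = 1e-3 * rng.normal(size=4)
assert np.linalg.norm(lcnn(x + dx) - lcnn(x)) <= np.linalg.norm(dx) + 1e-12
```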
## 4 Universal Approximation Theorem for LCNNs

This section develops the universal approximation theorem for LCNNs using SpectralDense layers, which demonstrates that LCNNs with a bounded Lipschitz constant can approximate any nonlinear function as long as the target function is Lipschitz continuous. Before we present the proof for vector-valued LCNNs that are developed for nonlinear systems with vector-valued outputs such as that of Eq. 1, we first develop a theorem that considers the approximation of real-valued functions. Then, the results for real-valued functions can be generalized to the multi-dimensional output case. We first define the real-valued function class as follows. \[\mathcal{LN}_{n}:=\{f\mid f:\mathbb{R}^{n}\rightarrow\mathbb{R},\,\exists j\in\mathbb{N},s.t.\,f=f_{j}\circ...\circ f_{2}\circ f_{1},f_{i}=\sigma(W_{i}x+b_{i}),\|W_{i}\|_{2}=1,i=1,...,j\} \tag{7}\] where \(\sigma\) is the GroupSort activation with group size 2. The definition of Eq. 7 is similar to that of Eq. 5, except that the functions are real-valued, and the spectral norm of each weight matrix is 1, including the final weight matrix. Note that the final map \(f_{j}\) is an affine map, without any sorting, since the output of \(f_{j}\) is a single real number. It readily follows that any function in \(\mathcal{LN}_{n}\) is 1-Lipschitz continuous, since every layer is 1-Lipschitz and all the weight matrices, including the final one, have spectral norm one. Given a target function \(F:D\rightarrow\mathbb{R}\) where \(D\) is a compact and connected domain and \(F\) is Lipschitz continuous, we prove that real-valued functions from \(\mathcal{LN}_{n}\) are universal approximators of 1-Lipschitz functions, provided that we allow for an amplification of at most \(\sqrt{2}\) at the end. In principle, this implies that LCNNs can approximate any Lipschitz function (i.e., they are dense with respect to the uniform norm on a compact set). We will follow the notation in Anil et al. (2019), but with slight modifications. We first present the following definitions, which will be used in the proof of the universal approximation theorem.

_Definition 6_.: Let \((X,d_{X})\) be a metric space. We use \(C_{L}(X,\mathbb{R})\) to denote the set of all 1-Lipschitz real-valued functions on \(X\).

_Definition 7_.: Let \(A\) be a set of functions from \(\mathbb{R}^{n}\) to \(\mathbb{R}\), and let \(k\) be a real number. We define \(kA\) as follows. \[kA:=\{\;cf\mid|c|\leq k,\;f\in A\} \tag{8}\]

_Definition 8_.: A **lattice** \(\mathcal{L}\) in \(C_{L}(X,\mathbb{R})\) is a set of functions that is closed under point-wise maximums and minimums, that is, \(\forall f,g\in\mathcal{L}\), \(\min(f,g),\max(f,g)\in\mathcal{L}\).

The following restricted Stone-Weierstrass theorem allows us to approximate 1-Lipschitz continuous functions using lattices.

**Theorem 1** (Restricted Stone-Weierstrass, Anil et al. (2019)).: _Let \((X,d_{X})\) be a compact metric space and \(\mathcal{L}\) be a lattice in \(C_{L}(X,\mathbb{R})\). Suppose that for all \(a,b\in\mathbb{R}\) and \(x,y\in X\) such that \(|a-b|\leq d_{X}(x,y)\), there exists an \(f\in\mathcal{L}\) such that \(f(x)=a\) and \(f(y)=b\). Then \(\mathcal{L}\) is dense in \(C_{L}(X,\mathbb{R})\) with respect to the uniform topology, that is, for every \(\epsilon>0\) and for every \(f\in C_{L}(X,\mathbb{R})\), there exists an \(\tilde{f}\in\mathcal{L}\) such that_ \[\sup_{x\in X}\,|f(x)-\tilde{f}(x)|<\epsilon \tag{9}\]

The proof of Theorem 1 is given in Anil et al. (2019) and is omitted here.
Based on Theorem 1, we develop the following theorem to prove that LCNNs constructed using SpectralDense layers can also serve as universal approximators, if we allow for an amplification of the output at the end. The proof uses Theorem 1 to show that \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) is a lattice. The proof techniques and structure are similar to those in Anil et al. (2019), while the key difference is the use of the SVDs of the weight matrices, since we are using the spectral norm instead.

**Theorem 2**.: _Let \(D\subset\mathbb{R}^{n}\) be a compact subset, and \(\mathcal{LN}_{n}\) be the set of LCNNs defined in Eq. 7. \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) is dense in \(C_{L}(D,\mathbb{R})\) with respect to the uniform topology, i.e., for every \(\epsilon>0\) and for every \(f\in C_{L}(D,\mathbb{R})\), there exists an \(\tilde{f}\in\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) such that_ \[\sup_{x\in D}|f(x)-\tilde{f}(x)|<\epsilon \tag{10}\]

Proof.: Theorem 2 states that if we allow for an amplification of \(\sqrt{2}\) at the end, then the LCNNs using SpectralDense layers defined in Eq. 7 can approximate 1-Lipschitz continuous functions arbitrarily accurately. To prove Eq. 10, we first show that \(\mathcal{L}:=\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) satisfies the assumptions needed for the restricted Stone-Weierstrass theorem. We first verify that for all \(a,b\in\mathbb{R}\) and \(x,y\in D\) such that \(|a-b|\leq\|x-y\|\), there exists an \(f\in\mathcal{L}\) such that \(f(x)=a\) and \(f(y)=b\). To construct such an \(f\), let \(f(v)=w^{T}(v-x)+a\) and choose \(w\in\mathbb{R}^{n}\) with \(\|w\|=1\) carefully so that this holds. We need to ensure that \(w^{T}(y-x)=b-a\) so that \(f(y)=b\), which is feasible because \[b-a=w^{T}(y-x)\leq\|w\|\cdot\|y-x\|=\|y-x\| \tag{11}\] We can first set \(w\) to be in the same direction as \(y-x\), and then gradually rotate \(w\) away from \(y-x\) (for example, using a suitable orthogonal matrix); since \(w^{T}(y-x)\) then varies continuously from \(\|y-x\|\) to \(-\|y-x\|\), by the intermediate value theorem we eventually have \(b-a=w^{T}(y-x)\). Next, we need to show that \(\mathcal{L}\) is a lattice. This is equivalent to showing that \(\mathcal{L}\) is closed under pointwise maximums and minimums by Definition 8. Following the same proof as in Anil et al. (2019), we assume that \(f,g\in\mathcal{L}\) are defined by their weights and biases: \[[W_{1}^{f},b_{1}^{f},W_{2}^{f},b_{2}^{f},\ldots,W_{d_{f}}^{f},b_{d_{f}}^{f}]~{}~{}[W_{1}^{g},b_{1}^{g},W_{2}^{g},b_{2}^{g},\ldots,W_{d_{g}}^{g},b_{d_{g}}^{g}] \tag{12}\] where \(d_{f}\) and \(d_{g}\) represent the depths of the networks \(f\) and \(g\), respectively. We assume without loss of generality that the two neural networks \(f\) and \(g\) have the same depth, i.e., \(d_{f}=d_{g}\). This is possible since if \(d_{f}>d_{g}\), one can pad the shallower network with identity weight matrices and zero biases until the two networks are of the same depth, and likewise for \(d_{f}<d_{g}\). We also assume without loss of generality that each of the weight matrices (except the final weight matrix) has an even number of rows. In the case where the neural network has a weight matrix with an odd number of rows, the weight matrix can be padded with a zero row under the last row and with a bias \(-M\) in that row, where \(M>0\) is sufficiently large (this is possible since \(D\) is compact, by the extreme value theorem). This is to prevent a different sorting configuration of the output vector of that layer.
Then, in the next matrix, we add a column of zeros to remove the \(-M\) entry. We now construct a neural network \(h\) for \(\max(f,g)\) and \(\min(f,g)\) with new weights such that each of the weights satisfies \(\|W_{i}^{h}\|_{2}=1\) for \(i=1,\cdots,d_{f}\); the amplification factor of at most \(\sqrt{2}\) will be applied later. We first construct a suitable neural network with the layers of \(f\) and \(g\) side by side, but with some modifications. Specifically, the first matrix \(W_{1}^{h}\) and the first bias \(b_{1}^{h}\) are designed as follows: \[W_{1}^{h}=c[W_{1}^{f},W_{1}^{g}]^{T}~{}~{}b_{1}^{h}=c[b_{1}^{f},b_{1}^{g}] \tag{13}\] where \(c\) is a positive constant chosen based on \(W_{1}^{f}\) and \(W_{1}^{g}\) to ensure that \(\|W_{1}^{h}\|_{2}=1\). From Eq. 13, it is shown that the output of the first layer is obtained by concatenating the outputs of the first layers of \(f\) and \(g\) together, multiplied by the positive constant \(c\). Note that \(\frac{1}{\sqrt{2}}\leq c\leq 1\) since each of \(W_{1}^{f}\) and \(W_{1}^{g}\) has spectral norm \(1\). Then, for the rest of the layers (\(i\geq 2\)), we define \[W_{i}^{h}=\begin{bmatrix}W_{i}^{f}&0\\ 0&W_{i}^{g}\end{bmatrix} \tag{14}\] To prove that \(\|W_{i}^{h}\|_{2}=1\), we use the singular value decomposition. \[W_{i}^{h} =\begin{bmatrix}W_{i}^{f}&0\\ 0&W_{i}^{g}\end{bmatrix}=\begin{bmatrix}U_{i}^{f}D_{i}^{f}{V_{i}^{f}}^{*}&0\\ 0&U_{i}^{g}D_{i}^{g}{V_{i}^{g}}^{*}\end{bmatrix} \tag{15a}\] \[=\begin{bmatrix}U_{i}^{f}&0\\ 0&U_{i}^{g}\end{bmatrix}\begin{bmatrix}D_{i}^{f}&0\\ 0&D_{i}^{g}\end{bmatrix}\begin{bmatrix}{V_{i}^{f}}^{*}&0\\ 0&{V_{i}^{g}}^{*}\end{bmatrix}\] (15b) \[=UDV^{*} \tag{15c}\] It is noted that \(\|W_{i}^{h}\|_{2}\) is simply the largest singular value of its singular value decomposition. On the right-hand side (RHS) of Eq. 15c, even though the matrix \(D\) is not a rectangular diagonal matrix, we can permute the columns of \(D\) and the rows of \(V^{*}\) simultaneously, and then the rows of \(D\) and the columns of \(U\) simultaneously, to obtain a new rectangular diagonal matrix. Permuting the columns of \(U\) and the rows of \(V^{*}\) does not change the unitary property of these matrices. Therefore, the largest singular value of \(W_{i}^{h}\) is 1 since both \(D_{i}^{f}\) and \(D_{i}^{g}\) have largest singular values of 1, and therefore \(\|W_{i}^{h}\|_{2}=1\). We also take the following. \[b_{i}^{h}=c[b_{i}^{f},b_{i}^{g}] \tag{16}\] for each of the biases. Since the GroupSort activation is positively homogeneous and every bias is scaled by \(c\), the output of each layer equals \(c\) times the concatenation of the corresponding layer outputs of \(f\) and \(g\); in particular, after passing through the final GroupSort activation function, the output of the last layer is \[c[\max(f(x),g(x)),\min(f(x),g(x))]^{T} \tag{17}\] By passing Eq. 17 through the weight matrix \([1,0]\) or \([0,1]\), we obtain \(h(x)=c\max(f(x),g(x))\) or \(h(x)=c\min(f(x),g(x))\), respectively. Since we consider the set \(\mathcal{L}=\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\), and \(1\leq c^{-1}\leq\sqrt{2}\), we observe that \(c^{-1}\,h(x)=\max(f(x),g(x))\) or \(c^{-1}\,h(x)=\min(f(x),g(x))\). This proves that \(\max(f(x),g(x))\) and \(\min(f(x),g(x))\) are in \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\), which implies that \(\mathcal{L}\) is a lattice. Therefore, \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) satisfies the assumptions needed for the restricted Stone-Weierstrass theorem in Theorem 1, and this completes the proof. Theorem 2 implies that for 1-Lipschitz target functions, if we allow for an amplification of \(\sqrt{2}\) at the last layer, then LCNNs using SpectralDense layers are universal approximators for the target function.
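The construction in the proof of Theorem 2 can also be checked numerically. The following sketch (an illustration, not part of the paper) builds \(h\) for two single-layer 1-Lipschitz functions \(f\) and \(g\): the stacked first layer is rescaled by \(c\), a single GroupSort pair produces \([c\max,c\min]\), and the final weight \([1,0]\) or \([0,1]\) selects \(c\max(f,g)\) or \(c\min(f,g)\).

```python
import numpy as np
rng = np.random.default_rng(1)

# Two 1-Lipschitz single-layer functions f and g with unit-norm weight rows
w_f, b_f = rng.standard_normal(3), 0.3
w_g, b_g = rng.standard_normal(3), -0.5
w_f, w_g = w_f / np.linalg.norm(w_f), w_g / np.linalg.norm(w_g)
f = lambda x: w_f @ x + b_f
g = lambda x: w_g @ x + b_g

# Construction from the proof: stack the first layers and rescale by c (Eq. 13)
W_stack = np.vstack([w_f, w_g])
c = 1.0 / np.linalg.svd(W_stack, compute_uv=False)[0]   # ||c * W_stack||_2 = 1
assert 1 / np.sqrt(2) - 1e-9 <= c <= 1 + 1e-9           # bound on c
W1, b1 = c * W_stack, c * np.array([b_f, b_g])

def h(x, selector):
    z = np.sort(W1 @ x + b1)[::-1]    # GroupSort of the single pair: [max, min]
    return selector @ z               # [1, 0] -> c*max(f, g); [0, 1] -> c*min(f, g)

x = rng.standard_normal(3)
assert np.isclose(h(x, np.array([1.0, 0.0])) / c, max(f(x), g(x)))
assert np.isclose(h(x, np.array([0.0, 1.0])) / c, min(f(x), g(x)))
```

Amplifying the output by \(c^{-1}\in[1,\sqrt{2}]\) recovers \(\max(f,g)\) and \(\min(f,g)\) exactly, which is precisely why the factor \(\sqrt{2}\) appears in the statement of the theorem.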
Similarly, LCNNs can approximate an \(L\)-Lipschitz continuous function so long as an amplification of \(\sqrt{2}L\) is allowed. The final amplification can be easily implemented by using a suitable weight matrix, such as a constant multiple of the identity matrix, as the final layer. _Remark 2_.: The above theorem can be generalized to regression problems from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\). Specifically, a similar result can be derived for LCNNs developed for \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) by approximating each component function \(f_{i}\) for \(i=1,...,m\) with a 1-Lipschitz neural network, and then padding the neural networks together to form a large neural network with \(m\) outputs. Note, however, that the resulting approximating function might be \(\sqrt{m}\)-Lipschitz continuous. ## 5 Preventing Over-fitting and Improving Generalization with LCNNs In this section, we provide a theoretical argument showing that LCNNs can prevent over-fitting and therefore generalize better than conventional FNNs, by computing the empirical Rademacher complexity (ERC) of LCNNs and comparing it with that of conventional FNNs of the same architecture (i.e., the same number of neurons in each layer and the same depth). Specifically, we develop a bound on the ERC for any FNN that utilizes the GroupSort function (hereby called GroupSort neural networks), and subsequently use this to obtain a bound for LCNNs using SpectralDense layers as an immediate corollary. Then, we compare this bound with the ERC bound of FNNs using 1-Lipschitz component-wise activation functions (e.g., ReLU) and show that the GroupSort NNs achieve a tighter generalization error bound. ### Assumptions and Preliminaries We first assume that the input domain is a bounded subset \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\), where for each \(x\in\mathcal{X}\), one has \(\|x\|\leq B\), and the output space is a subset of the vector space \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\). We also assume that each input-output pair \((x,y)\) in the dataset is drawn from \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\) according to some probability distribution \(\mathbb{P}\). We assume that \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is a loss function that satisfies the following two properties: 1) there exists a positive \(M>0\) such that \(L(y,y^{\prime})\leq M\) for all \(y,y^{\prime}\in\mathcal{Y}\), and 2) for all \(y^{\prime}\in\mathcal{Y}\), the function \(y\to L(y,y^{\prime})\) is \(L_{r}\)-Lipschitz for some \(L_{r}>0\). Let \(h:\mathcal{X}\rightarrow\mathcal{Y}\) be a function that represents a neural network model (termed hypothesis) in a hypothesis class. _Definition 9_.: We define the **generalization error** as \[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}\left[L(h(x),y)\right]=\int_{\mathcal{X}\times\mathcal{Y}}L(h(x),y)\,\mathbb{P}(dx\times dy) \tag{18}\] _Definition 10_.: For any dataset \(S=\{s_{1},s_{2},\cdots,s_{m}\}\subset\mathcal{X}\times\mathcal{Y}\) with \(s_{i}=(x_{i},y_{i})\), we define the **empirical error** as \[\hat{\mathbb{E}}_{S}[L(h(x),y)]=\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i}) \tag{19}\] The generalization error is a measure of how well a hypothesis \(h:\mathcal{X}\rightarrow\mathcal{Y}\) generalizes from the training dataset to the entire domain being considered.
On the other hand, the empirical error is a measure of how well the hypothesis \(h\) performs on the available data points \(S\). _Remark 3_.: In this work, we use the \(\ell_{2}\) error for the loss function \(L\) in the NN training process, i.e., \(L(y,y^{\prime})=\|y-y^{\prime}\|^{2}\), which is locally Lipschitz continuous but not globally Lipschitz continuous. However, since we consider an input domain of \(D\times U\), which is a compact set, and the function \(\tilde{F}\) is also continuous, the range of \(\tilde{F}\) is compact and hence bounded. As a result, the output domain \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) is bounded, and the assumptions for the loss function \(L(\cdot,\cdot)\) are satisfied using the \(\ell_{2}\) error. ### Empirical Rademacher Complexity Bound for GroupSort Neural Networks We first define the empirical Rademacher complexity of a real-valued function hypothesis class. _Definition 11_.: Given an input domain space \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\), suppose \(H\) is a class of real-valued functions from \(\mathcal{X}\) to \(\mathbb{R}\). Let \(S=\{x_{1},x_{2},\cdots,x_{m}\}\subset\mathcal{X}\), which is a set of samples from \(\mathcal{X}\). The **empirical Rademacher complexity** (ERC) of \(S\) with respect to \(H\), denoted by \(\mathcal{R}_{S}(H)\), is defined as: \[\mathcal{R}_{S}(H):=\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in H}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i}) \tag{20}\] where each of the \(\epsilon_{i}\) are i.i.d. Rademacher variables, i.e., with \(\mathbb{P}(\epsilon_{i}=1)=\frac{1}{2}\) and \(\mathbb{P}(\epsilon_{i}=-1)=\frac{1}{2}\). The ERC measures the richness of a real-valued function hypothesis class with respect to a probability distribution. A more complex hypothesis class with a larger ERC is likely to represent a richer variety of functions, but may also lead to over-fitting, especially in the presence of noise or insufficient data. The ERC is often used to obtain a probabilistic bound on the generalization error in statistical machine learning. Specifically, we first present the following theorem in Mohri et al. (2018) that obtains a probabilistic upper bound on the generalization error for a hypothesis class. **Theorem 3** (Theorem 3.3 in Mohri et al. (2018)).: _Let \(\mathcal{H}\) be a hypothesis class of functions \(h:\mathcal{X}\subset\mathbb{R}^{d_{x}}\rightarrow\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) be the loss function that satisfies the properties in Section 5.1. Let \(\mathcal{G}\) be a hypothesis class of loss functions associated with the hypotheses \(h\in\mathcal{H}\):_ \[\mathcal{G}:=\{g:(x,y)\to L(h(x),y)\,,\,h\in\mathcal{H}\} \tag{21}\] _For any set of \(m\) i.i.d. training samples \(S=\{s_{1},s_{2},\cdots,s_{m}\}\), \(s_{i}=(x_{i},y_{i})\), \(i=1,...,m\), drawn from a probability distribution \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\), for any \(\delta\in(0,1)\), the following upper bound holds with probability \(1-\delta\) :_ \[\underset{(x,y)\sim\mathcal{D}}{\mathrm{E}}[L(h(x),y)]\leq\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i})+2\mathcal{R}_{S}(\mathcal{G})+3M\sqrt{\frac{\log\frac{1}{\delta}}{2m}} \tag{22}\] Eq. 22 in Theorem 3 explains why an overtly large hypothesis class with a high degree of complexity could inevitably increase the generalization error.
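To make Definition 11 concrete, the following is a minimal Monte-Carlo sketch (an illustration, using an analytically tractable linear hypothesis class rather than a neural network) that estimates \(\mathcal{R}_{S}(H)\) for \(H=\{x\mapsto w^{T}x:\|w\|_{2}\leq R\}\), for which the supremum in Eq. 20 has the closed form \(\frac{R}{m}\|\sum_{i}\epsilon_{i}x_{i}\|\).

```python
import numpy as np

def erc_linear_class(X, R, n_draws=2000, seed=0):
    """Monte-Carlo estimate of R_S(H) for H = {x -> w^T x : ||w||_2 <= R}.
    For this class, sup_w (1/m) sum_i eps_i w^T x_i = (R/m) ||sum_i eps_i x_i||."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    vals = [R / m * np.linalg.norm(rng.choice([-1.0, 1.0], size=m) @ X)
            for _ in range(n_draws)]
    return float(np.mean(vals))

rng = np.random.default_rng(1)
m, d, B = 200, 5, 1.0
X = rng.uniform(-B / np.sqrt(d), B / np.sqrt(d), size=(m, d))  # ||x_i|| <= B
print(erc_linear_class(X, R=1.0))   # Monte-Carlo estimate of the ERC
print(B / np.sqrt(m))               # the O(B/sqrt(m)) rate that appears below
```

The estimate decays at the \(O(1/\sqrt{m})\) rate that also appears in the bounds developed in the remainder of this section.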
Specifically, if the hypothesis class is overtly large, the first term on the RHS of Eq. 22 (i.e., the empirical error \(\hat{\mathbb{E}}_{S}[L(h(x),y)]=\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i})\)) is expected to be sufficiently small, since one could find a hypothesis \(h\) that minimizes the training error using an appropriate training algorithm. However, the trade-off is that the ERC \(\mathcal{R}_{S}(\mathcal{G})\) will increase with the size of the hypothesis class. On the contrary, if the hypothesis class is limited, the model complexity represented by the ERC \(\mathcal{R}_{S}(\mathcal{G})\) is reduced, while the empirical error \(\hat{\mathbb{E}}_{S}[L(h(x),y)]\) is expected to increase. Since the ERC of the hypothesis class of loss functions \(\mathcal{R}_{S}(\mathcal{G})\) appears on the RHS of the inequality above, a lower ERC leads to a tighter generalization error bound. Therefore, in this section, we demonstrate that LCNNs using SpectralDense layers have a smaller ERC \(\mathcal{R}_{S}(\mathcal{G})\) than conventional FNNs using dense layers. Since \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) is a subset of a real vector space, while the definition of ERC in Definition 11 is with respect to a real-valued class of functions, to simplify the discussion, we can first apply the following contraction inequality. _Lemma 4_ (Vector Contraction Inequality Maurer (2016)).: Let \(\mathcal{H}\) be a hypothesis class of functions from \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\). For any set of data points \(S=\{s_{1},s_{2},\cdots,s_{m}\}\subset\mathcal{X}\times\mathcal{Y}\) and any \(L_{r}\)-Lipschitz loss function \(y\to L(y,y^{\prime})\) for some \(L_{r}>0\), we have \[\mathcal{R}_{S}(\mathcal{G})=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}L(h(x_{i}),y_{i})\leq\sqrt{2}L_{r}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i}) \tag{23}\] where \(h_{k}\) is the \(k^{\text{th}}\) component function of \(h\in\mathcal{H}\) and each of the \(\epsilon_{ik}\) are i.i.d. Rademacher variables. The proof of the above inequality can be found in Maurer (2016), and is omitted here. The RHS of the inequality in Eq. 23 can be bounded by taking the sum out of the supremum: \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i})\leq\sum_{k=1}^{d_{y}}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{k}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i}) \tag{24}\] where each \(\mathcal{H}_{k}\) is the real-valued function class consisting of the \(k^{\text{th}}\) component functions of the functions \(h\in\mathcal{H}\). The RHS of Eq. 24 is the sum of the Rademacher complexities of the hypothesis classes \(\mathcal{H}_{k}\). Therefore, to obtain an upper bound for \(\mathcal{R}_{S}(\mathcal{G})\), we can first consider the case of a real-valued function hypothesis class, and then extend the results to the multidimensional case by applying Eq. 24. Specifically, we first develop an ERC bound for GroupSort Neural Networks, which refers to any FNN that utilizes the GroupSort activation function. Since LCNNs using SpectralDense layers are FNNs that also utilize the GroupSort activation function, the ERC bound for LCNNs using SpectralDense layers will follow as an immediate corollary. The following definitions are first presented to define the classes of real-valued GroupSort neural networks and of real-valued LCNNs using SpectralDense layers.
_Definition 12_.: We use \(\mathcal{H}_{d}\) to denote the hypothesis class of GroupSort neural networks with depth \(d\) that map the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathbb{R}\): \[x\to W_{d}\,\sigma(W_{d-1}\,\sigma(\cdots\sigma(W_{1}x))) \tag{25}\] where each of the weight matrices \(W_{i}\in\mathbb{R}^{m_{i}\times n_{i}}\) has a bounded Frobenius norm, that is, \(\|W_{i}\|_{F}\leq R_{i}\) for some \(R_{i}\geq 0\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d-1\) except for \(n_{1}\), and \(\sigma\) is the GroupSort function with group size 2. _Definition 13_.: We use \(\mathcal{H}_{d}^{SD}\) to denote the hypothesis class of LCNNs using SpectralDense layers with depth \(d\) that map the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathbb{R}\): \[x\to W_{d}\,\sigma(W_{d-1}\,\sigma(\cdots\sigma(W_{1}x))) \tag{26}\] where each of the weight matrices satisfies \(\|W_{i}\|_{2}=1\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d-1\) except for \(n_{1}\), and \(\sigma\) is the GroupSort function with group size 2. Note that the key difference between \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\) is that \(\mathcal{H}_{d}\) is defined for any FNN using GroupSort activation functions, while \(\mathcal{H}_{d}^{SD}\) is defined for LCNNs that use the GroupSort activation function and satisfy \(\|W_{i}\|_{2}=1\). Additionally, the Frobenius norm is used in \(\mathcal{H}_{d}\), while the spectral norm is used in \(\mathcal{H}_{d}^{SD}\). Despite the differences between \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\), it will be demonstrated from the following lemmas and theorems that the two classes \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\) are highly related. _Remark 4_.: The bias term is ignored in Eq. 25 and Eq. 26 to simplify the formulation, as in principle, we can take into account the bias term by padding the input with a vector consisting of ones and appending a corresponding additional column to each of the \(W_{i}\). Additionally, we assume that \(m_{i}\) and \(n_{i}\) are even without any loss of generality, since this does not affect the expressiveness of the class of networks using LCNNs using SpectralDense layers, as shown in the proof of Theorem 2. Next, we develop a bound on the ERC of \(\mathcal{H}_{d}\), i.e., \(\mathcal{R}_{S}(\mathcal{H}_{d})\), and then the ERC bound for \(\mathcal{H}_{d}^{SD}\) follows as an immediate corollary. The main intuition to obtain such an upper bound for \(\mathcal{R}_{S}(\mathcal{H}_{d})\) is to recursively "peel off" the weight matrices and activation functions. Such methods were used in Golowich et al. (2018); Neyshabur et al. (2015); Wu et al. (2021, 2022), where the activation function was applied element-wise. The key difficulty in our setting stems from the fact that the GroupSort activation function \(\sigma\) is not an element-wise function. To address this issue, we first represent the functions \(\max(a,b)\) and \(\min(a,b)\) as follows: \[\max(a,b)=\frac{1}{2}(a+b+|b-a|)\ \ \min(a,b)=\frac{1}{2}(a+b-|b-a|) \tag{27}\] Before we present the results for peeling off the weight matrices of FNNs, the following definition is first given and will be used in the proof of Lemma 5 that peels off one GroupSort activation function layer.
_Definition 14_.: For \(d\geq 1\), we define \(\tilde{\mathcal{H}}_{d}\) as the class of (vector-valued) functions on the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) of the form \[x\to\sigma(W_{d}\,\sigma(\cdots\sigma(W_{1}x))) \tag{28}\] where each of the weight matrices \(W_{i}\in\mathbb{R}^{m_{i}\times n_{i}}\) has a bounded Frobenius norm, that is, \(\|W_{i}\|_{F}\leq R_{i}\) for some \(R_{i}\geq 0\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d\) except \(n_{1}\), and \(\sigma\) is the GroupSort function with group size \(2\). If \(d=0\), we define \(\tilde{\mathcal{H}}_{0}\) as a hypothesis class that contains only the identity map on \(\mathcal{X}\). Note that Definition 14 is very similar to Definition 12, but the last layer has been removed, so that the resultant hypothesis class is vector-valued. Subsequently, the following lemma provides a way to "peel off" layers using the GroupSort function, which is the main tool used in the derivation of the ERC for GroupSort NNs. _Lemma 5_.: Let \(\tilde{\mathcal{H}}_{d}\) be the vector-valued hypothesis class defined in Definition 14, with \(d\geq 1\), and suppose that \(\|W_{d}\|_{F}\leq R_{d}\). For any dataset with \(m\) data points, we have the following inequality: \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq 2R_{d}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{29}\] Proof.: Letting \(w_{1}^{T},w_{2}^{T},\cdots,w_{k}^{T}\) represent the rows of \(W_{d}\), we have \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| =\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\sigma(W_{d}h(x_{i}))\right\| \tag{30a}\] \[=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\sigma\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\| \tag{30b}\] By expanding the components of \(\sigma\) using the identities for the maximum and minimum in Eq. 27, Eq. 30 can be written as \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})+|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})-|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ \vdots\\ w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})+|w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{31}\] We can bound Eq. 31 using the triangle inequality with \(A_{1}\) and \(A_{2}\) defined as follows.
\[A_{1}=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k-1}^{T}h(x_{i})\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\|+\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{2}^{T}h(x_{i})\\ w_{1}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\\ w_{k-1}^{T}h(x_{i})\end{bmatrix}\right\| \tag{32a}\] \[A_{2}=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ -|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ \vdots\\ |w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\\ -|w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{32b}\] We first bound \(A_{1}\) appropriately. By noting that the first term in Eq. 32a is equal to the second term, since we have only permuted the components, the following equation is derived for \(A_{1}\). \[A_{1}=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\|=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i})\right\| \tag{33}\] The RHS of Eq. 33 can be bounded using strategies similar to those found in Lemma 3.1 of Golowich et al. (2018). Subsequently, we rewrite the expression inside the supremum and expectation as follows. \[\frac{1}{m}\sqrt{\sum_{j=1}^{k}\|w_{j}\|^{2}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\frac{w_{j}^{T}}{\|w_{j}\|}h(x_{i})\bigg{)}^{2}} \tag{34}\] Note that Eq. 34 is maximized when one of the \(\|w_{j}\|=R_{d}\) and the rest of the \(\|w_{l}\|=0\) for \(l\neq j\) (this is simply because we maximize a positive linear function over \(\|w_{1}\|^{2},\|w_{2}\|^{2},\cdots,\|w_{k}\|^{2}\)). Therefore, it follows that \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\|=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|w\|\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}w^{T}h(x_{i})\leq\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|w\|\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\|w\|\bigg{\|}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\bigg{\|}=R_{d}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\bigg{\|}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\bigg{\|} \tag{35}\] The second term \(A_{2}\) can be rewritten in the following way. \[A_{2}=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}|w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ |w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ \vdots\\ |w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})|\\ |w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{36}\] Note that the negative signs in the even coordinates can be removed since we take the magnitude of the vector, and the differences can be replaced by sums (i.e., \(w_{2j}\) replaced by \(-w_{2j}\) in the even-indexed rows \(w_{2},w_{4},\cdots,w_{k}\)) by the symmetry of the set \(\|W_{d}\|_{F}\leq R_{d}\). Then, we rewrite the inner expression of Eq.
36 as follows. \[\frac{1}{2m}\sqrt{\sum_{j=1}^{k/2}2\|w_{2j}+w_{2j-1}\|^{2}\bigg{(}\sum_{i=1}^{ m}\epsilon_{i}\Big{|}\frac{(w_{2j}+w_{2j-1})^{T}}{\|w_{2j}+w_{2j-1}\|}h(x_{i}) \bigg{|}\bigg{)}^{2}} \tag{37}\] Using the triangle inequality property of norms, Eq. 37 can be bounded by \[\frac{1}{2m}\sqrt{\sum_{j=1}^{k/2}4\bigg{(}\|w_{2j}\|^{2}+\|w_{2j-1}\|^{2} \bigg{)}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{2j}+w_{2j-1})^{T}} {\|w_{2j}+w_{2j-1}\|}h(x_{i})\bigg{|}\bigg{)}^{2}} \tag{38}\] It is readily shown that Eq. 38 is maximized when \(\|w_{2j}\|^{2}+\|w_{2j-1}\|^{2}=R_{d}^{2}\) for some \(j\), by a similar argument to the one for \(A_{1}\). Thus, Eq. 38 can be further bounded by \[\frac{1}{2m}\sqrt{4R_{d}^{2}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{( w_{2j}+w_{2j-1})^{T}}{\|w_{2j}+w_{2j-1}\|}h(x_{i})\bigg{|}\bigg{)}^{2}}=\frac{R_{d} }{m}\bigg{|}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{1}+w_{2})^{T}}{\|w_{1 }+w_{2}\|}h(x_{i})\bigg{|}\bigg{|} \tag{39}\] Therefore, we finally derive the bound for \(A_{2}\) defined in Eq. 32b as follows. \[A_{2} =\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in \widetilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i} \begin{bmatrix}|w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ |w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ \vdots\\ |w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})|\\ \end{bmatrix}\right\|\] \[\leq\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h \in\widetilde{\mathcal{H}}_{d-1}}\frac{R_{d}}{m}\bigg{|}\sum_{i=1}^{m}\epsilon _{i}\bigg{|}\frac{(w_{1}+w_{2})^{T}}{\|w_{1}+w_{2}\|}h(x_{i})\bigg{|}\bigg{|}\] \[\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_ {d},h\in\widetilde{\mathcal{H}}_{d-1}}\frac{1}{m}\bigg{|}\sum_{i=1}^{m}\epsilon _{i}\frac{(w_{1}+w_{2})^{T}}{\|w_{1}+w_{2}\|}h(x_{i})\bigg{|}\qquad\text{by Talagrand's Contraction Lemma}\] \[\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\frac{(w_{1}+w_{2})^{T}}{\|w_{ 1}+w_{2}\|}\right\|\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\] \[\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{ H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{40}\] In the second inequality, a slightly different version of Talagrand's Contraction Lemma is used, which can be found in Mohri et al. (2018). Combining \(A_{1}\) and \(A_{2}\), we derive Eq. 29 as follows. \[\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}} \left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \leq A_{1}+A_{2}\] \[\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{ \mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i}) \right\|+R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d- 1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\] \[=2R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H} }_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{41}\] This completes the proof. **Theorem 6**.: _Assume that \(d\geq 1\) and \(\mathcal{H}_{d}\) is the hypothesis class of real-valued functions defined in Definition 12. Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\). 
For any set of \(m\) training samples \(S=\{x_{1},x_{2},\cdots,x_{m}\}\), we have_ \[\mathcal{R}_{S}(\mathcal{H}_{d})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}R_{i} \tag{42}\] _where \(\|W_{i}\|_{F}\leq R_{i}\) for each weight matrix in \(\mathcal{H}_{d}\)._ Proof.: Since the proof is very similar to that in Golowich et al. (2018), we provide only a proof sketch for clarity. Note that \[\mathcal{R}_{S}(\mathcal{H}_{d})=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}_{d}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i}) \tag{43}\] Since \(W_{d}\) has only 1 row, it follows that \(\|W_{d}\|_{2}=\|W_{d}\|_{F}\) by Lemma 7. By the Cauchy-Schwarz inequality, the RHS of Eq. 43 can be bounded by: \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i})\leq\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{2}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\|W_{d}\|_{2}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq R_{d}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{44}\] By recursively applying Lemma 5, we derive the following inequality: \[R_{d}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq(2^{d-1}\Pi_{i=1}^{d}R_{i})\operatorname*{\mathbb{E}}_{\epsilon}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\| \tag{45}\] The expected value can then be bounded using Jensen's inequality: \[\operatorname*{\mathbb{E}}_{\epsilon}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\|\leq\frac{1}{m}\sqrt{\operatorname*{\mathbb{E}}_{\epsilon}\left\|\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\|^{2}}=\frac{1}{m}\sqrt{\operatorname*{\mathbb{E}}_{\epsilon}\sum_{i=1}^{m}\sum_{j=1}^{m}\epsilon_{i}\epsilon_{j}x_{j}^{T}x_{i}}=\frac{1}{m}\sqrt{\sum_{i=1}^{m}\|x_{i}\|^{2}}\leq\frac{1}{m}\sqrt{mB^{2}}=\frac{B}{\sqrt{m}} \tag{46}\] Therefore, the following inequality is derived, and this completes the proof. \[\mathcal{R}_{S}(\mathcal{H}_{d})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}R_{i} \tag{47}\] Theorem 6 develops the upper bound for the ERC of GroupSort NNs. Based on the results derived in Theorem 6, we subsequently derive the bound for the ERC of LCNNs. We begin with a lemma that relates the Frobenius norm to the spectral norm. _Lemma 7_.: Given a weight matrix \(W\in\mathbb{R}^{m\times n}\), the following inequality holds: \[\|W\|_{2}\leq\|W\|_{F}\leq\min(m,n)\|W\|_{2} \tag{48}\] The proof can be readily obtained based on the fact that \(\|W\|_{F}\) is the norm of the vector consisting of all singular values of \(W\), and the spectral norm is simply the largest singular value. Using this lemma, we develop the following bound for \(\mathcal{R}_{S}(\mathcal{H}_{d}^{SD})\), where \(\mathcal{H}_{d}^{SD}\) is the hypothesis class of LCNNs using SpectralDense layers of depth \(d\). **Corollary 1**.: _Let \(\mathcal{H}_{d}^{SD}\) be the real-valued function hypothesis class defined in Definition 13 with \(d\geq 1\). Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\)._
For any set of \(m\) training samples \(S=\{x_{1},x_{2},\cdots,x_{m}\}\), we have the following inequality:_ \[\mathcal{R}_{S}(\mathcal{H}_{d}^{SD})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}\min(m_{i},n_{i})=\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i}) \tag{49}\] _where \(m_{i}\) and \(n_{i}\) are the number of rows and columns of the \(i^{th}\) weight matrix \(W_{i}\)._ Proof.: The proof can be readily obtained using the fact that \(\|W_{i}\|_{2}=1\) for all functions in \(\mathcal{H}_{d}^{SD}\) and Lemma 7 (i.e., \(\|W_{i}\|_{F}\leq\min(m_{i},n_{i})\|W_{i}\|_{2}=\min(m_{i},n_{i})\)). Then, by substituting this inequality into Theorem 6, we derive the inequality in Eq. 49. The last equality immediately follows since \(m_{d}=1\), and this completes the proof. _Remark 5_.: Note that the bound derived in Eq. 49 is a completely size-dependent bound for the ERC of \(\mathcal{H}_{d}^{SD}\). Therefore, once the architecture of the LCNNs using SpectralDense layers has been decided, the ERC of the set of LCNNs is bounded by a constant, which depends only on the number of neurons in each layer and, most importantly, not on the choice of weights in each weight matrix. Subsequently, using the vector contraction inequality in Lemma 4, we derive the following corollary that generalizes the results to the multi-dimensional output case. **Corollary 2**.: _Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\). Let \(\mathcal{G}\) be the hypothesis class of loss functions defined in Eq. 21, where \(\mathcal{H}\) is the hypothesis class of functions from \(\mathcal{X}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) with each component function \(h_{k}\in\mathcal{H}_{d}^{SD}\) for \(k=1,...,d_{y}\), and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is an \(L_{r}\)-Lipschitz loss function that satisfies the properties in Section 5.1. Then we have_ \[\mathcal{R}_{S}(\mathcal{G})\leq\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i}) \tag{50}\] _where \(m_{i}\) and \(n_{i}\) are the number of rows and columns in \(W_{i}\)._ Proof.: The proof of Eq. 50 can be readily obtained by applying Corollary 1 to Eq. 23 in Lemma 4 and using Eq. 24 as follows. \[\mathcal{R}_{S}(\mathcal{G})=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}L(h(x_{i}),y_{i})\leq\sqrt{2}L_{r}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i})\leq\sqrt{2}L_{r}\sum_{k=1}^{d_{y}}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{d}^{SD}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i})=\sqrt{2}L_{r}d_{y}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{d}^{SD}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i})\leq\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i}) \tag{51}\] Finally, we develop the following theorem for the generalization error bound of LCNNs using SpectralDense layers as an immediate result of Corollary 2. **Theorem 8**.: _Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\). Let \(\mathcal{G}\) be the hypothesis class of loss functions defined in Eq.
21, where \(\mathcal{H}\) is the hypothesis class of functions from \(\mathcal{X}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) with each component function \(h_{k}\in\mathcal{H}_{d}^{SD}\) for \(k=1,...,d_{y}\), and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is an \(L_{r}\)-Lipschitz loss function that satisfies the properties in Section 5.1. For any set of \(m\) i.i.d. training samples \(S=\{s_{1},s_{2},\cdots,s_{m}\}\) with \(s_{i}=(x_{i},y_{i})\), \(i=1,...,m\), drawn from a probability distribution \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\), then for any \(\delta\in(0,1)\), the following upper bound holds with probability \(1-\delta\) :_ \[\underset{(x,y)\sim\mathcal{D}}{E}[L(h(x),y)]\leq\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i})+\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d}\,\Pi_{i=1}^{d-1}\min(m_{i},n_{i})+3M\sqrt{\frac{\log\frac{1}{\delta}}{2m}} \tag{52}\] _where \(m_{i}\) and \(n_{i}\) are the number of rows and columns in \(W_{i}\)._ Proof.: The proof of Eq. 52 follows immediately from substituting Eq. 50 (i.e., the upper bound for \(\mathcal{R}_{S}(\mathcal{G})\)) from Corollary 2 into Eq. 22 in Theorem 3. ### Comparison of Empirical Rademacher Complexity Bounds The bound in Corollary 1 shows that the ERC of LCNNs using SpectralDense layers is simply bounded by a constant that depends on the size of the network (i.e., the number of neurons in each layer and the depth of the network). However, if we consider the ERC of the class of conventional FNNs with 1-Lipschitz activation functions of depth \(d\), denoted by \(\mathcal{R}_{S}(\mathcal{H}_{d}^{D})\), we have the following bound by Golowich et al. (2018) : \[\mathcal{R}_{S}(\mathcal{H}_{d}^{D})\leq\frac{B(\sqrt{2\log(2)d})}{\sqrt{m}}\,\Pi_{i=1}^{d}R_{i} \tag{53}\] A slight advantage of this bound is its \(O(\sqrt{d})\) dependence on the depth, while the bound in Corollary 1 has an exponential dependence on \(d\). However, the main advantage of the bound in Corollary 1 is that it depends only on the size of the network, i.e., the number of neurons in each layer. For conventional FNNs with no constraints on the norms of the weight matrices, even though the training error could be rendered sufficiently small, the term \(\Pi_{i=1}^{d}R_{i}\) is not controlled, which could result in an arbitrarily large ERC for the class \(\mathcal{H}_{d}^{D}\), and subsequently a large generalization error. Furthermore, given an FNN using conventional dense layers and an LCNN using SpectralDense layers with the same size parameters, we have demonstrated that the ERC of the LCNN using SpectralDense layers is bounded by a fixed constant, while there is no such constraint on the ERC bound of the FNN using dense layers. This is a significant advantage, since all the parameters in Eq. 52 in Theorem 8 are fixed except the training sample size \(m\), as long as the target function is Lipschitz continuous and the neural network architecture is fixed. This implies that there is a provably correct probabilistic guarantee that the gap between the generalization error and the empirical error is at most \(O(\frac{1}{\sqrt{m}})\). Therefore, it is demonstrated that LCNNs using SpectralDense layers can not only approximate a wide variety of nonlinear functions (i.e., the universal approximation theorem in Section 4), but also effectively mitigate over-fitting to data noise and exhibit better generalization properties, since LCNNs improve robustness against data noise compared to conventional FNNs and are developed with reduced model complexity.
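This comparison can be made quantitative with a short sketch that evaluates the two bounds. The architecture and norm values below are illustrative assumptions, not numbers taken from the paper; the point is that the LCNN bound of Corollary 1 is a fixed constant once the architecture is chosen, whereas the bound of Eq. 53 grows without control with the Frobenius norms.

```python
import numpy as np

def lcnn_erc_bound(B, m, widths):
    """Size-dependent ERC bound of Corollary 1: B/sqrt(m) * 2^(d-1) * prod of
    min(m_i, n_i) over W_1, ..., W_{d-1} (W_d has a single row, so min = 1)."""
    d = len(widths) - 1                    # number of weight matrices
    prod = 1
    for i in range(1, d):                  # W_i maps widths[i-1] -> widths[i]
        prod *= min(widths[i], widths[i - 1])
    return B / np.sqrt(m) * 2 ** (d - 1) * prod

def fnn_erc_bound(B, m, frob_norms):
    """Norm-dependent ERC bound of Eq. 53 for FNNs with 1-Lipschitz activations."""
    d = len(frob_norms)
    return B * np.sqrt(2 * np.log(2) * d) / np.sqrt(m) * np.prod(frob_norms)

# Fixed architecture => fixed LCNN bound, independent of the trained weights:
print(lcnn_erc_bound(B=1.0, m=20000, widths=[4, 40, 40, 1]))
# The FNN bound grows with the uncontrolled product of Frobenius norms:
for R in (1.0, 5.0, 25.0):
    print(fnn_erc_bound(B=1.0, m=20000, frob_norms=[R, R, R]))
```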
## 6 Application of LCNNs to Predictive Control of a Chemical Process In order to show the robustness of LCNNs, we develop LCNNs using SpectralDense layers and incorporate them into a model predictive controller (MPC). We will demonstrate that LCNNs using SpectralDense layers are able to accurately model the nonlinear dynamics of a chemical process and, furthermore, that they are able to prevent over-fitting in the presence of data noise. ### Chemical Process Description The chemical process under consideration, which was first developed by Wu et al. (2019), is a non-isothermal, well-mixed CSTR that contains an exothermic, second-order, irreversible reaction that converts molecule \(A\) to molecule \(B\). There is a feed stream of \(A\) into the CSTR, as well as a jacket that either supplies heat to or cools the CSTR. We denote by \(C_{A}\) the concentration of \(A\) and by \(T\) the temperature of the CSTR. The CSTR dynamics can be modeled with the governing equations as follows: \[\begin{split}\frac{dC_{A}}{dt}&=\frac{F}{V}(C_{A0}-C_{A})-k_{0}e^{-\frac{E}{RT}}C_{A}^{2}\\ \frac{dT}{dt}&=\frac{F}{V}(T_{0}-T)-\frac{\Delta H}{\rho_{L}C_{p}}k_{0}e^{-\frac{E}{RT}}C_{A}^{2}+\frac{Q}{\rho_{L}C_{p}V}\end{split} \tag{54}\] where \(F\) is the feed flowrate, \(\Delta H\) is the molar enthalpy of reaction, \(k_{0}\) is the rate constant, \(R\) is the ideal gas constant, \(\rho_{L}\) is the fluid density, \(E\) is the activation energy, and \(C_{p}\) is the specific heat capacity. \(C_{A0}\) is the feed flow concentration of \(A\), and \(Q\) is the rate at which heat is transferred to the CSTR. The values of the process parameters used in this work are omitted, as they are exactly the same as those in Wu et al. (2019). The CSTR of Eq. 54 has an unstable steady-state \([C_{As},T_{s},C_{A0_{s}},Q_{s}]=[1.95\text{ mol }/\text{ dm}^{3},402\text{ K},4\text{ mol }/\text{ dm}^{3},0\text{ kJ }/\text{ h}]\). We use \(x\) to denote the system states: \(x^{T}=[C_{A}-C_{As},T-T_{s}]\), and \(u\) to denote the manipulated inputs: \(u^{T}=[C_{A0}-C_{A0_{s}},Q-Q_{s}]\), so that the equilibrium point \((x_{s},u_{s})\) is located at the origin. Following the formulation presented in Section 2.2, after extensive numerical simulations, a Lyapunov function \(V(x):=x^{T}Px\) with \(P=\left[\begin{array}{cc}1060&22\\ 22&0.52\end{array}\right]\) was constructed. The region \(\Omega_{\rho}\) with \(\rho=372\) is found via open-loop simulations by sweeping through many feasible initial conditions within the domain space \(D\), such that for any initial state \(x_{0}\in\Omega_{\rho}\), the controller \(\Phi\) renders the origin of the state space exponentially stable. ### Data Generation In order to train the LCNNs, we conducted open-loop simulations of the CSTR dynamics of Eq. 54 using many different possible control actions to generate the required dataset. Specifically, we performed a sweep of all possible initial states \(x_{0}\in\Omega_{\rho}\) and control actions \(u\in U\). The forward Euler method with step size \(h_{c}=10^{-5}\) hr was used to obtain the value of \(\tilde{F}(x_{0},u)\), that is, to deduce the state after \(\Delta=10^{-3}\) hr of time has passed. The inputs in the dataset are all such possible pairs \((x_{0},u)\), and the outputs are \(\tilde{F}(x_{0},u)\). We collected 20000 input-output pairs in the dataset, and split it into training (52.5 %), validation (17.5 %), and testing (30 %) datasets.
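A sketch of this data-generation procedure is given below. The process parameter values are those of Wu et al. (2019) and are omitted in the text, so the constants below are placeholders; only the structure (deviation variables, forward Euler with step \(h_{c}=10^{-5}\) hr over one sampling period \(\Delta=10^{-3}\) hr, and a sweep of \((x_{0},u)\) pairs) follows the description above.

```python
import numpy as np

# Placeholder process parameters (the actual values are those of Wu et al. (2019)):
F_over_V, T0 = 5.0, 300.0              # F/V in 1/hr, feed temperature in K
k0, E_over_R = 8.46e6, 5.03e3          # rate constant, E/R in K
negdH_over_rhoCp = 11.85               # -(Delta H)/(rho_L * C_p) > 0 (exothermic)
rhoCpV = 1.0e3                         # rho_L * C_p * V
CAs, Ts, CA0s, Qs = 1.95, 402.0, 4.0, 0.0   # unstable steady state

def cstr_rhs(x, u):
    """Right-hand side of Eq. 54 in deviation variables x = (CA-CAs, T-Ts)."""
    CA, T = x[0] + CAs, x[1] + Ts
    CA0, Q = u[0] + CA0s, u[1] + Qs
    r = k0 * np.exp(-E_over_R / T) * CA ** 2
    return np.array([F_over_V * (CA0 - CA) - r,
                     F_over_V * (T0 - T) + negdH_over_rhoCp * r + Q / rhoCpV])

def F_tilde(x0, u, delta=1e-3, hc=1e-5):
    """State after one sampling period Delta, integrated with forward Euler."""
    x = np.asarray(x0, dtype=float)
    for _ in range(round(delta / hc)):
        x = x + hc * cstr_rhs(x, u)
    return x

# Coarse illustrative sweep of (x0, u) pairs; the actual dataset uses 20000
# pairs and restricts the initial states to the stability region Omega_rho.
samples = [((x1, x2), (u1, u2), F_tilde((x1, x2), (u1, u2)))
           for x1 in np.linspace(-1.5, 1.5, 5)
           for x2 in np.linspace(-60, 60, 5)
           for u1 in np.linspace(-3.5, 3.5, 5)
           for u2 in np.linspace(-5e5, 5e5, 4)]
```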
Before the training process, the dataset was pre-processed by standard scalers to ensure that each variable has a variance of the same order of magnitude. ### Model Training With Noise-Free Data For the prediction model \(\tilde{F}_{nn}\) that approximates the function \(\tilde{F}\), we first demonstrate that, in the absence of training data noise, the LCNNs using SpectralDense layers can capture the dynamics of \(\tilde{F}\) well in the operating region \(\Omega_{\rho}\). Specifically, we used an LCNN with two SpectralDense layers of 40 neurons each, followed by a final dense layer with linear activation functions and weights whose absolute values are bounded by 1.0. The weight bound was implemented using the _max_norm_ constraint function provided by Tensorflow. The neural network was trained using the Adam optimizer provided by Tensorflow. The loss function utilized to train the neural networks was the mean squared error (MSE), and the testing error for the LCNN reached \(2.83\times 10^{-5}\), which was considered sufficiently small given the normalized training data. ### Lyapunov-based MPC Subsequently, we incorporate the LCNN model into the design of a Lyapunov-based MPC (LMPC). The control actions in LMPC are implemented using the sample-and-hold method, where \(\Delta\) is the sampling period. Let \(N\) be a positive integer that represents the prediction horizon of the MPC. The state at time \(t=k\Delta\) is denoted by \(x_{k}\), and the LMPC using the LCNN model corresponds to the optimization problem described below, following the notation in Wu et al. (2019): \[\mathcal{J}=\min_{\tilde{u}}\sum_{i=1}^{N}L(\tilde{x}_{k+i},u_{k+i-1})\] (55a) s.t. \[\tilde{x}_{t+1}=\tilde{F}_{nn}(\tilde{x}_{t},u_{t}),\ \ \forall t\in[k,k+N) \tag{55b}\] \[u_{t}\in U,\ \ \forall t\in[k,k+N)\] (55c) \[\tilde{x}_{k}=x_{k} \tag{55d}\] \[V(\tilde{F}_{nn}(x_{k},u_{k}))\leq V(\tilde{F}_{nn}(x_{k},\Phi(x_{k}))),\quad\text{if}\;\;x_{k}\in\Omega_{\rho}\setminus\Omega_{\rho_{nn}} \tag{55e}\] \[V(\tilde{x}_{t})\leq\rho_{nn}\;\;\forall t\in[k,k+N],\;\;\text{if}\;\;x_{k}\in\Omega_{\rho_{nn}} \tag{55f}\] where \(\tilde{u}=[u_{k},u_{k+1},u_{k+2},....,u_{k+N-1}]\), and \(\Omega_{\rho_{nn}}\) is a much smaller sublevel set than \(\Omega_{\rho}\). The state predicted by the LCNN model \(\tilde{F}_{nn}(\tilde{x}_{t},u_{t})\) is represented by \(\tilde{x}\). Eq. 55c describes the input constraints imposed, where \(U\) is the set of admissible control actions. The initial condition of the prediction model is obtained from the feedback state measurement at \(t=k\Delta\), as shown in Eq. 55d. The constraints of Eqs. 55e-55f guarantee closed-loop stability, i.e., they ensure that a state that is originally in the set \(\Omega_{\rho}\) will eventually converge to the much smaller sublevel set \(\Omega_{\rho_{nn}}\), provided that a sufficiently small \(\Delta>0\) is used such that the controller \(\Phi\), when applied with the sample-and-hold method, still guarantees convergence to the origin. Note that the key difference between the LMPC of Eq. 55 in this work and the one in our previous work Wu et al. (2019) is that the LCNN model, which is a feedforward neural network, is used as the underlying prediction model in Eq. 55b to predict the state one sampling time forward, whereas in Wu et al. (2019), a recurrent neural network was developed to predict the trajectory of future states within one sampling period. Therefore, the formulated objective function shown in Eq. 55a and the constraints of Eqs. 55e-55f only account for the predicted states at the sampling instants.
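Purely as an illustration of the structure of Eq. 55, the following sketch poses one receding-horizon step. SciPy's SLSQP is used here only as a stand-in solver (the actual implementation uses IPOPT, as described next), and the names F_nn, V, and Phi are assumed to be available Python callables for the trained LCNN one-step predictor, the Lyapunov function, and the stabilizing controller, respectively; all shapes and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def lmpc_step(x_k, F_nn, V, Phi, N, Q1, Q2, u_lb, u_ub, rho_nn):
    """One step of the LMPC of Eq. 55 (illustrative stand-in for IPOPT)."""
    def rollout(u_seq):
        xs, x = [], x_k
        for u in u_seq.reshape(N, -1):          # x~_{k+1}, ..., x~_{k+N}
            x = F_nn(x, u)
            xs.append(x)
        return xs

    def cost(u_seq):                            # Eq. 55a with L = x'Q1x + u'Q2u
        us = u_seq.reshape(N, -1)
        return sum(x @ Q1 @ x + u @ Q2 @ u for x, u in zip(rollout(u_seq), us))

    cons = []
    if V(x_k) > rho_nn:                         # contraction constraint, Eq. 55e
        bound = V(F_nn(x_k, Phi(x_k)))
        cons.append({"type": "ineq",
                     "fun": lambda u: bound - V(F_nn(x_k, u.reshape(N, -1)[0]))})
    else:                                       # stay in the small level set, Eq. 55f
        cons.append({"type": "ineq",
                     "fun": lambda u: rho_nn - max(V(x) for x in rollout(u))})

    u0 = np.tile(Phi(x_k), N)                   # warm start from the controller
    res = minimize(cost, u0, method="SLSQP", constraints=cons,
                   bounds=[(l, h) for l, h in zip(u_lb, u_ub)] * N)  # Eq. 55c
    return res.x.reshape(N, -1)[0]              # apply first input, sample-and-hold
```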
The above optimization problem is solved using IPOPT, a package for solving large-scale nonlinear and non-convex optimization problems. The LMPC of Eq. 55 for the CSTR example is designed with the following parameters: \(\rho=372\), \(\rho_{nn}=2\), \(N=2\), and \(L(x,u)=x^{T}Q_{1}x+u^{T}Q_{2}u\), where \(Q_{1}=\left[\begin{array}{cc}6.25\times 10^{-4}&0\\ 0&1\end{array}\right]\) and \(Q_{2}=\left[\begin{array}{cc}0.01&0\\ 0&4.0\times 10^{-12}\end{array}\right]\). ### Closed-Loop Simulation Results The closed-loop simulation results under LMPC using LCNN models (termed LCNN-LMPC) are shown in Figure 1 and Figure 2, where the initial condition is \(x=(-1.65\text{ kmol}\;/\;\text{m}^{3}\;,\;72\text{ K})\). The closed-loop simulation under LMPC using the first-principles model of Eq. 54 is also carried out as the reference for comparison purposes. As shown in Figure 1, the state trajectory when the LCNN is used overlaps with the trajectory when the first-principles model is used in MPC, suggesting that the neural network modeling error is sufficiently small such that the state can be driven to the equilibrium point under MPC. The control actions taken under the two models are similar, as shown in Fig. 2, where slight deviations and oscillations occur under the MPC using the LCNN model. The oscillations in the manipulated input profile could be due to the following factors: 1) slight discrepancies between the LCNN model and the first-principles model, and 2) the IPOPT software being trapped within local minima of the objective function, or a combination of the two factors (in fact, the first factor might inadvertently cause the second because of irregularities in the LMPC loss function \(L\) when the LCNN is used). Nevertheless, the LCNN is able to drive the state to the origin effectively, which shows that the LCNN can effectively model the nonlinear dynamics of the CSTR. _Remark 6_.: The weight bound of 1.0 in the last layer was chosen to ensure that the class of functions that can be represented using this neural network architecture is sufficiently large to contain the target function. If a smaller weight bound is chosen, the target function might not be approximated well, since the Lipschitz constant of the network may fall below that of the target function. ### Robustness against Data Noise As discussed in Section 5, one of the most crucial advantages of LCNNs using SpectralDense layers is their robustness against data noise and potential over-fitting during the training process. For example, when the number of neurons per hidden layer is exceptionally large, the neural network tends to overestimate the complexity of the problem and ultimately learns the data noise (see Ke and Liu (2008) and Sheela et al. (2013) for more details on this phenomenon). Therefore, in this subsection, we demonstrate that when Gaussian data noise is introduced into the training datasets, the LCNNs outperform the FNNs using conventional dense layers (termed "Dense FNNs"). We followed the data generation process in Section 6.2, and added Gaussian noise with a standard deviation of 0.1 or 0.2 to the training dataset. To compare the robustness of SpectralDense LCNNs and conventional Dense FNNs against over-fitting, we trained both types of networks with the same set of hidden-layer architectures.
Specifically, we used two hidden layers in each type of neural network, with 640 or 1280 neurons each. The LCNNs were developed with SpectralDense hidden layers, while the conventional Dense FNNs were developed using the dense layers from Tensorflow with ReLU activation functions. Throughout the training process for all the networks, the same training hyperparameters were used, such as the number of epochs, the early stopping callback, and the batch size used in the Adam optimizer. Table 1 shows the testing errors for the various neural networks trained. As seen in Table 1, the testing errors of the conventional Dense FNNs are on the order of \(10^{-3}\) to \(10^{-2}\), which is significantly larger than those of the SpectralDense LCNNs (i.e., the testing errors of the SpectralDense LCNNs are on the order of \(10^{-5}\) to \(10^{-4}\)). The increase in testing error for the Dense FNNs is due to over-fitting of the noise, since the testing error has the same order of magnitude as the variance of the Gaussian noise. Additionally, we integrated into MPC the LCNN and the Dense FNN with 640 neurons per layer trained with Gaussian noise of standard deviation 0.1, similar to the process described in Section 6.5. The results when both the LCNN and the Dense FNN are integrated into MPC are shown in Figures 3 and 4. From the plot of the Lyapunov function value \(V(x)\) in Figure 3, it is demonstrated that the Dense FNN is unable to effectively drive the state to the origin compared to the LCNN. This is readily observed because not only is the Lyapunov function value for the Dense FNN much higher, but there are also considerable oscillations in the function value, especially in the time frame between 0.15 hr and 0.3 hr. In addition, in Figure 4, it is observed that, while the predicted control actions of the LCNN are very similar to those of the first-principles model, the predicted control actions under the Dense FNN show large oscillations and differ significantly from those of the first-principles model. This large disparity between the Dense FNN and the first-principles model shows that the Dense FNN is incapable of accurately modeling the process dynamics when embedded into MPC. ### Comparison of Lipschitz Constants between LCNNs and Dense FNNs Additionally, we compare the Lipschitz constants of the LCNNs and the Dense FNNs developed for the CSTR of Eq. 54 and demonstrate that the conventional Dense FNNs have a much larger Lipschitz constant than the SpectralDense LCNNs as a result of noise over-fitting and the lack of constraints on the weight matrices. For the Dense FNNs, we used the LipBaB algorithm to obtain the Lipschitz constant of the FNNs using dense layers, and for the SpectralDense LCNNs, since every SpectralDense layer is 1-Lipschitz, we took the SVD of the last weight matrix and used its spectral norm as an upper bound on the Lipschitz constant. The results are shown in Table 2. The Dense FNNs have Lipschitz constants on the order of \(10^{2}\), compared to the SpectralDense LCNNs with Lipschitz constants on the order of \(10^{0}\). The comparison of Lipschitz constants demonstrates that the LCNNs are provably less sensitive to input perturbations than the Dense FNNs, since their Lipschitz constants are several orders of magnitude lower.
Furthermore, the calculation of Lipschitz constants demonstrates that LCNNs are able to prevent over-fitting to data noise by maintaining a small Lipschitz constant, while conventional Dense FNNs with a large number of neurons are prone to it. ## 7 Conclusions In this work, we developed LCNNs for the general class of nonlinear systems, and discussed how LCNNs using SpectralDense layers can mitigate sensitivity issues and prevent over-fitting to noisy data from the perspectives of Lipschitz constants and generalization error. Specifically, we first proved the universal approximation theorem for LCNNs using SpectralDense layers to demonstrate that LCNNs are capable of retaining expressive power for Lipschitz target functions despite having a small hypothesis class. Then, we derived the generalization error bound for SpectralDense LCNNs using the Rademacher complexity method. The above results provided the theoretical foundations to demonstrate that LCNNs can reduce sensitivity to input perturbations due to their constrained Lipschitz constant and generalize better to prevent over-fitting. Finally, LCNNs using SpectralDense layers were integrated into MPC and applied to a chemical reactor example. The simulations show that the LCNNs effectively captured the process dynamics and outperformed conventional Dense FNNs in terms of smaller testing errors, higher prediction accuracy in MPC, and smaller Lipschitz constants in the presence of noisy training data. ## 8 Acknowledgments Financial support from the NUS Start-up grant R-279-000-656-731 is gratefully acknowledged.
2302.02139
Structural Explanations for Graph Neural Networks using HSIC
Graph neural networks (GNNs) are a type of neural model that tackle graphical tasks in an end-to-end manner. Recently, GNNs have been receiving increased attention in machine learning and data mining communities because of the higher performance they achieve in various tasks, including graph classification, link prediction, and recommendation. However, the complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute more strongly to the predictions. To handle the interpretability issues, recently, various GNN explanation methods have been proposed. In this study, a flexible model agnostic explanation method is proposed to detect significant structures in graphs using the Hilbert-Schmidt independence criterion (HSIC), which captures the nonlinear dependency between two variables through kernels. More specifically, we extend the GraphLIME method for node explanation with a group lasso and a fused lasso-based node explanation method. The group and fused regularization with GraphLIME enables the interpretation of GNNs in substructure units. Then, we show that the proposed approach can be used for the explanation of sequential graph classification tasks. Through experiments, it is demonstrated that our method can identify crucial structures in a target graph in various settings.
Ayato Toyokuni, Makoto Yamada
2023-02-04T09:46:47Z
http://arxiv.org/abs/2302.02139v1
# Structural Explanations for Graph Neural Networks using HSIC ###### Abstract Graph neural networks (GNNs) are a type of neural model that tackle graphical tasks in an end-to-end manner. Recently, GNNs have been receiving increased attention in machine learning and data mining communities because of the higher performance they achieve in various tasks, including graph classification, link prediction, and recommendation. However, the complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute more strongly to the predictions. To handle the interpretability issues, recently, various GNN explanation methods have been proposed. In this study, a flexible model agnostic explanation method is proposed to detect significant structures in graphs using the Hilbert-Schmidt independence criterion (HSIC), which captures the nonlinear dependency between two variables through kernels. More specifically, we extend the GraphLIME method for node explanation with a group lasso and a fused lasso-based node explanation method. The group and fused regularization with GraphLIME enables the interpretation of GNNs in substructure units. Then, we show that the proposed approach can be used for the explanation of sequential graph classification tasks. Through experiments, it is demonstrated that our method can identify crucial structures in a target graph in various settings. ## 1 Introduction Graph neural networks (GNNs), which are neural networks applied to graph tasks, have shown excellent performance in node classification, link prediction, and graph classification [1]. However, the complexity of GNNs makes it difficult to clarify which structures of a given graph are crucial in GNN prediction. Thus, many methods have been proposed to interpret a trained GNN [2]. The most widely used GNN explanation method is GNNExplainer [3], which optimizes an edge mask such that the mutual information between the masked graph and the prediction is maximized, subject to a constraint on the size of the edge mask. PGExplainer [4] learns a neural network that outputs an edge mask under the condition that each edge parameter independently follows a Bernoulli distribution. SubgraphX [5] efficiently explores subgraphs using a Monte Carlo tree search to identify structures that play an important role in the prediction. These approaches can explain important features, important nodes, or both. However, because these methods are based on non-convex optimization, a global optimal solution may not be obtained. GraphLIME [6] is an extension of the LIME method [7] to GNNs: it extracts the important "dimensions of the node feature of a target node" in a locally interpretable manner using HSIC Lasso [8, 9], a supervised feature selection method that measures the nonlinear dependency between features and model outputs. Because the HSIC lasso method is a convex method, it can obtain a global optimal solution. However, GraphLIME can only select important features and cannot interpret important structures. In GNNs, node interpretation is generally more important than explaining important features. This is a major limitation of GraphLIME. In this study, we propose **St**ructural**GraphLIME** (StGraphLIME), which extends GraphLIME to handle graph-level tasks and select significant subgraphs. More specifically, we employ HSIC Lasso to solve the node selection problem using perturbations of graphs. To extract subgraphs, we use group and fused regularization.
Finally, thanks to the HSIC Lasso formulation, the proposed method can also explain the graph series classification model. The key advantage of the proposed method is that it inherits the merits of GraphLIME and can be used for node and subgraph explanation tasks. Experiments demonstrate that the method can determine the significant structure of a given graph / graph series for a trained GNN.

**Contribution**:

* The proposed StGraphLIME extends GraphLIME to select important nodes and subgraphs.
* A method for graph series classification is proposed as a novel task.
* Through experiments, it is demonstrated that StGraphLIME can successfully explain nodes and subgraphs for various synthetic and real-world tasks.

## 2 Preliminary

In this section, first the GNN explanation problem is formulated and then the GraphLIME method is introduced.

### Problem formulation

For a graph \(G=(V,E,\mathbf{X})\), \(V\) denotes the set of nodes in \(G\); \(E\) denotes the set of edges in \(G\); and \(\mathbf{X}\) denotes the node feature matrix of \(G\). The goal of this study is to propose explainable machine-learning models for the following GNN tasks.

**GNN explanation for graph classification.** A GNN \(\Phi\) in graph classification learns the parameters using a training dataset \(\{(G_{i},y_{i})\}_{i=1}^{N}\), where \(G_{i}=(V_{i},E_{i},\mathbf{X}_{i})\) represents an instance graph, and \(y_{i}\) represents the label of \(G_{i}\), which are then used to estimate the label of an unseen graph. Given a trained GNN \(\Phi\) and target graph \(G=(V,E,\mathbf{X})\) as an instance, the goal is to obtain a significant node set.

**GNN explanation for graph series classification.** A GNN \(\Phi\) in graph series classification learns the parameters using a training dataset \(\{(\{G_{i}^{t}\}_{t=1}^{T},y_{i})\}_{i=1}^{N}\), where \(\{G_{i}^{t}\}_{t=1}^{T}\) represents an instance graph series of length \(T\), and \(y_{i}\) represents the label of \(\{G_{i}^{t}\}_{t=1}^{T}\), which are then used to estimate the label of an unseen graph series. Given a trained GNN \(\Phi\) and target graph series \(\{G^{t}\}_{t=1}^{T}=\{(V^{t},E^{t},\mathbf{X}^{t})\}_{t=1}^{T}\) as an instance, the goal is to obtain a significant pair of node sets and time points. Note that finding significant node sets is a standard GNN explanation task, whereas graph series classification is a new GNN explanation task.

### GraphLIME

GraphLIME is one of the methods used to interpret GNN prediction for node classification. Let us denote \(\mathbf{X}^{(v)}\in\mathbb{R}^{d\times n}\) as the feature vectors sampled to explain node \(v\), and let \(\mathbf{y}^{(v)}\in\mathbb{R}^{n}\) be the output of the GNN model \(\Phi\). The sampled dataset can also be written as \(\mathcal{D}_{v}=\{(\mathbf{x}_{i}^{(v)},y_{i}^{(v)})\}_{i=1}^{n}\), where \(\mathbf{X}^{(v)}=[\mathbf{x}_{1}^{(v)},\ldots,\mathbf{x}_{n}^{(v)}]\), and \(\mathbf{y}^{(v)}=(y_{1}^{(v)},y_{2}^{(v)},\ldots,y_{n}^{(v)})^{\top}\). In the original paper, the \(N\)-hop network neighbors were considered to sample the neighboring nodes of a given node \(v\). GraphLIME then selects important node features from \(\mathcal{D}_{v}\). More specifically, GraphLIME employs the HSIC lasso [8] to select important features [6], where the HSIC lasso was originally proposed to select important features of input \(\mathbf{x}\) using a supervised dataset [8].
The optimization problem of HSIC lasso is given as follows: \[\min_{\boldsymbol{\alpha}\in\mathbb{R}^{d}} \bigg{\{}\frac{1}{2}\|\overline{\mathbf{L}}-\sum_{s=1}^{d}\alpha _{s}\overline{\mathbf{K}}_{s}\|_{\mathrm{F}}^{2}+\lambda\sum_{s=1}^{d}|\alpha_{s}| \bigg{\}},\] s.t. \[\alpha_{s}\geq 0\ (\forall s),\] where \(\lambda\) denotes a regularization parameter and \(\overline{\mathbf{L}}=\mathbf{HLH}/\|\mathbf{HLH}\|_{\mathrm{F}}\) is the normalized Gram matrix of \(\mathbf{y}\), \([\mathbf{L}]_{i,j}=\ell(y_{i}^{(v)},y_{j}^{(v)})\) is a kernel for \(\mathbf{y}^{(v)}\), \(\overline{\mathbf{K}}_{s}=\mathbf{HK}_{s}\mathbf{H}/\|\mathbf{HK}_{s}\mathbf{ H}\|_{\mathrm{F}}\) is the normalized Gram matrix of the \(s\)-th feature, \([\mathbf{K}_{s}]_{i,j}=k(\mathbf{x}_{i}^{(s)},\mathbf{x}_{j}^{(s)})\) is a kernel for the \(s\)-th feature vector \(\mathbf{x}_{i}^{(s)}\), and \(\mathbf{H}=\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top}\) is the centering matrix. In GraphLIME, a Gaussian kernel is employed for both the input and output kernels. \[k(\mathbf{x}_{i}^{(s)},\mathbf{x}_{j}^{(s)}) =\exp\left(-\frac{(\mathbf{x}_{i}^{(s)}-\mathbf{x}_{j}^{(s)})^{2} }{2\sigma_{x}^{2}}\right),\] \[\ell(y_{i},y_{j}) =\exp\left(-\frac{\|y_{i}-y_{j}\|_{2}^{2}}{2\sigma_{y}^{2}}\right),\] where \(\sigma_{x}\) and \(\sigma_{y}\) are Gaussian kernel widths. Because HSIC lasso is a convex method, a global optimal solution can be obtained. Thus, HSIC lasso can obtain stable results compared to non-convex and discrete optimization methods. Through experiments, it has been reported that GraphLIME outperforms GNNExplainer in feature-explanation tasks [6]. However, GraphLIME can only explain features and cannot be used to explain nodes. In this study, GraphLIME is extended to explain nodes.

## 3 Proposed method

In this section, we propose StGraphLIME, which is an extension of GraphLIME, to provide structural explanations for GNNs.

### L1-regularized node explanation

In this study, we propose a method to select graph features (e.g., nodes and edges) on which the prediction of \(\Phi\) heavily depends, using perturbation. Specifically, we first generate an auxiliary dataset \(\{(\hat{G}_{i},\Phi(\hat{G}_{i}))\}_{i=1}^{M}\) by adding perturbations to a target graph \(G\), where \(M\) is the number of perturbations. For perturbations, several nodes and edges can be removed from \(G\) or noise can be added to node features. Figure 1 shows an example of perturbation in graph \(G\). In this example, three perturbed graphs (\(M=3\)) are generated. Then HSIC lasso is used for the dataset to select nodes or edges on which \(\Phi(G)\) significantly depends. Now, the detection of important nodes from \(G\) in graph classification is considered. Let \(\mathbf{L}\) and \(\mathbf{K}_{v}\) be Gram matrices for the prediction of \(\Phi\) and node \(v\in V\), respectively. Then, HSIC lasso for the explanations is formulated as follows: \[\min_{\boldsymbol{\alpha}\in\mathbb{R}^{|V|}} \bigg{\{}\frac{1}{2}\|\overline{\mathbf{L}}-\sum_{v\in V}\alpha _{v}\overline{\mathbf{K}}_{v}\|_{\mathrm{F}}^{2}+\lambda\sum_{v\in V}|\alpha _{v}|\bigg{\}}\] s.t. \[\alpha_{v}\geq 0\ (\forall v\in V),\] where \(\lambda\) denotes a regularization parameter and \(\overline{\mathbf{L}}=\mathbf{HLH}/\|\mathbf{HLH}\|_{\mathrm{F}}\in\mathbb{R} ^{M\times M}\), \(\overline{\mathbf{K}}_{v}=\mathbf{HK}_{v}\mathbf{H}/\|\mathbf{HK}_{v}\mathbf{ H}\|_{\mathrm{F}}\in\mathbb{R}^{M\times M}\) are the normalized Gram matrices of perturbed samples.
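For concreteness, this non-negative HSIC lasso can be prototyped by flattening the normalized Gram matrices into the design matrix of an ordinary positive lasso. The following is a minimal sketch, not from the paper, using scikit-learn (one of the solvers listed in Sec. 5); all function names are ours:

```python
import numpy as np
from sklearn.linear_model import Lasso

def normalized_gram(K):
    """Center a Gram matrix with H = I - (1/n) 11^T and scale to unit Frobenius norm."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    HKH = H @ K @ H
    return HKH / np.linalg.norm(HKH, "fro")

def hsic_lasso(L, Ks, lam):
    """Minimize 0.5 * ||L_bar - sum_s a_s * K_bar_s||_F^2 + lam * ||a||_1 with a >= 0.

    L:  (n, n) Gram matrix of the outputs.
    Ks: list of (n, n) Gram matrices, one per feature (or per node in Sec. 3.1).
    """
    target = normalized_gram(L).ravel()
    design = np.column_stack([normalized_gram(K).ravel() for K in Ks])
    # scikit-learn's Lasso minimizes (1/(2*n_samples)) * ||y - Xw||^2 + alpha * ||w||_1,
    # so alpha = lam / n_samples recovers the objective above.
    solver = Lasso(alpha=lam / design.shape[0], positive=True,
                   fit_intercept=False, max_iter=10000)
    solver.fit(design, target)
    return solver.coef_
```

The same routine covers both the feature-level problem above and the node-level problem of Sec. 3.1; only the Gram matrices passed in change.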
Figure 1 shows an example of building a Gram matrix for target node \(v\). Focusing on the significance of the existence of nodes for \(\Phi\), \(\mathbf{K}_{v}\) is calculated with the delta kernel from binary features that indicate the existing nodes in \(\hat{G}_{i}\). Then, focusing on the significance of the continuous node features for \(\Phi\), \(\mathbf{K}_{v}\) is calculated using a Gaussian kernel for the node features in \(\hat{G}_{i}\). Table 1 lists the variations in perturbation schemes and kernel functions; \(\mathbf{L}\) is calculated using predictive probability vectors (i.e., the output of \(\Phi\)) and the Gaussian kernel. Note that we can also detect important edges in \(G\) by replacing \(\mathbf{K}_{v}\) with a Gram matrix \(\mathbf{K}_{e}\) associated with an edge \(e\in E\). Thus, the proposed method is versatile in terms of explaining various GNN tasks.

**How to prepare an auxiliary dataset.** Two strategies of adding perturbations to the original graph are proposed: (1) **Random perturbation** and (2) **Walk-based perturbation**. Random perturbations remove nodes or edges **randomly** from the original graph or **randomly** add noise to the node features of the selected nodes. Note that if nodes are removed from a graph, the edges connected to those nodes are also deleted. On the other hand, walk-based perturbations add these perturbations to nodes or edges **only on a random walk**. This means that walk-based perturbations make it possible to identify the functionality of the components in the original graphs.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Variations of perturbation schemes and kernel functions} \\ \hline Target & Perturbation & Features & Kernel \\ \hline The existence of certain nodes is important & Remove nodes & Binary features indicating whether a node exists & Delta kernel \\ The existence of certain edges is important & Remove edges & Binary features indicating whether the edge exists & Delta kernel \\ The features of certain nodes are important & Add noise & Continuous node features & Gaussian kernel \\ \hline \end{tabular} \end{table} Table 1: Examples of combinations of perturbation schemes and kernel functions. Various situations can be handled through appropriate selections.

Figure 1: Calculating a Gram matrix for a node and a prediction. The orange and blue nodes represent the perturbed node features.
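To make the perturbation schemes of Table 1 concrete, the following is a minimal sketch of random node removal and the resulting delta-kernel Gram matrix, assuming networkx graphs for simplicity (the experiments in Sec. 5 use PyG); all names are ours:

```python
import numpy as np
import networkx as nx

def random_node_perturbations(G, num_samples, num_remove, rng):
    """Perturbed copies of G obtained by deleting `num_remove` random nodes, plus a
    binary matrix Z (num_samples x |V|) with Z[i, v] = 1 iff node v survives in sample i."""
    nodes = list(G.nodes())
    graphs = []
    Z = np.ones((num_samples, len(nodes)), dtype=int)
    for i in range(num_samples):
        removed = rng.choice(len(nodes), size=num_remove, replace=False)
        H = G.copy()
        H.remove_nodes_from([nodes[r] for r in removed])  # incident edges go too
        graphs.append(H)
        Z[i, removed] = 0
    return graphs, Z

def delta_gram(z):
    """Delta kernel on one binary feature: K[i, j] = 1 iff samples i and j agree."""
    return (z[:, None] == z[None, :]).astype(float)

# Example: the Gram matrix K_v for node v across the M perturbed graphs.
# K_v = delta_gram(Z[:, v])
```

Passing the resulting per-node Gram matrices, together with the Gram matrix of the model outputs, to an HSIC lasso solver such as the sketch above yields the node scores of Sec. 3.1.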
### Group-regularized node explanation

The L1 regularization method selects important node features independently. In this section, we show how to incorporate structural information into GraphLIME. Group regularizations in lasso-like formulations enable the selection of input features in group units [10, 11]. In this study, group regularizations in HSIC lasso are used to identify crucial substructures of a graph for GNN prediction. HSIC lasso with group regularization for GNN explanations is formulated as follows: \[\min_{\boldsymbol{\alpha}\in\mathbb{R}^{|V|}} \bigg{\{}\frac{1}{2}\|\overline{\mathbf{L}}-\sum_{v\in V}\alpha _{v}\overline{\mathbf{K}}_{v}\|_{\mathrm{F}}^{2}+\lambda\sum_{\pi\in\Pi}\|\gamma _{\pi}\|_{2}\bigg{\}},\] s.t. \[\alpha_{v}\geq 0\ (\forall v\in V),\] where \(\pi\subseteq V\) denotes an overlapping group composed of nodes and edges; \(\gamma_{\pi}\) denotes a vector composed of the \(\alpha_{v}\) corresponding to the nodes \(v\in V\) contained in a group \(\pi\); and \(\lambda\geq 0\) denotes a regularization parameter. The edge version is obtained by replacing node \(v\) with edge \(e\). This objective can be solved in the same manner as the latent group lasso [11]. The effects of group regularizations in the latent group lasso are discussed below.

**The role of latent group regularization.** One of the crucial benefits of HSIC lasso in the context of feature selection is that interdependent features, for which the value of the HSIC is high, are less likely to be included in the selected features. In the context of GNN explanation, this effect of latent group regularization prevents two overlapping groups from both being selected as an explanation.

**How to construct groups.** A walk-based strategy is also proposed to obtain groups for regularization. In walk-based perturbations, nodes or edges on a random walk are used as a group. In addition, purposeful grouping can be designed manually using prior knowledge of the target graph. For example, if the target graph is a molecular graph, it is possible to break it down into components based on their chemical functionalities.

### Fused-regularized node explanation

A large target graph makes it difficult and costly to construct meaningful groups, both with simple random walks and with prior knowledge. To mitigate this problem, another type of structural regularization, the generalized fused lasso [12], can be used as follows: \[\min_{\boldsymbol{\alpha}\in\mathbb{R}^{|V|}} \bigg{\{}\frac{1}{2}\|\overline{\mathbf{L}}-\sum_{v\in V}\alpha _{v}\overline{\mathbf{K}}_{v}\|_{\mathrm{F}}^{2}+\lambda\sum_{v\in V}|\alpha _{v}|+\mu\sum_{(u,u^{\prime})\in E}|\alpha_{u}-\alpha_{u^{\prime}}|\bigg{\}},\] s.t. \[\alpha_{v}\geq 0\ (\forall v\in V),\] where \(\mu\geq 0\) denotes a regularization parameter for the fusion term. The fusion penalty encourages adjacent nodes to take similar importance scores, so that connected regions of the graph tend to be selected together without constructing groups in advance.
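Because the experiments (Sec. 5) list CVXPY among the HSIC lasso solvers, this fused variant is straightforward to prototype as a convex program. A minimal sketch of the objective above (all names ours):

```python
import cvxpy as cp

def hsic_fused_lasso(L_bar, K_bars, edges, lam, mu):
    """L_bar: (M, M) normalized Gram matrix of the predictions;
    K_bars: list of (M, M) normalized Gram matrices, one per node;
    edges: list of (u, v) node-index pairs of the target graph."""
    n = len(K_bars)
    alpha = cp.Variable(n, nonneg=True)  # non-negative importance scores
    residual = L_bar - sum(alpha[v] * K_bars[v] for v in range(n))
    fusion = sum(cp.abs(alpha[u] - alpha[v]) for u, v in edges)
    objective = 0.5 * cp.sum_squares(residual) + lam * cp.norm1(alpha) + mu * fusion
    cp.Problem(cp.Minimize(objective)).solve()
    return alpha.value
```

Setting \(\mu=0\) recovers the L1-regularized node explanation of Sec. 3.1, and replacing the fusion term with group norms gives the group-regularized variant.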
## 4 Related work

GNNExplainer [3] explains a GNN prediction by optimizing edge masks so that there is mutual information between the predictive distribution and the masked graph. PGExplainer [4] also obtains an edge-based explanation for a target graph by optimizing a neural network that estimates the edge mask. GNNExplainer and PGExplainer do not guarantee connectivity of the explanation, which might lead to non-understandable results. Furthermore, optimizations in edge masks work well when connections of nodes are truly important in GNN prediction, whereas the results tend to be incomprehensible when nodes, rather than edges, are the important units. PGMExplainer [14] generates a probabilistic graphical model for GNN explanation by considering influences on the prediction when node features are perturbed. SubgraphX [5] finds significant subgraphs for GNNs by using the Monte Carlo tree search algorithm and the Shapley values from game theory. Although SubgraphX always generates a connected graph as a human-friendly explanation, it cannot simultaneously detect separate important nodes. GraphSVX [15] computes fair contributions of nodes and their features with Shapley values and mask perturbations.

XGNN [16] is different from other methods in that XGNN provides "model-level" explanations for GNNs. Model-level explanation means returning a general explanation for the behavior of the trained GNN that does not depend on each instance, while StGraphLIME and the methods presented above make "instance-level" explanations that aim to interpret the prediction of the trained GNN for a single instance. GraphLIME [6] is the most similar to our method. GraphLIME selects significant "dimensions of the node feature" using HSIC lasso [8, 9] for a local interpretation, while the proposed method StGraphLIME is capable of selecting "sub-structures of a graph", making the method an extension of GraphLIME. There are few works that explain GNNs designed especially for dynamic graphs. DGExplainer [17] calculates the contributions of subgraphs based on the LRP algorithm. Wenchong et al. proposed to extend PGMExplainer to dynamic graphs by applying it to each snapshot and finding the dominant Bayesian networks [18]. These works need access to the hidden states of a trained model, while our proposed StGraphLIME is a fully black-box explanation method that can be applied to dynamic graphs.

## 5 Experiment

In this section, the power of the proposed method is evaluated using five types of experiments. These experiments include cases where nodes / edges / node features are important for regular, chemical graph, and graph series classifications.

### Setup

**Baselines.** A random explainer (which places a random score on each node), GNNExplainer [3], PGExplainer [4], and SubgraphX [5] were adopted as the baselines for graph classification. In the graph series classification setting, we used the random explainer and an occlusion explainer, where each node score is the change in the predictive score when the feature on the node is set to 0. For a fair comparison, the edge score was converted to a node score, where the maximum value of the edge importance adjacent to a node is the node importance. The learning rate of GNNExplainer and PGExplainer was set to 0.005, and the number of training iterations was set to 1000.

**Implementations.** In the experiments, the GNN models and GNNExplainer were implemented with PyG [19], and PyTorch Geometric Temporal [20] was used for temporal GNN models. As HSIC lasso solvers, scikit-learn [21], SPAMS [22, 23], and CVXPY [24, 25] were used, and the DIG library [26] was adapted for use in PGExplainer and SubgraphX. The implementation of HSIC lasso was based on the GraphLIME implementation1. The regularization parameters \(\lambda,\mu\) of the proposed method were determined using a grid search on the validation data. When multiple parameters had the same validation score, the parameter that returned the smallest graph was selected because smaller outputs are easier for humans to understand.

Footnote 1: [https://github.com/WilliamCCHuang/GraphLIME](https://github.com/WilliamCCHuang/GraphLIME)

### Case study

**Case \(1.1\): One node is target.** In Case 1.1, to confirm whether the proposed method can identify significant nodes, a trained GNN is used to classify the wheel and cycle graphs. Figure 2 (a) shows an example of a cycle graph and a wheel graph. The key difference between these two types of graphs is the presence of a "hub" node. Explanation methods are expected to result in higher hub node scores than those of other nodes.
Random perturbation was adopted, removing \(2\) nodes, with the delta kernel and binary features whose \(i\)-th dimension indicates whether the \(i\)-th node is in \(\hat{G}_{i}\), to compute \(\mathbf{K}_{v}\) for the HSIC lasso. The size of the auxiliary dataset was \(201\), and the original graph and predictive score were included. A GIN [27] with three layers and max/mean/add global pooling was used as the model to be explained. The entire dataset consisted of cycle graphs with \(5\sim 20\) nodes and wheel graphs with \(6\sim 21\) nodes, and was split into training and testing data at a ratio of \(9:1\). The trained GNNs classified the testing data with \(100\%\) accuracy. The accuracy of detecting the true nodes was evaluated using Top-\(K\) Acc., which indicates whether the ground-truth node is among the top \(K\) highest-scoring nodes. The wheel graphs with \(6,7,8\) nodes were used as validation data, and the score was calculated from the rest. The grid search resulted in \(\lambda=10^{-6}\) in StGraphLIME (L1), \(\lambda=10^{-2}\) in StGraphLIME (Group), and \(\lambda=1.0,\mu=1.0\) in StGraphLIME (Fused). The means and standard deviations of the results were recorded for the three types of pooling layers. Table 2 demonstrates that our method and SubgraphX can detect the hub nodes more accurately than the other methods.
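All cases score explanations with variants of this Top-\(K\) accuracy. A minimal sketch of the metric, including the AAcc. / OAcc. variants used in Case 1.2 below (function names ours):

```python
import numpy as np

def top_k_acc(node_scores, true_node, k):
    """1.0 iff the ground-truth node is among the k highest-scoring nodes."""
    return float(true_node in np.argsort(node_scores)[::-1][:k])

def a_acc(node_scores, true_nodes, k):
    """1.0 iff ALL ground-truth nodes are in the top k (AAcc.)."""
    return float(set(true_nodes) <= set(np.argsort(node_scores)[::-1][:k]))

def o_acc(node_scores, true_nodes, k):
    """1.0 iff at least ONE ground-truth node is in the top k (OAcc.)."""
    return float(bool(set(true_nodes) & set(np.argsort(node_scores)[::-1][:k])))
```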
**Case \(1.2\): Two nodes are targets.** In this case, the explanation methods aim to detect the two important disconnected nodes. The synthetic dataset is composed of three types of graphs with binary labels. Figure 2(b)-(d) shows these \(3\) patterns: two-connected-wheel graphs, two-connected-cycle graphs, and mixed graphs. The key difference between them is the presence of hub nodes in the cycle graphs on both sides. These graphs are labeled according to whether there are hub nodes in each of them, which allows the GNN to classify based on the existence of two hub nodes. Specifically, two-connected-wheel graphs and mixed graphs have positive labels, and two-connected-cycle graphs have negative labels. The explanation methods were evaluated using two metrics (AAcc. and OAcc.) for the two-connected-wheel graphs; AAcc. indicates that the explanation method can identify both hub nodes, and OAcc. indicates that an explanation method can detect at least one of the two hub nodes. Random perturbation was adopted, removing \(4\) nodes, with the delta kernel and binary features whose \(i\)-th dimension indicates whether the \(i\)-th node is in \(\hat{G}_{i}\), to compute \(\mathbf{K}_{v}\) for the HSIC lasso. The size of the auxiliary dataset was \(201\), and the original graph and predictive score were included. GIN [27] with three layers and max/mean global pooling was used as the model to be explained. The reason for not using add pooling was that the GIN with add global pooling performed poorly. The number of nodes in a cycle graph in a two-connected-cycle graph and a mixed graph varied from 5 to 15, and the number of nodes in a wheel graph in a two-connected-wheel graph and a mixed graph varied from 6 to 16. The entire dataset was split into training and testing data at a ratio of \(9:1\) and the trained GNNs classified the testing data with \(100\%\) accuracy. To evaluate the accuracy of detecting the true nodes, Top-\(K\) Acc. was used. The two-connected-wheel graphs with \(7,8\) nodes were used as validation data, and the score was calculated from the rest.

The grid search resulted in \(\lambda=10^{-8}\) in StGraphLIME (L1), \(\lambda=10^{-7}\) in StGraphLIME (Group), and \(\lambda=1.0,\mu=10^{-7}\) in StGraphLIME (Fused). The mean and standard deviation of the results were recorded for two runs each for the two types of pooling layers. Table 2 demonstrates that our methods could detect the two important hub nodes significantly better than the baselines. Although SubgraphX displayed a higher ability in identifying either hub node than other methods, its ability to identify both was inferior to our method and equivalent to GNNExplainer and PGExplainer. This is because SubgraphX always outputs a connected graph, which makes isolated critical nodes difficult to detect with a small number of candidates.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Case 1.1} & \multicolumn{4}{c|}{Case 1.2} \\ \hline Methods & Top 1 Acc. & Top 3 Acc. & Top 2 AAcc. & Top 6 AAcc. & Top 2 OAcc. & Top 6 OAcc. \\ \hline StGraphLIME (L1) & 0.98(\(\pm\)0.03) & 0.98(\(\pm\)0.03) & 0.81(\(\pm\)0.19) & 0.97(\(\pm\)0.05) & 0.92(\(\pm\)0.05) & 1.00(\(\pm\)0.00) \\ \hline StGraphLIME (Group) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 0.83(\(\pm\)0.05) & 0.88(\(\pm\)0.00) & 0.97(\(\pm\)0.05) & 1.00(\(\pm\)0.00) \\ \hline StGraphLIME (Fused) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 0.75(\(\pm\)0.09) & 0.94(\(\pm\)0.05) & 0.92(\(\pm\)0.06) & 0.97(\(\pm\)0.00) \\ \hline SubgraphX & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 0.00(\(\pm\)0.00) & 0.56(\(\pm\)0.23) & 0.94(\(\pm\)0.03) & 1.00(\(\pm\)0.00) \\ \hline GNNExplainer & 0.02(\(\pm\)0.06) & 0.45(\(\pm\)0.13) & 0.00(\(\pm\)0.00) & 0.11(\(\pm\)0.10) & 0.50(\(\pm\)0.29) & 0.94(\(\pm\)0.05) \\ \hline PGExplainer & 0.00(\(\pm\)0.00) & 0.00(\(\pm\)0.00) & 0.00(\(\pm\)0.00) & 0.53(\(\pm\)0.16) & 0.56(\(\pm\)0.24) & 0.58(\(\pm\)0.19) \\ \hline Random & 0.02(\(\pm\)0.03) & 0.24(\(\pm\)0.03) & 0.00(\(\pm\)0.00) & 0.06(\(\pm\)0.05) & 0.11(\(\pm\)0.09) & 0.42(\(\pm\)0.10) \\ \hline \end{tabular} \end{table} Table 2: Results of the experiment for Case 1.1 and 1.2.

Figure 2: An illustration of graphs.

**Case \(2\): One edge is target.** For Case 2, to confirm whether explanation methods can identify significant edges for prediction, a trained GNN was used to classify two-disconnected-cycle graphs and glass-like graphs. Figures 2(e) and (f) show an example of a two-disconnected-cycle graph and a glass-like graph. The key difference between these graphs is the presence of a "bridge" edge. Explanation methods are expected to result in higher scores on the bridge edge than elsewhere. Random perturbation was adopted, removing 3 edges, with the delta kernel and binary features whose \(i\)-th dimension indicates whether the \(i\)-th edge is in \(\hat{G}_{i}\), to compute \(\mathbf{K}_{e}\) for the HSIC lasso. The size of the auxiliary dataset was 201, and the original graph and predictive score were included. A three-layer GIN [27] with max global pooling was used as the model to be explained. The reason for using only max pooling was that the GIN with add/mean global pooling did not show sufficient performance. The number of nodes in a cycle graph in a two-disconnected-cycle graph and a glass-like graph varied from 3 to 10. The entire dataset was split into training and testing data at a ratio of \(9:1\) and the trained GNNs classified the testing data with \(100\%\) accuracy. To evaluate the accuracy of detecting the true edge, Top-\(K\) Acc. was used.
Glass-like graphs with \(3,4\) nodes were used as validation data, and the score was calculated from the rest. The grid search resulted in \(\lambda=10^{-7}\) in StGraphLIME (L1), \(\lambda=10^{-2}\) in StGraphLIME (Group), and \(\lambda=1.0,\mu=1.0\) in StGraphLIME (Fused). The mean and standard deviation of the results of the three runs were recorded. Table 3 demonstrates that the proposed methods could detect the bridge better than baselines other than SubgraphX in Top-5 Acc.

**Case \(3\): Node features are target.** For Case 3, the proposed methods were evaluated in a pattern recognition task on \(4\times 4\) grid graphs. The four types of node feature patterns prepared were: none, rectangle, line, and rectangle-line, which is a combination of rectangles and lines. The four types of grid graphs are displayed in Figure 3. It is assumed that the important nodes for prediction are those whose features are 1. Rectangles and lines are targets for GNN explanation, and none and rectangle-line are used only during training, leading the GNN to rely on nodes with feature 1 for prediction. Random perturbation was adopted in the proposed methods, adding noise, with a Gaussian kernel on the perturbed node features. The size of the auxiliary dataset was 201, and the original graph and predictive score were included. GIN [27] with three layers and max/mean/add global pooling was used as the model to be explained. The entire dataset was split into training and testing data at a ratio of \(9:1\) and the trained GNNs classified the testing data with \(100\%\) accuracy. Precision@4 was used as the evaluation metric. Two rectangles and lines were used as validation data and the score was calculated from the rest. The grid search resulted in \(\lambda=10^{-8}\) in StGraphLIME (L1), \(\lambda=10^{-2}\) in StGraphLIME (Group), and \(\lambda=1.0,\mu=1.0\) in StGraphLIME (Fused). The mean and standard deviation of the results were recorded for the three types of pooling layers. Table 3 demonstrates that the proposed methods can detect the important node features more accurately than the other methods.

**Case \(4\): Chemical graph classification.** The MUTAG dataset [28] is a chemical graph dataset for graph classification and is often used as a benchmark for GNN explanation tasks. The quantitative evaluation of GNN explanation methods is difficult for real-world graphs because the ground-truth explanations cannot be determined. Following [5], the sparsity and fidelity scores were adopted to make quantitative evaluations.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{2}{c|}{Case 2} & \multicolumn{1}{c|}{Case 3} \\ \hline Methods & Top 2 Acc. & Top 5 Acc. & Precision @4 \\ \hline StGraphLIME (L1) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) \\ \hline StGraphLIME (Group) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 0.98(\(\pm\)0.02) \\ \hline StGraphLIME (Fused) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) & 1.00(\(\pm\)0.00) \\ \hline SubgraphX & 0.39(\(\pm\)0.16) & 1.00(\(\pm\)0.00) & 0.96(\(\pm\)0.03) \\ \hline GNNExplainer & 0.00(\(\pm\)0.00) & 0.11(\(\pm\)0.08) & 0.59(\(\pm\)0.18) \\ \hline PGExplainer & 0.44(\(\pm\)0.21) & 0.44(\(\pm\)0.21) & 0.31(\(\pm\)0.09) \\ \hline Random & 0.00(\(\pm\)0.00) & 0.11(\(\pm\)0.16) & 0.24(\(\pm\)0.06) \\ \hline \end{tabular} \end{table} Table 3: Results of the experiment for Case 2 and 3.

Figure 3: An illustration of none (left), rectangle (center left), line (center right), rectangle-line (right). Yellow nodes are nodes with feature 1 and purple nodes indicate nodes with feature 0.
The fidelity for a set of pairs of graphs and an important node set \(\{(G^{i},M^{i})\}_{i=1}^{N}\) is defined as follows: \[\text{Fidelity}=\frac{1}{N}\sum_{i=1}^{N}\left(\Phi(G^{i})^{*}-\Phi(\hat{G}^{i}_{M^{i}})^{*}\right),\] where \(\Phi(G^{i})^{*}\) denotes the prediction score of \(\Phi\) for the \(i\)-th graph \(G^{i}\), and \(\hat{G}^{i}_{M^{i}}\) denotes a graph originating from \(G^{i}\) with the features of the nodes in \(M^{i}\) set to \(0\). Fidelity is an indicator of the extent to which the prediction worsens when the node features are changed. The sparsity, for a set of pairs of graphs and an important node set \(\{(G^{i},M^{i})\}_{i=1}^{N}\), is defined as follows: \[\text{Sparsity}=\frac{1}{N}\sum_{i=1}^{N}\left(1-\frac{|M^{i}|}{|G^{i}|}\right),\] where \(|G^{i}|\) denotes the number of nodes in \(G^{i}\). Sparsity is an indicator of the ratio of the size of the selected important node set \(M^{i}\) to that of \(G^{i}\). In general, fidelity and sparsity are trade-offs. Lower sparsity tends to increase the fidelity because more noise is added to the original graph and the prediction tends to worsen. This means that a comparison of the fidelity under the same sparsity values is necessary for a fair evaluation. On the other hand, because it is difficult to completely control sparsity, the sparsity-fidelity plot was evaluated according to Yuan et al. [5]. The average fidelity and sparsity were calculated in detail with the maximum number of important nodes to select set to \(10\%,20\%,30\%,40\%\), thereby plotting them as a sparsity-fidelity graph. Our methods, GNNExplainer, and PGExplainer selected up to the maximum number of specified nodes with positive importance scores in descending order of importance score. For SubgraphX, the maximum number of nodes in the subgraph was set according to the aforementioned values. Random-walk perturbation was adopted, adding noise, with the delta kernel and categorical features whose \(i\)-th dimension indicates the \(i\)-th node feature, to compute \(\mathbf{K}_{v}\) for HSIC lasso. The size of the auxiliary dataset was 201, and the original graph and predictive scores were included. GIN [27] with three layers and max/mean global pooling was used as the model to be explained. Add pooling was not used because the GIN with add global pooling performed poorly. The entire dataset was split into training and testing data at a ratio of \(9:1\), and the trained GNNs classified the testing data with an accuracy of \(89\%\). A total of 10 samples were used as validation data, and the score was calculated from the rest. The grid search resulted in \(\lambda=10^{-9}\) in StGraphLIME (L1), \(\lambda=10^{-5}\) in StGraphLIME (Group), and \(\lambda=10^{-9},\mu=10^{-2}\) in StGraphLIME (Fused). Figure 4 shows the sparsity-fidelity plots for trained GNNs with max/mean pooling. The proposed method outperformed GNNExplainer and PGExplainer, whereas SubgraphX displayed the best performance.
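A minimal sketch of these two metrics, assuming PyG-style graph objects with a node-feature matrix `x`, a `num_nodes` attribute, and a model that returns per-class scores (interface and names are our assumptions, not the paper's):

```python
import numpy as np

def fidelity(model, graphs, node_sets, target_class):
    """Average drop in the target-class score after zeroing the features
    of the explained nodes (higher = more faithful explanation)."""
    drops = []
    for G, M in zip(graphs, node_sets):
        masked = G.clone()
        masked.x[list(M)] = 0.0  # zero out the features of the selected nodes
        drops.append(float(model(G)[target_class] - model(masked)[target_class]))
    return float(np.mean(drops))

def sparsity(graphs, node_sets):
    """Average fraction of nodes NOT selected by the explanation."""
    return float(np.mean([1.0 - len(M) / G.num_nodes
                          for G, M in zip(graphs, node_sets)]))
```

Sweeping the maximum explanation size (10%-40% of the nodes, as above) and plotting the two quantities against each other reproduces the sparsity-fidelity curves of Figure 4.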
**Case \(5\): Graph series classification.** Case 5 was designed to confirm that the proposed method could identify nodes having significant features in sequential graph classification. These data comprise graph series of length \(3\), where the structure of the graphs is fixed at all times for all series. The structure is determined using the Barabasi-Albert model [29] with 20 nodes. The number of series was 200, and each node had a scalar feature of 0 or 1, which is treated as a categorical feature. If the features of a chunk of nodes at a certain time are 1, the sample has a positive label. Otherwise, the label is negative. In other words, the key difference between positive- and negative-label graphs is whether there are node features on the chunk at a certain time in the graph series. Figure 5 shows an example of a graph series and its labels. Our method is expected to result in higher chunk scores than those of other nodes. The delta kernels were used and the node features were treated as labels to compute \(\mathbf{K}_{v,t}\). Random-walk perturbation was adopted in the proposed method, adding noise, with a Gaussian kernel on the perturbed node features. The size of the auxiliary dataset was 251, and the original graph and predictive scores were included. A 1-layer T-GCN [30] with max/mean/add global pooling was used as the model to be explained. The entire dataset was split into training and testing data at a ratio of \(9:1\) and the trained GNNs classified the testing data with 100% accuracy. Another dataset was generated as validation data, and the score was calculated from the entire original dataset. The grid search resulted in \(\lambda=10^{-6}\) in HSIC lasso, \(\lambda=10^{-5}\) in HSIC group lasso, and \(\lambda=1.0,\mu=10^{-1}\) in HSIC fused lasso. The means and standard deviations of the results were recorded for the three types of pooling layers. Table 4 demonstrates that our method can detect anomaly node groups more accurately than the other methods.

Figure 4: Quantitative results of the experiment for Case 4.

Figure 5: An illustration of graph series classification. The yellow nodes surrounded by the red line indicate that the graph at the bottom has a positive label.

## 6 Discussion

**Strengths and weaknesses.** The proposed methods work well in finding partial structural and feature differences compared with existing methods. This is because the proposed methods measure the changes when perturbations are added to the original graph. On the other hand, the proposed methods work poorly for graphs without any "motif" that forms the basis for GNN prediction. This issue is not only related to the proposed methods; many other methods have the same problem.

**Computational costs.** The space complexity of the HSIC lasso is \(O(n^{2}P)\), where \(n\) denotes the number of nodes or edges in the target graph, and \(P\) denotes the size of the auxiliary dataset. This cost can be reduced to \(O(nBP)\) using block HSIC lasso [9], where \(B\ll n\) denotes the block size. The power of detecting significant structures in the prediction strongly depends on \(P\). A larger \(n\) and a smaller \(P\) cause false-positive results because of selection bias, and decrease the power of detecting significant structures owing to the lack of sufficient perturbation. For example, if a GNN classifies graphs based on a certain node set, to completely remove selection bias and maximize detection power, \(P=O(2^{n})\). If the size of the target node set is at most \(K\), then \(P={}_{n}C_{K}2^{K}=O(n^{K}2^{K})\). If the significant structure is a subgraph with \(K\) nodes, then \(P=(\#\text{subgraphs with }K\text{ nodes})\). In the simplest case, if each node is perturbed at least once, then \(P=O(n)\), and the total space complexity is cubic in this case.
**Out-of-distribution problem.** Structural perturbations of a graph drawn from the training data can generate out-of-distribution samples [31], causing misleading explanations. One way to mitigate this issue is to use feature perturbations instead of structural perturbations. In the chemical graph classification experiment, feature perturbation was adopted for this reason. Another possible approach is to calibrate the distribution used in HSIC lasso. Because our methods distributionally quantify the dependency between the graph structures and the prediction, correcting the distributions is a promising option.

## 7 Conclusion

A flexible method for the structure-based explanation of GNNs was proposed based on HSIC lasso. Group regularizations allow feature selection in substructure units. The proposed methods perform better in cases where the key structural difference is evident, whereas a larger graph size makes it difficult for the proposed methods to detect the key differences.
2307.06581
Deep Neural Networks for Semiparametric Frailty Models via H-likelihood
For prediction of clustered time-to-event data, we propose a new deep neural network based gamma frailty model (DNN-FM). An advantage of the proposed model is that the joint maximization of the new h-likelihood provides maximum likelihood estimators for fixed parameters and best unbiased predictors for random frailties. Thus, the proposed DNN-FM is trained by using a negative profiled h-likelihood as a loss function, constructed by profiling out the non-parametric baseline hazard. Experimental studies show that the proposed method enhances the prediction performance of the existing methods. A real data analysis shows that the inclusion of subject-specific frailties helps to improve prediction of the DNN based Cox model (DNN-Cox).
Hangbin Lee, IL DO HA, Youngjo Lee
2023-07-13T06:46:51Z
http://arxiv.org/abs/2307.06581v1
# Deep Neural Networks for Semiparametric Frailty Models via H-likelihood

###### Abstract

For prediction of clustered time-to-event data, we propose a new deep neural network based gamma frailty model (DNN-FM). An advantage of the proposed model is that the joint maximization of the new h-likelihood provides maximum likelihood estimators for fixed parameters and best unbiased predictors for random frailties. Thus, the proposed DNN-FM is trained by using a negative profiled h-likelihood as a loss function, constructed by profiling out the non-parametric baseline hazard. Experimental studies show that the proposed method enhances the prediction performance of the existing methods. A real data analysis shows that the inclusion of subject-specific frailties helps to improve prediction of the DNN based Cox model (DNN-Cox).

**Keywords:** Deep neural network, Frailty model, H-likelihood, Prediction, Random effect.

## 1 Introduction

Recently, deep neural networks (DNNs) have provided a major breakthrough to enhance prediction in various areas (LeCun et al., 2015; Goodfellow, 2016). DNN models allow extensions of Cox proportional hazards (PH) models (Kvamme et al., 2019; Sun et al., 2020). Recently, subject-specific prediction with DNN models has been studied by including random effects in the neural network (NN) predictor (Tran et al., 2020; Mandel et al., 2022). However, these DNN random-effect models have been studied only for complete data. In this paper we propose a new DNN-FM. To the best of our knowledge, there is no literature on the DNN-FM for censored survival data. Lee and Nelder (1996) introduced the h-likelihood for the inference of general models with random effects, and Ha, Lee and Song (2001) extended it to the semi-parametric frailty models. We reformulate the h-likelihood to obtain maximum likelihood estimators (MLEs) for fixed unknown parameters and best unbiased predictors (BUPs; Searle et al., 1992; Lee et al., 2017) for random frailties by a simple joint maximization of the profiled h-likelihood, which is constructed by profiling out
2310.13225
Scalable Neural Network Kernels
We introduce the concept of scalable neural network kernels (SNNKs), the replacements of regular feedforward layers (FFLs), capable of approximating the latter, but with favorable computational properties. SNNKs effectively disentangle the inputs from the parameters of the neural network in the FFL, only to connect them in the final computation via the dot-product kernel. They are also strictly more expressive, as allowing to model complicated relationships beyond the functions of the dot-products of parameter-input vectors. We also introduce the neural network bundling process that applies SNNKs to compactify deep neural network architectures, resulting in additional compression gains. In its extreme version, it leads to the fully bundled network whose optimal parameters can be expressed via explicit formulae for several loss functions (e.g. mean squared error), opening a possibility to bypass backpropagation. As a by-product of our analysis, we introduce the mechanism of the universal random features (or URFs), applied to instantiate several SNNK variants, and interesting on its own in the context of scalable kernel methods. We provide rigorous theoretical analysis of all these concepts as well as an extensive empirical evaluation, ranging from point-wise kernel estimation to Transformers' fine-tuning with novel adapter layers inspired by SNNKs. Our mechanism provides up to 5x reduction in the number of trainable parameters, while maintaining competitive accuracy.
Arijit Sehanobish, Krzysztof Choromanski, Yunfan Zhao, Avinava Dubey, Valerii Likhosherstov
2023-10-20T02:12:56Z
http://arxiv.org/abs/2310.13225v2
# Scalable Neural Network Kernels

###### Abstract

We introduce the concept of _scalable neural network kernels_ (SNNKs), the replacements of regular _feedforward layers_ (FFLs), capable of approximating the latter, but with favorable computational properties. SNNKs effectively disentangle the inputs from the parameters of the neural network in the FFL, only to connect them in the final computation via the dot-product kernel. They are also strictly more expressive, as allowing to model complicated relationships beyond the functions of the dot-products of parameter-input vectors. We also introduce the _neural network bundling process_ that applies SNNKs to compactify deep neural network architectures, resulting in additional compression gains. In its extreme version, it leads to the fully bundled network whose optimal parameters can be expressed via explicit formulae for several loss functions (e.g. mean squared error), opening a possibility to bypass backpropagation. As a by-product of our analysis, we introduce the mechanism of the _universal random features_ (or URFs), applied to instantiate several SNNK variants, and interesting on its own in the context of scalable kernel methods. We provide rigorous theoretical analysis of all these concepts as well as an extensive empirical evaluation, ranging from point-wise kernel estimation to Transformers' fine-tuning with novel adapter layers inspired by SNNKs. Our mechanism provides up to 5x reduction in the number of trainable parameters, while maintaining competitive accuracy.

## 1 Introduction

Consider a kernel function: \(\mathrm{K}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), taking as input two feature vectors encoding latent embeddings of their corresponding objects and returning their similarity. Kernel methods are among the most theoretically principled approaches to statistical machine learning (ML) and have proven effective in numerous real-world problems (Scholkopf & Smola, 2002; Kontorovich et al., 2008). Despite their theoretical guarantees and applicability in a rich spectrum of ML settings, the main drawback of these techniques is a high computational complexity, at least quadratic in the size \(N\) of the training dataset. For example, kernel regression has time complexity \(\mathcal{O}(N^{3})\). To address this issue, Rahimi & Recht (2007) proposed to construct a random feature (RF) map \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{m}\) that transforms an input point \(\mathbf{z}\) to a finite-dimensional feature vector \(\Phi(\mathbf{z})\in\mathbb{R}^{m}\) such that: \(\mathrm{K}(\mathbf{x},\mathbf{y})=\mathbb{E}[\Phi(\mathbf{x})^{\top}\Phi( \mathbf{y})]\) (effectively, approximately linearizing the kernel function). Approximating general kernels \(\mathrm{K}(\mathbf{x},\mathbf{y})\) via linear (dot-product) kernels \(\mathrm{K}(\mathbf{x},\mathbf{y})\approx\widehat{\mathbf{x}}^{\top}\widehat{ \mathbf{y}}\) for \(\widehat{\mathbf{z}}=\Phi(\mathbf{z})\) drastically changes the computational complexity landscape, which is now dominated by the number \(m\) of random features, thus providing computational gains if \(m\ll N\). Since their seminal work, there have been a variety of works proposing random features for a broad range of kernels like Gaussian, Matérn (Choromanski et al., 2018) and polynomial (Kar & Karnick, 2012; Wacker et al., 2022).
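As a concrete instance of such an RF map, the classical random Fourier features of Rahimi & Recht (2007) linearize the Gaussian kernel; a minimal sketch (ours, not the URF mechanism introduced later):

```python
import numpy as np

def rff_gaussian(X, m, sigma=1.0, seed=0):
    """Random Fourier features Phi with Phi(x) @ Phi(y) ~ exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, m))  # frequencies from the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)       # random phases
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)     # (num_points, m)
```

The \(N\times N\) kernel matrix is then replaced by an \(N\times m\) feature matrix, which is the source of the computational gains when \(m\ll N\).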
In the meantime, with the advances in the optimization algorithms for deep ML architectures and the accelerators' hardware, neural network (NN) models (Goodfellow et al., 2016; Schmidhuber, 2014; LeCun et al., 2015) have become predominant in machine learning. The _feedforward layer_ (FFL) is the core computational module of NNs and is of the following form: \[\mathbf{x}\to f(\mathbf{W}\mathbf{x}+\mathbf{b}) \tag{1}\] for \(\mathbf{x}\in\mathbb{R}^{d},\mathbf{W}\in\mathbb{R}^{l\times d},\mathbf{b}\in \mathbb{R}^{l}\) (_bias_) and an _activation function_\(f:\mathbb{R}\to\mathbb{R}\) (applied element-wise). The expressiveness of deep NNs, far surpassing standard kernel methods, comes from stacking together several FFLs, each encoding a **non-linear** mapping with **learnable \(\mathbf{W},\mathbf{b}\)**. In this work, we draw a deep connection between scalable kernel methods and neural networks. We reinterpret the FFL as outputting the expected vector of dot-products of: **(1)** the latent embeddings of the input \(\mathbf{x}\) and **(2)** the parameters: \(\mathbf{W}\), \(\mathbf{b}\) of the FFL, effectively disentangling the input from the model's parameters in the computational graph, only to connect them in the final computation via the dot-product kernel. To be more specific, we think about the FFL as the following transformation: \[\begin{cases}\overline{\mathrm{K}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b})) \stackrel{{\mathrm{def}}}{{=}}\left(\mathrm{K}_{f}(\mathbf{x},( \mathbf{w}^{0},b_{0})),...,\mathrm{K}_{f}(\mathbf{x},(\mathbf{w}^{l-1},b_{l- 1}))\right)^{\top},\\ \mathrm{K}_{f}(\mathbf{x},(\mathbf{w},b))\stackrel{{\mathrm{def} }}{{=}}\mathbb{E}[\Phi_{f}(\mathbf{x})^{\top}\Psi_{f}(\mathbf{w},b)],\end{cases} \tag{2}\] where mappings: \(\Phi_{f}:\mathbb{R}^{d}\to\mathbb{R}^{m},\Psi_{f}:\mathbb{R}^{d}\times\mathbb{ R}\to\mathbb{R}^{m}\) satisfy: \(f(\mathbf{w}^{\top}\mathbf{x}+b)=\mathbb{E}[\Phi_{f}(\mathbf{x})^{\top}\Psi_{f }(\mathbf{w},b)]\) and \(\mathbf{w}^{0},...\mathbf{w}^{l-1}\) are the transposed rows of \(\mathbf{W}\). Then, in the instantiation of the layer, the expectations are dropped out. Rewriting an FFL in terms of two towers, one corresponding to the input and one to its learnable parameters, has several advantages:

1. **network compression:** in the above formulation, instead of transforming layer parameters with \(\Psi_{f}\), one can directly learn vectors \(\Psi_{f}(\mathbf{w}^{i},b_{i})\) for \(i=0,...,l-1\). Then the number of trainable parameters becomes \(O(lm)\) rather than \(O(ld)\) and for \(m\ll d\) the layer effectively has a reduced number of parameters (see the sketch after this list).
2. **computational savings:** if RFs can be constructed in time \(o(dl)\) per point and \(m\ll d\), the overall time complexity \(o(dl)\) of the FFL (given pre-computed embeddings \(\Psi_{f}(\mathbf{w}^{i},b_{i})\)) is **sub-quadratic** in the layers' dimensionalities,
3. **deep NN bundling process:** a two-tower representation can be used iteratively to compactify multiple FFLs of NNs, the process we refer to as _neural network bundling_ (Sec. 3.3); this also leads to computational gains.
4. **deep NNs as scalable kernels:** the extreme version of the bundling procedure, involving all the layers, provides a two-tower factorization of the entire deep NN with several potential practical and theoretical implications (Sec. 3.3). In particular, it leads to an explicit formula for the optimal parameters of the fully-bundled network under several loss objectives (e.g. mean squared loss), opening a possibility to bypass backpropagation.
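A minimal PyTorch sketch of this idea, assuming some input tower `phi` is given (for instance, random-cosine features as above); the class and parameter names are ours, not the paper's:

```python
import torch
import torch.nn as nn

class SNNKLayer(nn.Module):
    """Two-tower replacement of an FFL: y = Phi_f(x) @ Omega^T, where the rows
    of Omega stand in for the (directly learned) embeddings Psi_f(w^i, b_i)."""

    def __init__(self, output_dim, num_features, phi):
        super().__init__()
        self.phi = phi  # input tower: maps (batch, d) -> (batch, m)
        # O(l * m) trainable parameters instead of O(l * d) for the regular FFL.
        self.omega = nn.Parameter(torch.randn(output_dim, num_features) / num_features ** 0.5)

    def forward(self, x):
        return self.phi(x) @ self.omega.t()  # (batch, l)
```

For \(m\ll d\), the \(l\times d\) weight matrix is never materialized; only its \(m\)-dimensional latent images are trained, which is the compression argument of point 1.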
Figure 1: Pictorial representation of different NN layers discussed in the paper. Pink arrays represent NN weight matrices and grey ones, Gaussian projection matrices applied in SNNKs. Nonlinear transformations applied in mappings \(\Phi\) and \(\Psi\) are symbolically represented as functions \(g\) and \(h\) respectively. **Upper left:** Regular FFL with activation \(f\). **Upper right:** SNNK applied to a single FFL. **Bottom:** Bundling process using SNNKs, applied to a deep neural network module.

In order to find mappings: \(\Phi_{f}\), \(\Psi_{f}\) from Eq. 2, we develop a new bounded random feature map mechanism, called _universal random features_ (or URFs), that leads to the unbiased estimation of \(f(\mathbf{w}^{\top}\mathbf{x}+b)\) as long as \(f\) has a well-defined Fourier Transform (FT), either in the classical Riemannian or distributional sense. To derive URFs, we combine Fourier analysis techniques with recent methods for softmax-kernel estimation from Likhosherstov et al. (2022). **Note:** We do not impose any additional assumptions on \(f\); in particular, \(f\) is not required to be differentiable. Furthermore, the function \(\mathrm{K}_{f}\) **does not need** to be positive semi-definite. This is critical for applications in neural networks, where the activation function \(f\) usually does not correspond to a positive semi-definite kernel. To summarize, our main contributions in this paper are as follows:

* We introduce the _scalable neural network kernel_ module (SNNK) as a replacement of a traditional FFL (Sec. 3), providing the disentanglement of the network's input and its parameter-set before the final dot-product computation, as given in Eq. 2 (see also: Fig. 1).
* We accompany SNNKs with our universal random features mechanism (URFs) to efficiently: (1) construct mappings \(\Phi_{f}\) and \(\Psi_{f}\) from Eq. 2 and consequently: (2) implement SNNKs (Sec. 3.1). We provide explicit formulae for URFs for trigonometric maps. Those produce SNNK-based replacements of the SIREN networks from Sitzmann et al. (2020).
* We propose new NN-layers corresponding to the specific SNNK instantiation, called \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) (Sec. 3.2), that we found particularly effective in downstream applications (see: Sec. 4.3.2). We show that they are related to the class of the _arc-cosine kernels_ of Cho & Saul (2011). We also demonstrate using them that SNNKs are **strictly more expressive** than regular FFLs, as allowing to compute the functions of the inputs and parameters that cannot be defined as point-wise transformed vectors of their dot-products.
* We introduce the neural network compactification process that we refer to as _neural network bundling_, leveraging SNNKs (see: Sec. 3.3 and Fig. 1).
* We provide an exhaustive empirical evaluation of SNNKs, from point-wise kernel estimation to the adapter-based Transformers' fine-tuning, providing about 5x reduction of the number of trainable parameters (Sec. 4).

## 2 Related Work

The literature on random features is vast, yet most of the works focus on approximating positive definite kernels. The results on dimensionality reduction and the so-called _Johnson-Lindenstrauss Transform_ (or JLT) (Dasgupta & Gupta, 2003; Dasgupta et al., 2010; Ailon & Liberty, 2013) for the dot-product kernel marked the birth of the subject, as an archetype mechanism that Rahimi & Recht (2007) extended from linear to non-linear shift-invariant kernels.
A substantial effort was made to further improve the accuracy of RF-methods by entangling projections used to construct RFs (Choromanski et al., 2017; Yu et al., 2016; Choromanski et al., 2018; Rowland et al., 2018). For certain classes of functions \(f\), RF-mechanisms leading to the linearization of \(\mathrm{K}_{f}\) have already been developed. In addition to the rich recent literature on the approximation techniques for the softmax-kernel \(\mathrm{K}_{\mathrm{exp}}(\mathbf{x},\mathbf{y})=\exp(\mathbf{x}^{\top} \mathbf{y})\) (Likhosherstov et al., 2022; 2023; Choromanski et al., 2021), algorithms for analytic \(f\) with positive coefficients of their Taylor series expansion were given (Kar & Karnick, 2012). Other RF-methods assume that kernel inputs are taken from the unit-sphere (Scetbon & Harchaoui, 2021; Han et al., 2022). Both assumptions are unrealistic for the neural network applications as far as inputs \(\mathbf{x}\) are concerned (interestingly, the latter one would however be more justifiable for the parameter-tower as long as bounded-norm weight matrices are considered, e.g. _orthogonal neural networks_ (Helfrich et al., 2018)). We would like to emphasize that our two-tower mechanism, effectively leading to the linearization of the FFLs from Eq. 2, can in principle work with various RF-algorithms, and not only our proposed URFs.

The kernels applied in connection to neural networks have been widely studied (Bengio & Lecun, 2007). Such kernels are generally constructed using dot-products of outputs of shallow neural networks with various non-linearities like ReLU (Cho & Saul, 2009; Bresler & Nagaraj, 2020) and tanh (Williams, 1996), or the gradients of the network like the NTK kernel (Jacot et al., 2020). Most of the work on linearizing NNs via kernels has been done in the case of a \(2\)-layer network where \[J(\mathbf{x};\mathbf{\theta})=\sum_{i=1}^{N}a_{i}f(\mathbf{x}^{\top}\mathbf{w}^{i}),\ \mathbf{\theta}=(a_{1},...,a_{N};\mathbf{w}^{1},...,\mathbf{w}^{N})\in\mathbb{R}^{N(d+1)} \tag{3}\] It is assumed that \(\mathbf{w}^{i}\) and \(f\) (the non-linear activation) are fixed, and the scalars \(a_{i}\) are trainable. Under various assumptions, one can write a compact linearized form of this neural network (Cho & Saul, 2009; 2011; Ghorbani et al., 2020). Moreover, in the above setting, \(J(\mathbf{x};\mathbf{\theta})\) corresponds to the first-order Taylor expansion of \(J\) with respect to the top-layer weights \(a_{i}\), which was first explored by Neal (1996). Even though our setting is fundamentally different, as our goal is to linearize single layers to disentangle the weights and the inputs, we build on the above intuition to create our SNNK-layers (see also: discussion in Appendix A). Note that NTK-based analysis, as leveraging Taylor-based linearization of the NN, is valid only for the mature stage of training/finetuning when weights do not change much and thus such a linearization is accurate (Malladi et al., 2022). SNNKs do not rely on this assumption. Furthermore, SNNKs can be used also in the context of non-positive definite (Ong et al., 2004) and asymmetric (He et al., 2023) kernels since mappings \(\Phi\) and \(\Psi\) in principle are different (on expectation they can produce both symmetric and asymmetric functions). Arc-cosine kernels were studied in the context of deep NNs before (Cho & Saul, 2009).
However, in (Cho & Saul, 2009), the weights are still entangled with the FFL-input, as the initial latent representations of the inputs (for random parameters) are interpreted as RFs for the arc-cosine kernel.

## 3 Scalable Neural Network Kernels (SNNKs)

The scalable neural network kernel (SNNK) computational module is defined as follows: \[\begin{cases}\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}) )\stackrel{{\mathrm{def}}}{{=}}\left(\mathrm{SNNK}_{f}(\mathbf{x },(\mathbf{w}^{0},b_{0})),...,\mathrm{SNNK}_{f}(\mathbf{x},(\mathbf{w}^{l-1}, b_{l-1}))\right)^{\top},\\ \mathrm{SNNK}_{f}(\mathbf{x},(\mathbf{w},b))\stackrel{{\mathrm{ def}}}{{=}}\Phi_{f}(\mathbf{x})^{\top}\Psi_{f}(\mathbf{w},b),\end{cases} \tag{4}\] for \(\mathbf{x}\in\mathbb{R}^{d}\), some mappings: \(\Phi_{f}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), \(\Psi_{f}:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}^{m}\) and transposed rows of \(\mathbf{W}\in\mathbb{R}^{l\times d}\): \(\mathbf{w}^{0},...\mathbf{w}^{l-1}\). As we show in Sec. 3.1, functions \(\Phi_{f},\Psi_{f}\) can be constructed in such a way that the SNNK module approximates a particular FFL, i.e.: \(\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}))\approx f( \mathbf{W}\mathbf{x}+\mathbf{b})\), but mechanisms that do not imitate known FFLs are also of interest (see: Sec. 3.2).

**Time complexity:** If we denote by \(t_{m}(d)\) the time complexity for constructing an embedding: \(\Phi_{f}(\mathbf{x})\), then the time complexity for constructing \(\overline{\mathrm{SNNK}}_{f}(\mathbf{x},(\mathbf{W},\mathbf{b}))\) (given the pre-computed \(\Psi_{f}(\mathbf{w}^{i},b_{i})\) for \(i=0,...,l-1\)) is: \(T_{m,l}(d)=ml+t_{m}(d)\). In Sec. 3.1 we show an algorithm for constructing URFs in time \(t_{m}(d)=O(md)\) and thus computational gains are provided as compared to the regular FFL (with time complexity \(O(ld)\)) as long as \(m\ll\min(l,d)\).

**FFL compression:** As already mentioned in Sec. 1, the key observation is that in the setting where the layer is learned (and thus \(\mathbf{w}^{0},...,\mathbf{w}^{l-1}\) are learnable), mapping \(\Psi_{f}\) **does not even need to be applied**, since the vectors \(\boldsymbol{\omega}^{j}\stackrel{{\mathrm{def}}}{{=}}\Psi_{f}( \mathbf{w}^{j},b_{j})\) for \(j=0,...,l-1\) can be interpreted as unstructured learnable vectors. Thus the number of trainable parameters of the SNNK layer is \(O(ml)\), instead of \(O(dl)\), and consequently, the FFL is effectively compressed if \(m\ll d\).

### Universal Random Features (URFs)

In this section, we show how to construct embeddings \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\). We denote by \(\mathrm{FT}_{f}\) the _Fourier Transform_ of \(f\), where \(i\in\mathbb{C}\) satisfies: \(i^{2}=-1\): \[\mathrm{FT}_{f}(\xi)=\int_{\mathbb{R}}f(z)\exp(-2\pi i\xi z)dz \tag{5}\] If the integral does not exist in the classical Riemannian sense, we use its distributional interpretation. We rewrite \(\mathrm{FT}_{f}\) as: \(\mathrm{FT}_{f}=\mathrm{FT}_{f}^{\mathrm{re},+}-\mathrm{FT}_{f}^{\mathrm{re},- }+i\mathrm{FT}_{f}^{\mathrm{im},+}-i\mathrm{FT}_{f}^{\mathrm{im},-}\), where: \(\mathrm{FT}_{f}^{\mathrm{re},+},\mathrm{FT}_{f}^{\mathrm{re},-},\mathrm{FT}_{ f}^{\mathrm{im},+},\mathrm{FT}_{f}^{\mathrm{im},-}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\). Without loss of generality, we will assume that all four functions are not identically zero.
### Universal Random Features (URFs)

In this section, we show how to construct the embeddings \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\). We denote by \(\mathrm{FT}_{f}\) the _Fourier Transform_ of \(f\), where \(i\in\mathbb{C}\) satisfies: \(i^{2}=-1\): \[\mathrm{FT}_{f}(\xi)=\int_{\mathbb{R}}f(z)\exp(-2\pi i\xi z)dz \tag{5}\] If the integral does not exist in the classical Riemannian sense, we use its distributional interpretation. We rewrite \(\mathrm{FT}_{f}\) as: \(\mathrm{FT}_{f}=\mathrm{FT}_{f}^{\mathrm{re},+}-\mathrm{FT}_{f}^{\mathrm{re},-}+i\mathrm{FT}_{f}^{\mathrm{im},+}-i\mathrm{FT}_{f}^{\mathrm{im},-}\), where: \(\mathrm{FT}_{f}^{\mathrm{re},+},\mathrm{FT}_{f}^{\mathrm{re},-},\mathrm{FT}_{f}^{\mathrm{im},+},\mathrm{FT}_{f}^{\mathrm{im},-}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\). Without loss of generality, we will assume that all four functions are not identically zero. Let us denote by \(\overline{\mathcal{P}}_{0},\overline{\mathcal{P}}_{1},\overline{\mathcal{P}}_{2},\overline{\mathcal{P}}_{3}\) some probabilistic distributions on \(\mathbb{R}\) (e.g. Gaussian) and by \(\overline{p}_{0},\overline{p}_{1},\overline{p}_{2},\overline{p}_{3}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\) their corresponding density functions. Furthermore, denote by \(\mathcal{P}_{0},\mathcal{P}_{1},\mathcal{P}_{2},\mathcal{P}_{3}\) the probabilistic distributions with densities \(p_{0},p_{1},p_{2},p_{3}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\) proportional to \(\mathrm{FT}_{f}^{\mathrm{re},+},\mathrm{FT}_{f}^{\mathrm{re},-},\mathrm{FT}_{f}^{\mathrm{im},+},\mathrm{FT}_{f}^{\mathrm{im},-}\), respectively. We can then write: \[\begin{split} f(z)=\int_{\mathbb{R}}\mathrm{FT}_{f}(\xi)\exp(2\pi i\xi z)d\xi&=\sum_{j=0}^{3}c_{j}\int_{\mathbb{R}}\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi z)\overline{p}_{j}(\xi)d\xi\\ &=\sum_{j=0}^{3}c_{j}\mathbb{E}_{\xi\sim\overline{p}_{j}}\left[\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi z)\right],\end{split} \tag{6}\] where: \(c_{0}=\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{re},+}(\tau)d\tau\), \(c_{1}=-\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{re},-}(\tau)d\tau\), \(c_{2}=i\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{im},+}(\tau)d\tau\), and \(c_{3}=-i\int_{\mathbb{R}}\mathrm{FT}_{f}^{\mathrm{im},-}(\tau)d\tau\). For \(\mathbf{x},\mathbf{w}\in\mathbb{R}^{d},b\in\mathbb{R}\), let us denote: \[\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=c_{j}\mathbb{E}_{\xi\sim\overline{p}_{j}}\left[\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp\left(2\pi i\xi(\mathbf{x}^{\top}\mathbf{w}+b)\right)\right]=c_{j}\mathbb{E}_{\xi\sim\overline{\mathcal{P}}_{j}}[S_{j}(\xi,b)\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}(\xi))] \tag{7}\] for \(S_{j}(\xi,b)=\frac{p_{j}(\xi)}{\bar{p}_{j}(\xi)}\exp(2\pi i\xi b)\), \(\widehat{\mathbf{x}}(\xi)=\rho(\xi)\mathbf{x}\), \(\widehat{\mathbf{w}}(\xi)=\eta(\xi)\mathbf{w}\), where \(\rho(\xi),\eta(\xi)\in\mathbb{C}\) satisfy: \(\rho(\xi)\eta(\xi)=2\pi i\xi\). Inside the expectation in Eq. 7, we recognize the softmax-kernel value \(\mathrm{K}_{\exp}(\widehat{\mathbf{x}}(\xi),\widehat{\mathbf{w}}(\xi))=\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}(\xi))\). We thus disentangle \(\widehat{\mathbf{x}}(\xi)\) from \(\widehat{\mathbf{w}}(\xi)\) there, by applying the softmax-kernel linearization mechanism from Likhosherstov et al. (2022):
\(\exp(\widehat{\mathbf{x}}^{\top}(\xi)\widehat{\mathbf{w}}(\xi))=\mathbb{E}_{\mathbf{g}\sim\mathcal{N}(0,\mathbf{I}_{d})}[\Lambda_{\mathbf{g}}(\widehat{\mathbf{x}})\Lambda_{\mathbf{g}}(\widehat{\mathbf{w}})]\), where \(\Lambda_{\mathbf{g}}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is defined as follows for \(A\leq 0\): \[\Lambda_{\mathbf{g}}(\mathbf{z})=(1-4A)^{\frac{d}{4}}\exp(A\|\mathbf{g}\|_{2}^{2}+\sqrt{1-4A}\mathbf{g}^{\top}\mathbf{z}-\frac{\|\mathbf{z}\|_{2}^{2}}{2}) \tag{8}\] Thus \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=\mathbb{E}_{(\xi,\mathbf{g})\sim\overline{\mathcal{P}}_{j}\otimes\mathcal{N}(0,\mathbf{I}_{d})}[\Gamma_{\mathbf{g},\xi}^{1}(\mathbf{x})\Gamma_{\mathbf{g},\xi}^{2}(\mathbf{w},b)]\) for \(\Gamma_{\mathbf{g},\xi}^{1}(\mathbf{x}),\Gamma_{\mathbf{g},\xi}^{2}(\mathbf{w},b)\) given as: \[\Gamma_{\mathbf{g},\xi}^{1}(\mathbf{x})=\Lambda_{\mathbf{g}}(\rho(\xi)\mathbf{x}),\ \Gamma_{\mathbf{g},\xi}^{2}(\mathbf{w},b)=c_{j}S_{j}(\xi,b)\Lambda_{\mathbf{g}}(\eta(\xi)\mathbf{w}) \tag{9}\] This observation directly leads to the RF mechanism for the estimation of \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)\). We can rewrite: \(\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)=\mathbb{E}[\Phi^{j}(\mathbf{x})^{\top}\Psi^{j}(\mathbf{w},b)]\) for \((\xi_{1},\mathbf{g}_{1}),...,(\xi_{m},\mathbf{g}_{m})\sim\overline{\mathcal{P}}_{j}\otimes\mathcal{N}(0,\mathbf{I}_{d})\) and: \[\begin{split}\Phi^{j}(\mathbf{x})=\frac{1}{\sqrt{m}}(\Gamma_{\mathbf{g}_{1},\xi_{1}}^{1}(\mathbf{x}),...,\Gamma_{\mathbf{g}_{m},\xi_{m}}^{1}(\mathbf{x}))^{\top},\\ \Psi^{j}(\mathbf{w},b)=\frac{1}{\sqrt{m}}(\Gamma_{\mathbf{g}_{1},\xi_{1}}^{2}(\mathbf{w},b),...,\Gamma_{\mathbf{g}_{m},\xi_{m}}^{2}(\mathbf{w},b))^{\top}\end{split} \tag{10}\] Several strategies can be used to construct the samples \((\xi_{1},\mathbf{g}_{1}),...,(\xi_{m},\mathbf{g}_{m})\), e.g. iid sampling, or block-iid sampling with a fixed \(\xi\) used within a block but constructed independently for different blocks. In the experiments, we also choose: \(\rho(\xi)=2\pi i\xi\) and \(\eta(\xi)=1\).

The case of discrete \(\overline{\mathcal{P}}_{j}\) with a finite number of atoms: Assume that \((\xi^{1},...,\xi^{K})\) is a sequence of atoms with the corresponding positive probabilities: \((p_{1},...,p_{K})\). Then one can also construct \(K\) pairs of RF-vectors \((\Phi^{j}(\mathbf{x};k),\Psi^{j}(\mathbf{w},b;k))_{k=1}^{K}\), each obtained by replacing \(\overline{\mathcal{P}}_{j}\) with the distribution corresponding to the deterministic constant \(\xi^{k}\), and get \(\Phi^{j}(\mathbf{x}),\Psi^{j}(\mathbf{w},b)\) by concatenating the vectors from \((\Phi^{j}(\mathbf{x};k))_{k=1}^{K}\) and \((\Psi^{j}(\mathbf{w},b;k))_{k=1}^{K}\), respectively. This strategy is effective if \(K\) is small. Note that: \(f(\mathbf{x}^{\top}\mathbf{w}+b)=\sum_{j=0}^{3}\widehat{f}_{j}(\mathbf{x},\mathbf{w},b)\) and thus \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\) can be defined as: \[\Phi_{f}(\mathbf{x})=\mathrm{concat}\left(\left(\Phi^{j}(\mathbf{x})\right)_{j=0}^{3}\right),\ \Psi_{f}(\mathbf{w},b)=\mathrm{concat}\left(\left(\Psi^{j}(\mathbf{w},b)\right)_{j=0}^{3}\right) \tag{11}\] for the vector concatenation operation \(\mathrm{concat}\), completing the description of the URF mechanism.

**Remark 3.1** (boundedness): _We observe that for upper-bounded \(\|\mathbf{x}\|_{2},\|\mathbf{w}\|_{2},|b|\), the entries of \(\Phi_{f}(\mathbf{x})\) and \(\Psi_{f}(\mathbf{w},b)\) are also upper-bounded as long as \(A<0\). This follows directly from the formula for \(\Lambda_{\mathbf{g}}(\mathbf{z})\) in Eq. 8._
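The identity behind Eq. 8 is easy to verify numerically. A short NumPy sketch (our own illustration; \(A=-0.1\) is an arbitrary admissible choice) estimates \(\exp(\mathbf{x}^{\top}\mathbf{w})\) by Monte Carlo:

```python
import numpy as np

def lam(z, G, A=-0.1):
    """Eq. 8: Lambda_g(z) = (1-4A)^{d/4} exp(A||g||^2 + sqrt(1-4A) g.z - ||z||^2 / 2),
    evaluated for all rows g of G at once."""
    d = z.shape[0]
    return ((1 - 4 * A) ** (d / 4)
            * np.exp(A * np.sum(G * G, axis=1)
                     + np.sqrt(1 - 4 * A) * (G @ z) - 0.5 * (z @ z)))

rng = np.random.default_rng(0)
d, m = 16, 200_000
x, w = rng.normal(size=d) / np.sqrt(d), rng.normal(size=d) / np.sqrt(d)
G = rng.normal(size=(m, d))                           # rows g ~ N(0, I_d)
print(np.mean(lam(x, G) * lam(w, G)), np.exp(x @ w))  # the two should nearly match
```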
Trigonometric activation functions: Let us assume now that \(f(z)=\sin(z)\) or \(f(z)=\cos(z)\). Note that even though neither has a Fourier Transform in the classical Riemannian sense, both have trivial Fourier Transforms in the broader distributional sense. To see this, we can rewrite both activations as: \(\sin(z)=\frac{\exp(iz)-\exp(-iz)}{2i}\) and \(\cos(z)=\frac{\exp(iz)+\exp(-iz)}{2}\). Therefore the corresponding distributions used in the URF derivations above become binary distributions over \(\{-\frac{1}{2\pi},\frac{1}{2\pi}\}\). This observation has interesting practical consequences, since it leads to a conceptually simple linearization of the FFLs applied in SIREN networks (see: Sec. 4.2).

### Beyond regular FFLs: the curious case of the ReLU-SNNK layer

We also propose another SNNK layer which is not directly inspired by any known FFL, but turns out to work very well in practice (see: Sec. 4.3.2). In this case, the mappings \(\Phi\) and \(\Psi\) are defined as: \(\Phi(\mathbf{x})=\mathrm{ReLU}(\frac{1}{\sqrt{l}}\mathbf{G}\mathbf{x})\), \(\Psi(\mathbf{w},b)=\mathrm{ReLU}(\frac{1}{\sqrt{l}}\mathbf{G}\mathbf{w})\) for a Gaussian matrix \(\mathbf{G}\in\mathbb{R}^{l\times d}\) with entries sampled independently at random from \(\mathcal{N}(0,1)\). One can ask what kernel this pair of maps corresponds to. It turns out that the answer is particularly elegant.

**Theorem 3.2** (arc-cosine kernels; Cho & Saul (2011)): _The \(n\)th-order arc-cosine kernel \(\mathrm{K}_{n}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is defined as: \(\mathrm{K}_{n}(\mathbf{x},\mathbf{y})=\frac{1}{\pi}\|\mathbf{x}\|_{2}^{n}\|\mathbf{y}\|_{2}^{n}J_{n}(\alpha_{\mathbf{x},\mathbf{y}})\), where \(\alpha_{\mathbf{x},\mathbf{y}}\in[0,\pi]\) stands for the angle between \(\mathbf{x}\) and \(\mathbf{y}\) and \(J_{n}(\theta)\stackrel{{\mathrm{def}}}{{=}}(-1)^{n}(\sin(\theta))^{2n+1}\left(\frac{1}{\sin(\theta)}\frac{\partial}{\partial\theta}\right)^{n}\left(\frac{\pi-\theta}{\sin(\theta)}\right)\). Then, \(\mathrm{K}_{n}\) can be linearized as: \(\mathrm{K}_{n}(\mathbf{x},\mathbf{y})=2\mathbb{E}[\Gamma_{n}(\mathbf{x})\Gamma_{n}(\mathbf{y})]\) for \(\Gamma_{n}(\mathbf{v})\stackrel{{\mathrm{def}}}{{=}}(\mathrm{ReLU}(\mathbf{v}^{\top}\boldsymbol{\omega}))^{n}\) and \(\boldsymbol{\omega}\sim\mathcal{N}(0,\mathbf{I}_{d})\)._

We conclude that our proposed \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) layer is a scalable version of the layer mapping \(\mathbf{x}\mapsto(\frac{1}{2}\mathrm{K}_{1}(\mathbf{w}^{0},\mathbf{x}),...,\frac{1}{2}\mathrm{K}_{1}(\mathbf{w}^{l-1},\mathbf{x}))^{\top}\) for \(\mathbf{x}\in\mathbb{R}^{d}\) and the transposed rows \(\mathbf{w}^{0},...,\mathbf{w}^{l-1}\) of \(\mathbf{W}\in\mathbb{R}^{l\times d}\).

**Remark 3.3**: _The \(\mathrm{ReLU}\)-\(\mathrm{SNNK}\) layer is not a regular FFL, since the values of its output dimensions cannot be re-written as \(f(\mathbf{x}^{\top}\mathbf{w}^{i}+b_{i})\) for some \(f:\mathbb{R}\rightarrow\mathbb{R}\) (interestingly, after the \(\Gamma\)-based pre-processing, it can still be interpreted as a dot-product kernel). This shows that the SNNK mechanism is capable of modeling relationships beyond those of regular FFLs._
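Theorem 3.2 with \(n=1\) can likewise be checked empirically; the following NumPy sketch (ours) compares the Monte-Carlo estimate \(2\mathbb{E}[\Gamma_{1}(\mathbf{x})\Gamma_{1}(\mathbf{y})]\) against the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 500_000
x, y = rng.normal(size=d), rng.normal(size=d)

# Monte-Carlo side: K_1(x, y) ~= 2 E_omega[ReLU(omega.x) * ReLU(omega.y)]
W = rng.normal(size=(m, d))                       # rows: omega ~ N(0, I_d)
est = 2.0 * np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

# Closed form: K_1(x, y) = ||x|| ||y|| (sin(t) + (pi - t) cos(t)) / pi
cos_t = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
t = np.arccos(np.clip(cos_t, -1.0, 1.0))
exact = (np.linalg.norm(x) * np.linalg.norm(y)
         * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi)
print(est, exact)  # agree up to Monte-Carlo error
```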
### Bundling neural networks with SNNKs

We are ready to propose the neural network _bundling process_, relying on the SNNK-primitives. Consider the following deep NN module with input \(\mathbf{x}=\mathbf{x}_{0}\in\mathbb{R}^{d_{0}}\) and output \(\mathbf{y}=\mathbf{x}_{L}\in\mathbb{R}^{d_{L}}\): \[\begin{cases}\mathbf{x}_{i+1}=f_{i+1}(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b}_{i})\text{; }i=0,...,L-1,\\ \mathbf{x}_{0}=\mathbf{x}\end{cases} \tag{12}\] for: (1) matrices \(\mathbf{W}_{i}\in\mathbb{R}^{d_{i+1}\times d_{i}}\), (2) bias vectors: \(\mathbf{b}_{i}\in\mathbb{R}^{d_{i+1}}\), and (3) activations: \(f_{i}:\mathbb{R}\rightarrow\mathbb{R}\). To understand how the bundling process works, we start by replacing the first FFL in Eq. 12 with its SNNK analogue. We obtain the following computational block: \[\begin{cases}\widehat{\mathbf{x}}_{i+1}=\widehat{f}_{i+1}(\widehat{\mathbf{W}}_{i}\widehat{\mathbf{x}}_{i}+\widehat{\mathbf{b}}_{i})\text{ for }i=0,...,L-2,\\ \widehat{\mathbf{x}}_{0}=\Phi_{f_{1}}(\mathbf{x}_{0}),\\ \widehat{\mathbf{W}}_{0}=\mathbf{W}_{1}\Psi_{f_{1}}(\mathbf{W}_{0},\mathbf{b}_{0})\text{; }\widehat{\mathbf{W}}_{i}=\mathbf{W}_{i+1}\text{ for }i=1,...,L-2,\\ \widehat{f}_{i+1}=f_{i+2},\ \widehat{\mathbf{b}}_{i}=\mathbf{b}_{i+1}\text{ for }i=0,...,L-2\end{cases} \tag{13}\] In the system of equations above, \(\Psi_{f_{1}}(\mathbf{W}_{0},\mathbf{b}_{0})\) is a matrix with transposed rows of the form \(\Psi_{f_{1}}(\mathbf{W}_{0}^{j},\mathbf{b}_{0}^{j})\), where \(\mathbf{W}_{0}^{j}\) for \(j=0,...,d_{1}-1\) are the transposed rows of \(\mathbf{W}_{0}\) and \(\mathbf{b}_{0}=(\mathbf{b}_{0}^{0},...,\mathbf{b}_{0}^{d_{1}-1})^{\top}\). We have thus successfully replaced a module of \(L\) feedforward layers with a module of \(L-1\) feedforward layers. By continuing this procedure, we can ultimately get rid of all the FFLs and obtain an estimator \(\overline{\mathbf{y}}\) of \(\mathbf{y}\), given as \(\overline{\mathbf{y}}=\overline{\mathbf{W}}\,\overline{\mathbf{x}}\), where \[\begin{cases}\overline{\mathbf{x}}=\Phi_{f_{L}}\left(\Phi_{f_{L-1}}(...\Phi_{f_{1}}(\mathbf{x}_{0})...)\right)\\ \overline{\mathbf{W}}=\Psi_{f_{L}}\left(\mathbf{W}_{L-1}\Psi_{f_{L-1}}(...\mathbf{W}_{2}\Psi_{f_{2}}(\mathbf{W}_{1}\Psi_{f_{1}}(\mathbf{W}_{0},\mathbf{b}_{0}),\mathbf{b}_{1})...),\mathbf{b}_{L-1}\right)\in\mathbb{R}^{d_{L}\times m}\end{cases} \tag{14}\] This has several important consequences. In inference, replacing the matrices \(\mathbf{W}_{0},...,\mathbf{W}_{L-1}\) with one matrix \(\overline{\mathbf{W}}\) is effectively a compression scheme (which does not necessarily need to be applied to all the layers, but can target a particular consecutive set of layers of interest). If we apply the bundling process to the entire deep neural network, we effectively provide its two-tower factorization with the input disentangled from the parameters. In training, we can treat \(\overline{\mathbf{W}}\) as an unstructured parameter matrix and directly learn it (see results in Appendix H.3, Table 5). Since the output \(\overline{\mathbf{y}}\) is now modeled as an action of the unstructured learnable matrix \(\overline{\mathbf{W}}\) on the _pre-processed_ input \(\overline{\mathbf{x}}\), for several loss functions there exists an explicit formula for the optimal \(\overline{\mathbf{W}}\). This is the case in particular for the standard regression loss (see discussion in Appendix H.3). If bundling is applied to a particular module, backpropagation through it is not necessary, since there exists an explicit formula for the corresponding Jacobian.
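The recursion in Eqs. 13-14 can be written down structurally as follows. This is a sketch under the assumption that `phis[i]` and `psis[i]` implement the RF pair \((\Phi_{f_{i+1}},\Psi_{f_{i+1}})\) (with \(\Psi\) applied row-wise to a weight matrix and its bias vector, as in the text); the accuracy of \(\overline{\mathbf{W}}\,\overline{\mathbf{x}}\approx\mathbf{y}\) rests entirely on those maps.

```python
import numpy as np

def bundle(Ws, bs, phis, psis, x):
    """Collapse an L-layer feedforward stack (Eq. 12) into (x_bar, W_bar)
    such that y ~= W_bar @ x_bar (Eq. 14)."""
    L = len(Ws)
    # Input tower: x_bar = Phi_{f_L}( ... Phi_{f_1}(x) ... )
    x_bar = x
    for i in range(L):
        x_bar = phis[i](x_bar)
    # Parameter tower: fold weights inward, innermost layer first.
    M = psis[0](Ws[0], bs[0])              # Psi_{f_1}(W_0, b_0), shape (d_1, m)
    for i in range(1, L):
        M = psis[i](Ws[i] @ M, bs[i])      # Psi_{f_{i+1}}(W_i @ M, b_i)
    return x_bar, M                        # y ~= M @ x_bar
```

In inference, only the bundled matrix `M` needs to be stored for the collapsed block.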
## 4 Experiments

We present an extensive empirical evaluation of SNNKs on a wide range of experiments. More details on each of the experiments can be found in Appendix E.

### Pointwise Kernel Estimation

As a warm-up, we test the accuracy of the applied RF-mechanisms on synthetic data. We take \(d=2000\) and \(l=1\). We consider: **(a)** a SIREN-FFL with the activation function \(f(u)=\sin(u)\) and bias \(b=0.5\), **(b)** an arc-cosine-FFL from Sec. 3.2. The entries of the weight vectors \(\mathbf{w}\) and the inputs to the layers are taken independently from \(\frac{1}{\sqrt{d}}\text{Unif}(0,1)\). We report the mean relative error of the NN output (by averaging over \(s=500\) instantiations of the RF-mechanism) made by the RF-based estimator, as well as the empirical standard deviation, as a function of the number of random projections. This setup corresponds to quantifying the accuracy of the kernel estimator pointwise. The results are presented in Fig. 2 (g and h). Our SNNK provided an accurate approximation with a much smaller number of random projections than the dimensionality \(d\) of the input vectors.

Figure 2: Architecture for **(a)** SNNK layer (see Section A), **(b)** SNNK-Adpt layer, **(c)** image fitting (SIREN), MNIST and UCI experiments, **(d)** SNNK-QPNN model, **(e)** SNNK-inspired Adapter-ViT layer, **(f)** SNNK-inspired Adapter-BERT layer. **(g, h)**: The relative error (obtained by averaging over \(s=500\) instantiations of the RF-mechanism) made by the RF-based estimator on a particular entry of the output of the **(g)** SIREN-FFL and **(h)** arc-cosine-FFL, as a function of the number of random projections \(p\) (see: Sec. 4.1). The maximum \(p\) for (g) is larger than for (h), as (g) in theory produces larger variance per random projection. The corresponding standard deviations are negligible: **(g)** \(5\cdot 10^{-8}\), \(10^{-12}\), \(5\cdot 10^{-8}\), \(10^{-8}\), \(10^{-12}\), \(2.5\cdot 10^{-9}\), \(10^{-12}\), \(5\cdot 10^{-9}\), \(10^{-12}\), \(10^{-12}\), \(10^{-10}\), \(10^{-12}\); **(h)** \(10^{-12}\), \(3\cdot 10^{-8}\), \(3\cdot 10^{-8}\), \(2\cdot 10^{-8}\), \(10^{-12}\), \(5\cdot 10^{-9}\).

### Toy Experiments

SNNKs are versatile and can be used as a drop-in replacement for FFLs in a wide variety of NNs, like the SIREN network (Sitzmann et al., 2020), QPNN, a physics-informed neural network (PINN) used to solve the Hamiltonian for quantum physical systems (Sehanobish et al., 2021), and a simple multi-layer perceptron (MLP) for classification on MNIST (LeCun and Cortes, 2010). We use the sine-activation variant for the first two experiments and the ReLU variant for MNIST. We use \(32\) random features for the 2-body problem and MNIST, and \(64\) random features for the image-fitting problem. We match the performance of the baseline NNs on the 2-body and image-fitting problems (see Figure 3) and outperform the baseline on MNIST (Figure 9), while incurring lower training costs. For additional details regarding these experiments, see Appendix E.1.

### Finetuning Experiments

In this subsection, we show how SNNKs can be used for parameter-efficient finetuning. For experiments on text, we use the GLUE benchmark consisting of 8 different natural language understanding tasks (Wang et al., 2018). For vision tasks, we use CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). BERT-base (Devlin et al., 2019) is used as the backbone for text experiments and ViT (Kolesnikov et al., 2021) for image experiments. Our code is built on top of the Transformers (Wolf et al., 2020) and adapter Transformer (Pfeiffer et al., 2020) libraries.
Detailed comparisons with various baselines can be found in Appendix I, and additional experiments can be found in Appendix H.

#### 4.3.1 Linearizing the Pooler Layer in Transformers

For text classification tasks, a SNNK layer can be used as a drop-in replacement for the pooler layer, which is a linear layer with a tanh activation. For this set of experiments, the base model is frozen and only the pooler and classifier weights are tuned. We get computational gains, as the number of random features employed by SNNK is smaller than the hidden size of the Transformer. More details are presented in Appendix E.2. On the GLUE dev set, our SNNK-linearized pooler models outperform the baselines on 5 out of 8 tasks (Table 1 (top half)). Additional results can be found in Appendix H. In this setting, the linearized pooler weights can be merged with the classifier weights to create a weight matrix of size (number of random features \(\times\) number of classes), and then one can simply store the newly merged layer instead of separately storing the trained classifier and pooler layers. This dramatically _reduces_ the storage from 18.92 Megabit to only **0.02** Megabit, leading to a compression factor of \(\mathbf{1/1000}\). More details are presented in Appendix C. Ablation studies on the number of random parameters for this experimental setting are presented in Appendix G.

Figure 3: (1) **Left column**: Injecting SNNK into a PINN network to approximate the potential energy of the 2-body system. Top to bottom: ground-truth potential, potential learned by QPNN (Sehanobish et al., 2021), and potential learned by QPNN-SNNK. QPNN-SNNK can learn the potential function perfectly, even using fewer trainable parameters than the baseline QPNN. (2) **Rightmost three columns**: SIREN network on the first row, fitting not only the image but also the gradients. SNNK on the bottom row produces an accurate approximation of the above.

Figure 4: Comparison of trainable parameters between various layers/modules and the drop-in replacement SNNK layers. Results for CIFAR-10 and CIFAR-100 for SNNK-Adapter models.

#### 4.3.2 SNNK-inspired Adapter Layers

Adapters in Transformers were first introduced in (Houlsby et al., 2019), and there has been a lot of work designing different architectures (Pfeiffer et al., 2020; Karimi Mahabadi et al., 2021; Moosavi et al., 2022) and unifying various paradigms (Moosavi et al., 2022; He et al., 2022). Adapters are bottleneck MLPs which are (generally) added twice to each Transformer layer. In our work, we replace each adapter block by a single SNNK layer (Figure 2 (e) and (f)) using only \(\mathbf{16}\) random features, resulting in a big drop in trainable parameters (see Figure 4); a sketch of such a layer is given below. Figure 2 (b) shows the architecture of the SNNK-inspired adapter layers. Additional details are presented in Appendix B. As is customary for adapter experiments, the base model is frozen and only the adapters and the classifier are tuned. Table 1 (bottom half) shows our results on using SNNK layers in place of adapters on the GLUE dev set. We outperform the baseline on \(5\) out of \(8\) datasets while employing only \(\mathbf{1/3}\) of the training parameters. On MNLI, it is noted in (Houlsby et al., 2019) that using a smaller adapter size causes worse performance, and that a performance boost can be achieved by increasing the size of the adapter (\(256\) is used in their case). Similar to this observation, we note that we can improve performance and match the baselines on large datasets (e.g. MNLI, QNLI) as we increase the number of random features (see Figure 5).
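As a rough sketch of the drop-in replacement described above (our assumptions on scaling and initialization; not the exact architecture of Figure 2): a ReLU-SNNK adapter with \(m=16\) random features adds a residual update computed from a frozen random projection and a small trainable matrix.

```python
import torch
import torch.nn as nn

class SNNKAdapter(nn.Module):
    """ReLU-SNNK-style adapter: h + Omega^T ReLU(G h), only Omega is trained."""
    def __init__(self, hidden: int, m: int = 16):
        super().__init__()
        self.register_buffer("G", torch.randn(m, hidden) / hidden ** 0.5)
        self.omega = nn.Parameter(torch.zeros(m, hidden))  # zero-init: starts as identity (our choice)

    def forward(self, h: torch.Tensor) -> torch.Tensor:    # h: (..., hidden)
        return h + torch.relu(h @ self.G.T) @ self.omega

adapter = SNNKAdapter(hidden=768)            # 16 * 768 trainable parameters
out = adapter(torch.randn(4, 128, 768))      # (batch, seq, hidden)
```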
Our method also produces competitive performance on image datasets like CIFAR-10 and CIFAR-100 (see Figure 4, right two plots). Detailed comparisons with SOTA parameter-efficient finetuning methods can be found in Table 6 (vision tasks) and in Table 7 (GLUE tasks). Moreover, we note that our methods are completely orthogonal to techniques such as the gating mechanism in (Mao et al., 2022) or algorithms relying on dropping suitable adapter layers (Moosavi et al., 2022; Ruckle et al., 2021). Thus they can be easily combined with them.

### Experiments on UCI datasets

We have conducted experiments with a variety of real-world datasets found in the UCI Machine Learning Repository (UCI MLR)1. We trained a three-layer MLP model as the baseline (see Appendix Sec. F.5 for details). We varied the output size of the middle layer to train MLPs of different sizes. For our method, we replace the middle layer with an SNNK (Figure 2 (c)). SNNK matches or outperforms the baseline while using only a fraction of the training parameters (Figure 6).

Figure 5: Ablation with different numbers of random features for the ReLU-SNNK-adapter experiments on the GLUE dev set. \(AA\) denotes the reported adaptable-adapter numbers in Moosavi et al. (2022).

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline
Dataset & RTE & MRPC & QNLI & QQP & SST2 & MNLI & STSB & CoLA \\
\hline
Pooler baseline & 52.5 & 51.3 & 74.5 & \(\mathbf{72.0}\) & 51.9 & \(\mathbf{56.4}\) & 75.1 & 20.4 \\
SNNK-pooler (ours) & \(\mathbf{61.36\pm 1.15}\) & \(\mathbf{82.07\pm 1.07}\) & \(\mathbf{76.52\pm 0.27}\) & \(\mathbf{76.01\pm 0.17}\) & \(\mathbf{65.21\pm 0.34}\) & \(\mathbf{56.02\pm 0.27}\) & \(\mathbf{76.82\pm 0.27}\) & \(\mathbf{26.81\pm 0.30}\) \\
\hline
Adapter baseline (Houlsby et al., 2019) & \(\mathbf{61.39\pm 1.27}\) & \(\mathbf{51.90\pm 1.06}\) & \(\mathbf{90.30\pm 0.25}\) & \(\mathbf{88.09\pm 0.16}\) & \(\mathbf{91.31\pm 0.31}\) & \(\mathbf{52.09\pm 0.47}\) & \(\mathbf{83.52\pm 0.17}\) & \(\mathbf{51.41\pm 1.28}\) \\
SNNK-adapter (ours) & \(\mathbf{60.68\pm 1.24}\) & \(\mathbf{91.26\pm 1.39}\) & \(\mathbf{90.44\pm 0.16}\) & \(\mathbf{58.52\pm 0.23}\) & \(\mathbf{92.31\pm 0.27}\) & \(\mathbf{52.06\pm 0.17}\) & \(\mathbf{88.81\pm 0.14}\) & \(\mathbf{58.21\pm 0.63}\) \\
\hline
\end{tabular}
\end{table}
Table 1: SNNK experiments on GLUE benchmarks. MCC score is reported for CoLA, F1 score is reported for MRPC and QQP, and Spearman correlation is reported for STSB. Accuracy scores are reported for the other tasks. All results are obtained by averaging over 5 seeds.

## 5 Conclusion

We present scalable neural network kernels (SNNK), a novel efficient NN computational model that can be used to replace regular feedforward layers in MLPs, where inputs and parameters are disentangled and connected only in the final computation via a dot-product kernel. We introduce the general mechanism of universal random features (URFs) to instantiate SNNKs, show that SNNKs are capable of encoding subtle relationships between parameter- and input-vectors beyond functions of their dot-products, and finally, explain how they lead to the compactification of the NN stack via the so-called bundling process. We complement our theoretical findings with an exhaustive empirical analysis, from pointwise kernel estimation to training Transformers with adapters.
## Ethics Statement

This paper focuses mostly on the algorithmic properties of techniques linearizing the kernels associated with feedforward layers' calculations for computational gains. The experiments with adapter-based fine-tuning of Transformers are presented to illustrate the main concepts. It should be noted, though, that Transformers should be used cautiously, given their considerable computational footprint (improved, however, when adapters are applied) and the corresponding carbon footprint.

## Reproducibility Statement

Hyperparameters to reproduce each experiment are detailed in Section F. All the code to reproduce the experiments will be provided upon acceptance of this preprint.

## Author Contributions

AS and KC led the project. AS ran several empirical studies on the GLUE, CIFAR-10, and CIFAR-100 datasets and proposed several strategies for efficiently using SNNK-layers within Transformer models. KC proposed the FFL linearization-schemes, URFs, and the bundling mechanism, and implemented all linearization-schemes. YZ ran empirical studies on the GLUE, CIFAR-10, and CIFAR-100 datasets. AD implemented and ran all UCI experiments, helped with the GLUE/image experiments, proposed a strategy for efficiently using SNNK-layers, and created all figures in the experiments. VL proposed the idea of linearizing FFLs by disentangling inputs from weights. All authors contributed to the writing of the manuscript.
2309.02539
A Generalized Bandsplit Neural Network for Cinematic Audio Source Separation
Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem.
Karn N. Watcharasupat, Chih-Wei Wu, Yiwei Ding, Iroro Orife, Aaron J. Hipple, Phillip A. Williams, Scott Kramer, Alexander Lerch, William Wolcott
2023-09-05T19:19:22Z
http://arxiv.org/abs/2309.02539v3
# A Generalized Bandsplit Neural Network for Cinematic Audio Source Separation ###### Abstract Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions, which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem. Deep learning, psychoacoustical frequency scale, source separation, cinematic audio Footnote †: journal: ICASSP-OJSP ## 1 Introduction Audio source separation refers to the task of separating an audio mixture into one or more of its constituent components. More formally, consider a set of source signals \(\mathfrak{U}=\{\mathbf{u}_{i}\colon\mathbf{u}_{i}[n]\in\mathbb{R}^{D_{i}},\ n\in[\![0,M_{i}]\!]\}\), where \(i\) is the source index, \(D_{i}\) is the number of channels in the \(i\)th source, \(n\) is the sample index, \(M_{i}\) is the number of samples in the \(i\)th source, and \([\![a,b]\!]=\mathbb{Z}\cap[a,b]\). Not all of \(\mathfrak{U}\) is necessarily 'desired'. The desired subset \(\mathfrak{T}\subseteq\mathfrak{U}\) is often referred to as the set of 'target' sources or stems, while the undesired subset \(\mathfrak{N}=\mathfrak{U}\backslash\mathfrak{T}\) is often referred to as the set of 'noise' sources. An input signal to a source separation (SS) system can usually be modeled as a mixing process \[\mathbf{x}=\sum_{i}\mathcal{T}_{i}(\mathbf{u}_{i})\in\mathbb{R}^{C\times N}, \tag{1}\] where \(C\) is the number of channels in the mixture, \(N\) is the number of samples in the mixture, and \(\mathcal{T}_{i}\colon\mathbb{R}^{D_{i}\times M_{i}}\mapsto\mathbb{R}^{C\times N}\) is an audio signal transformation on the \(i\)th source. Some common operations represented by \(\mathcal{T}_{i}\) are the identity transformation, which produces an instantaneous mixture often seen in synthetic data; a convolution, which produces a convolutive mixture often used to model a linear time-invariant (LTI) process; and a nonlinear transformation, often seen in music mixing processes. The goal of an SS system is then to recover one, some, all, or composites of the elements of \(\mathfrak{T}\), up to some allowable deformation [1, 2]. Note, however, that (1) does not take into account global nonlinear operations such as dynamic range compression. Composite targets are also often encountered in tasks such as music SS (e.g. the 'accompaniment' stem) or cinematic SS (e.g. the 'effects' stem), where the true number of component stems a composite target may contain can be fairly large.
For simplicity concerning composite targets and multichannel sources, we will denote \(\mathfrak{S}=\{\mathbf{s}_{i}\colon\mathbf{s}_{i}=\sum_{j}\mathcal{T}_{j}( \mathbf{u}_{j}),\ \mathbf{u}_{j}\in\mathfrak{T},\ \mathbf{s}_{i}[n]\in\mathbb{R}^{C},\ n \in[\![0,N]\!]\}\) as the set of 'computational targets' of the algorithms. 'Targets' in this manuscript will refer to \(\mathfrak{S}\), as opposed to \(\mathfrak{T}\). Cinematic audio source separation (CASS) is a relatively new subtask of audio SS, most commonly concerned with extracting the dialogue, music, and effects stems from their mixture. Research traction in this new subtask can be credited to Petermann et al. [3, 4] and the Cinematic Sound Demixing track of the Sound Demixing Challenge [5], introduced in 2023. While the setup of the task can be easily generalized from standard SS setups, the nature of cinematic audio poses a unique problem not commonly seen in speech or music SS. Specifically, CASS is closely related to universal audio SS, in which nearly the entire ontological categories of audio (speech, music, sound of things, and environmental sounds) must be all retrieved with equal or similar importance. Moreover, the "music" and "effects" stems can be very non-homogeneous. Music can consist of sound made by a very wide variety of acoustic, electronic, and synthetic musical instruments. More challengingly, the effects stem consists of anything that is _not_ speech or music, but also sometimes consists of sounds made by musical instruments in a non-musical context. In this work, we adapted the Bandsplit RNN (BSRNN) [6] from the music SS task to the CASS task. In particular, we generalized the BSRNN architecture to potentially overlapping band definitions, introduced a loss function based on a combination of the 1-norm and the SNR loss, and modified the BSRNN from a set of single-stem models to a common-encoder system that can support any number of decoders. We further provide empirical results to demonstrate that the common-encoder setup provides superior results for hard-to-learn stems and allows generalization to previously untrained targets without the need for retraining the entire model. To the best of our knowledge, our proposed method1 is currently the state of the art on the Divide and Remaster (DnR) dataset [3]. Footnote 1: Replication code is available at github.com/karnwatcharasupat/bandit. ## II Related Work Most early audio SS research was originally focused on a mixture of speech signals, particularly due to the reliance on statistical signal processing and latent variable models [7], which do not work well with more complex audio signals such as music or environmental sounds. Specifically, most early systems [8, 9, 10] assume an LTI mixing process, allowing for retrieval of target stems by means of filtering [11], matrix (pseudo-)inversion for (over)determined systems \(C\geq D_{i}\)[12], or other similarity-based methods for underdetermined systems [13]. These methods, however, often require fairly strong assumptions on the source signals such as statistical independence, stationarity, and/or sparsity. As computational hardware became more powerful, more computationally complex methods also became viable. This allowed for the relaxation of many statistical requirements placed on the signals in pursuit of more data-driven methods and the possibility of performing SS on nonlinear mixtures of highly correlated stems. 
Time-frequency (TF) masking, in particular, became the dominant method of source extraction in deep SS [14]. While this has led to major improvements in extracted audio quality, it came at the cost of the interpretability once enjoyed by latent-variable models. Denote \(\mathbf{X}\in\mathbb{C}^{C\times F\times T}\) as the STFT of \(\mathbf{x}\), where \(F\) is the number of non-redundant frequency bins and \(T\) is the number of time frames. Similarly, denote \(\mathbf{S}_{i}\) as the STFT of the \(i\)th target source. Most masking SS systems use some form of \(\hat{\mathbf{S}}_{i}=\mathbf{X}\circ\mathbf{M}\), where \(\hat{\mathbf{S}}_{i}\) is the estimate of \(\mathbf{S}_{i}\), \(\circ\) is elementwise multiplication with broadcasting, and \(\mathbf{M}\) is the TF mask. Depending on the method, \(\mathbf{M}\) may be binary, real-valued, or complex-valued; it has the same TF shape as \(\mathbf{X}\), but may or may not be predicted separately for each channel. Although some works have generalized the masking operation to include additive components [15] or more complex operations [16], direct masking still remains the most common method of source extraction, particularly due to its direct connection with time-variant convolution in the time domain. Many deep architectures have been proposed to predict the TF masks: Open-Unmix [17] used a bidirectional LSTM (BiLSTM) to obtain a magnitude mask; SepFormer [18] applied a transformer to predict masks for speech separation, improving performance while allowing parallel computation; (Conv-)TasNet [19, 20] used masks on real-valued basis projections to allow real-time separation. Despite the popularity of mask-based methods, several works have explored mask-free architectures. Wave-U-Net [21] applies the U-Net structure to directly modify the mixture waveform. Built on Wave-U-Net, Demucs [22] incorporates a BiLSTM at the bottleneck. Hybrid Demucs [23] extends the idea of combining the time and frequency domains by applying two separate U-Nets, one for each domain, with a shared bottleneck BiLSTM for cross-domain information fusion. Hybrid Transformer Demucs [24] further improves the performance by replacing the BiLSTM bottleneck with a transformer bottleneck. KUIELab-MDX-Net [25] combines Demucs with a frequency-domain, U-Net-based architecture and uses a weighted average as the final output. Under the definition in (1), a number of non-generative audio enhancement tasks can also be considered special cases of audio SS, despite often not being actively thought of as such. Most non-generative implementations of noise suppression [26, 27], audio restoration [28], and dereverberation [29, 30] can be considered SS tasks with a noisy (and/or wet) mixture as input and a clean (and/or dry) target source as output. Dialogue enhancement often requires SS to extract the constituent stems before loudness adjustment is applied [31]. Extraction of the dialogue stem in CASS, in particular, can be seen as closely related to the task of speech enhancement, while that of the music-and-effects (M&E) stem can be seen as a speech suppression task.
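As a minimal illustration of the masking pipeline above (our sketch, with arbitrary STFT parameters, not any particular system's implementation):

```python
import torch

def extract_stem(x: torch.Tensor, mask: torch.Tensor,
                 n_fft: int = 2048, hop: int = 512) -> torch.Tensor:
    """S_hat_i = X * M: mask the mixture STFT, then invert back to time."""
    window = torch.hann_window(n_fft)
    X = torch.stft(x, n_fft, hop, window=window, return_complex=True)
    S_hat = X * mask                      # elementwise complex multiplication
    return torch.istft(S_hat, n_fft, hop, window=window, length=x.shape[-1])

x = torch.randn(44100)                    # 1 s of single-channel audio
F, T = 2048 // 2 + 1, 44100 // 512 + 1    # mask must match the STFT shape
s_hat = extract_stem(x, torch.ones(F, T, dtype=torch.complex64))
```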
Among deep learning-based SS models, several common meta-architectures exist. Models such as Open-Unmix [17] and BSRNN [6] have one fully independent model for each stem, with no shared learnable layers. While this is very simple to train, fine-tune, and run inference with, such a system suffers from the lack of information sharing between the stem-specific models. Adding additional stems to this kind of system involves creating a completely separate network. Some systems, such as Demucs [23, 24] and Conv-TasNet [20], use one shared model for all stems. This means that training and inference must happen for all stems at the same time. This setup is perhaps the most beneficial in terms of information sharing, but it is also difficult to understand the flow of information within the system, as all intermediate representations are entangled up until the last layer. It can also be very difficult to add an additional stem to the model, as it is not trivial to decide which parts of the model parameters may be safe to freeze or unfreeze. ## III Proposed Method Our proposed method builds upon the BSRNN model proposed in [6]. BSRNN itself is related to works that split the frequency bands into several different groups [32, 33], and to those that apply multi-path recurrent networks to deal with long sequences [34, 35]. The original BSRNN is very similar in structure to our proposed model in Fig. 1, but with a separate model per stem. Each BSRNN model consists of a bandsplitting module, a TF modeling module, and a mask estimator. The bandsplitting module in [6] partitions an input spectrogram along its frequency axis into \(B\) disjoint "bands", then, in parallel, performs a normalization and an affine transformation for each band. Each affine transformation has the same number \(D\) of output neurons. The TF module consists of a stack of bidirectional RNNs operating alternatingly along the time and band axes of the feature map. In [6], this consists of a stack of 12 pairs of residually-connected BiLSTMs. Finally, the mask estimation module consists of \(B\) parallel feedforward modules which produce \(B\) bandwise complex-valued masks. The overview of the proposed model is shown in Fig. 1. For clarity, BSRNN will only refer to the original model in [6]. Our proposed model will be referred to as "BandIt"2. Footnote 2: From **bandsplit**, and a reference to the multi-armed bandit problem. ### _Common Encoder_ In this work, we propose to use a common-encoder, multiple-decoder system. Treating multi-stem SS as a multi-task problem, this is akin to hard parameter sharing. This system allows information sharing to occur freely in the encoder section, but not in the decoders. It is likely that this can improve the information efficiency and generalizability of the model [36, 37]. A downside of this system is that adding a new decoder may or may not require the encoder to be retrained, depending on the generalizability of the feature maps after the initial training with the original set of stems. In addition to the potential information-theoretic benefits, the common-encoder structure offers a more practical benefit in terms of computational requirements. Training using the common-encoder system can reduce the number of parameters needed considerably, and thus reduce memory and hardware requirements. Additionally, in the case where not all decoders can be trained concurrently, simultaneous training can still be approximated by only attaching a subset of the decoders at each optimization step and alternating over them. Finally, this allows an arbitrary number of decoders to be attached and detached as needed during inference. As seen in Fig. 1, BSRNN can be modified into a common-encoder BandIt by sharing all modules up to the TF modeling module and only splitting into stem-specific modules at the mask estimator section.
Of course, many other possible points of splitting exist; we chose to split only after the TF modeling module in order to force it to learn a common representation that will work for all three stems. ### _Bandsplit Module_ The original definition of the bands in BSRNN has two clear attributes: (A1) the bandwidth in Hz generally increases with its constituent frequencies, and (A2) the number of bands is high in regions where the sources of a stem are typically most active. From a data compression perspective, this translates to the assumptions that (B1) information content per Hz decreases with increasing frequency, and (B2) information content is positively correlated with source activity. Both "priors" may seem trivial. However, the implementation can be tricky, as we will discuss below. In [6], band definitions were mostly handcrafted for each stem. This potentially limits the generalizability of the model and makes architecture design difficult when dealing with stems with unpredictable, non-homogeneous content, such as the "other" stem in MUSDB18 [38] and the effects stem in cinematic audio. In other words, the model is prone to prior mismatch when dealing with very diverse content. Moreover, the band definitions in [6] are all disjoint, i.e., each frequency bin is allocated to only one band. From a system reliability perspective, this means that the very first layer of BSRNN already has no redundancy provisioned; any loss of information occurring during the first affine transformation cannot be recovered by the other parallel affine modules.

Figure 1: Overview of the proposed model architecture, BandIt.

This also disproportionately affects semantic structures (i.e., the "blobs" in a spectrogram) that are located around band edges, since they will be broken up into two disjoint bands, with neither band able to encode their information well. To deal with these issues, we limit the prior assumption to only (B1), turning to psychoacoustically motivated band definitions in lieu of handcrafting. Additionally, we propose to add redundancy to the bandsplitting process in an attempt to reduce the amount of early information loss. Specifically, we will investigate five different band definitions based on four frequency scales with psychoacoustic motivations, namely, the mel scale, the equivalent rectangular bandwidth (ERB) scale, the Bark scale, and the 12-tone equal temperament (12-TET) Western musical scale.
Using the filterbank values, the band definitions are then created using a simple binarization criterion \[\mathfrak{F}_{b}=\{f\in[\![0,F]\!:\mathbf{W}[b,f]\!>0\},\ \forall b\in[\![0,B]\!]. \tag{3}\] We then define a subband \(\mathbf{X}_{b}\in\mathbb{C}^{C\times F_{b}\times T}\) of \(\mathbf{X}\) such that \[\mathbf{X}_{b}=\mathbf{X}[\!:_{c},\mathfrak{F}_{b},\!:_{t}],\ \forall b\in[\![0,B]\!]. \tag{4}\] The scales and the filterbanks used are detailed as follows, and visualized in Fig. 2. #### 2.2.1 Mel Scale The mel scale is one of the most used scales for the calculation of input features, such as the (log-)mel spectrogram and the mel-frequency cepstrum coefficients, for many audio tasks in machine learning and information retrieval. It is a measure of _tone height_[39]. In this work, we use the mel scale given in [40, p.128], where \[z_{\text{mel}}(f)=2595\log_{10}\left(1+f/700\right). \tag{5}\] The filterbank used is comprised of triangular-shaped filters with the \(b\)th filter having band edges \(\zeta_{b-1}\) and \(\zeta_{b+1}\), similar to the implementations in librosa [41] and PyTorch [42]. #### 2.2.2 Bark scale The Bark scale [43] "relates acoustical frequency to perceptual frequency resolution, in which one Bark covers one critical bandwidth [40, p.128]". Also known as the _critical band rate_, the Bark scale is constructed from the bandwidth of measured frequency groups [39]. Unlike the mel scale, the Bark scale is more concerned with the widths of the critical bands than the center frequencies themselves. In this work, we use the approximation [44] given by \[z_{\text{bark}}(f)=6\sinh^{-1}\left(f/600\right). \tag{6}\] For the Bark scale, we experimented with two filterbanks. One is a Bark filterbank implementation provided by Spafe [45], and another is a simple triangular filterbank similar to the mel and ERB scales. The former will be referred to as the "Bark" bands, and the latter as "TriBark". #### 2.2.3 Equivalent Rectangular Bandwidth Scale The equivalent rectangular bandwidth (ERB) was designed with a similar motivation to the Bark scale. The ERB is an approximation of the bandwidth of the human auditory filter at a given frequency. The ERB scale is a related scale that computes the number of ERBs below a certain frequency. The ERB scale can be modeled as [46] \[z_{\text{erb}}(f)=\ln\left(1+4.37\times 10^{-3}f\right)/(24.7\cdot 4.37\times 1 0^{-3}). \tag{7}\] The filterbank is computed similarly to that of the mel scale. #### 2.2.4 12-TET Western Musical Scale The 12-TET scale is the most common form of Western musical scale used today. Using a reference frequency of \(f_{\text{ref}}=440\,\mathrm{Hz}\), the unrounded MIDI note number of a Figure 2: Frequency ranges of each band, by band type, for a 64-band setup with a sampling rate of 44.1 kHz and an FFT size of 2048 samples. particular pitch can be represented by \[\tilde{z}_{\text{mus}}(f)=69+12\log_{2}\left(f/f_{\text{ref}}\right). \tag{8}\] Crucially, scaling a frequency by a factor of \(k\), always lead to a constant change in this scale by \(12\log_{2}k\), i.e., \[\tilde{z}_{\text{mus}}(kf)=\tilde{z}_{\text{mus}}(f)+12\log_{2}k. \tag{9}\] This ensures that the \(k\)th harmonic of a sound is always \(12\log_{2}k\) note numbers away from its fundamental, regardless of the fundamental pitch -- a property that mel, ERB, and Bark scales do not enjoy. 
### Bandwise Feature Embedding After splitting, each subband \(\mathbf{X}_{b}\) is viewed as a real-valued tensor in \(\mathbb{R}^{2CF_{b}\times T}\) by collapsing the channel and frequency axes and then concatenating its real and imaginary parts. As with BSRNN [6, Fig. 1b], each band is passed through a layer normalization and an affine transformation with \(D=128\) output units along the pseudo-frequency axis. The feature embedding process is denoted by \(\mathcal{P}_{b}\colon\mathbb{C}^{C\times F_{b}\times T}\mapsto\mathbb{R}^{D\times T}\). The bandwise feature tensors are then stacked to obtain the full-band feature tensor \(\mathbf{V}\in\mathbb{R}^{D\times B\times T}\) such that \(\mathbf{V}[:,b,:]=\mathcal{P}_{b}(\mathbf{X}_{b})\), \(\forall b\in[\![0,B]\!)\). Except for the Bark model, the feature embedding module accounts for approximately 600 k parameters in a 64-band setup. ### Time Frequency Modeling As with BSRNN [6, Fig. 1c], the feature tensor \(\mathbf{V}\) is passed through a series of residual recurrent neural networks (RNNs) with affine projections, alternating their operation between the time and band axes. In this work, we reduced the number of residual RNN pairs from 12 to 8 and also opted to use Gated Recurrent Units (GRUs) instead of Long Short-Term Memory (LSTM) units as the RNN backbone. As with [6], each RNN has \(2D\) hidden units. The overall operation of this module is represented by the transformation \(\mathcal{R}\colon\mathbb{R}^{D\times B\times T}\mapsto\mathbb{R}^{D\times B\times T}\) to obtain the output \(\mathbf{\Lambda}=\mathcal{R}\left(\mathbf{V}\right)\in\mathbb{R}^{D\times B\times T}\). TF modeling with 8 residual GRU pairs accounts for 10.5 M trainable parameters3. Footnote 3: Due to the computational complexity of backpropagation through time with long sequences, we experimented with replacing the RNNs with transformer encoders or convolutional layers. With similar numbers of parameters and all else being equal, these were not able to match the performance of an RNN-based module.
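A compact sketch of the bandsplitting and embedding stages of the two preceding subsections (layer sizes follow the text; the tensor layout and naming are our own simplifications):

```python
import torch
import torch.nn as nn

class BandsplitEmbed(nn.Module):
    """P_b per band: flatten a complex subband to 2*C*F_b reals per frame,
    then apply LayerNorm + affine projection to D features."""
    def __init__(self, bands, n_channels: int = 1, d: int = 128):
        super().__init__()
        self.bands = [list(b) for b in bands]             # bin indices per band
        widths = [2 * n_channels * len(b) for b in self.bands]
        self.norms = nn.ModuleList(nn.LayerNorm(w) for w in widths)
        self.fcs = nn.ModuleList(nn.Linear(w, d) for w in widths)

    def forward(self, X: torch.Tensor) -> torch.Tensor:  # X: complex, (C, F, T)
        feats = []
        for bins, norm, fc in zip(self.bands, self.norms, self.fcs):
            Xb = X[:, bins, :]                            # (C, F_b, T)
            z = torch.cat([Xb.real, Xb.imag], dim=0)      # (2C, F_b, T)
            z = z.flatten(0, 1).transpose(0, 1)           # (T, 2C*F_b)
            feats.append(fc(norm(z)))                     # (T, D)
        return torch.stack(feats, dim=1)                  # V laid out as (T, B, D)
```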
### Overlapping Mask Estimation and Recombination At this stage, the shared feature \(\mathbf{\Lambda}\) is passed to a separate mask estimator for each stem. The internal implementation of the mask estimation module is identical to that of the original BSRNN. The overall operation of this module is represented by \(\mathcal{Q}_{b,\text{re}}^{(i)},\mathcal{Q}_{b,\text{im}}^{(i)}\colon\mathbb{R}^{D\times B\times T}\mapsto\mathbb{R}^{C\times F_{b}\times T}\) to obtain the bandwise mask \[\mathbf{M}_{b}^{(i)}=\mathcal{Q}_{b,\text{re}}^{(i)}(\mathbf{\Lambda}_{b})+\jmath\mathcal{Q}_{b,\text{im}}^{(i)}(\mathbf{\Lambda}_{b})\in\mathbb{C}^{C\times F_{b}\times T}. \tag{11}\] With overlapping bands, however, the full-band mask can no longer be trivially obtained by stacking. We used weighted recombination to obtain \(\mathbf{M}^{(i)}\in\mathbb{C}^{C\times F\times T}\), such that \[\mathbf{M}^{(i)}[c,f,t]=\sum_{b}\mathbf{W}_{b}[f]\cdot\mathbf{M}_{b}^{(i)}[c,f-\min\mathfrak{F}_{b},t] \tag{12}\] A simplified illustration with two bands is shown in Fig. 3. Note that while \(\mathbf{W}_{b}\) is used as the recombination weight, it is also possible to not use any weight, as \(\mathbf{W}_{b}\) or more appropriate weights can be learned by the model and absorbed into \(\mathbf{M}_{b}^{(i)}\). In other words, the role of \(\mathbf{W}_{b}\) in the mask estimation module is more of an initialization than a fixed parameter. Except for the Bark model, whose very wide bands entail a higher number of parameters, the mask estimation module accounts for roughly 25 M parameters in a 64-band setup.4

Figure 3: **A simplified illustration of overlapping mask recombination.**

Footnote 4: We have also attempted a combination of multiplicative and additive masks in this work. However, we found that the inclusion of the additive mask did not lead to any appreciable improvement. We hypothesize that the channel capacity of the model is simply insufficient to reconstruct a sufficiently good full-resolution additive spectrogram, as a non-zero additive term will only lead to more artifacts.
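The weighted recombination of Eq. (12) amounts to a bandwise overlap-add; a sketch (with `bands` holding the bin-index lists from Eq. (3) and `weights` the matching per-bin filterbank values as tensors; names are ours):

```python
import torch

def recombine_masks(band_masks, bands, weights, n_bins):
    """Eq. (12): accumulate bandwise complex masks M_b into a full-band mask,
    scaled by the normalized filterbank weights W_b."""
    C, T = band_masks[0].shape[0], band_masks[0].shape[-1]
    M = torch.zeros(C, n_bins, T, dtype=band_masks[0].dtype)
    for Mb, bins, wb in zip(band_masks, bands, weights):
        M[:, list(bins), :] += wb.view(1, -1, 1) * Mb   # wb: (F_b,) per-bin weights
    return M
```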
### Loss function We initially experimented with the loss function originally used in [6, 47], whose stem-wise contribution is given by \[\mathcal{L}_{p}^{(i)}=\|\hat{\mathbf{s}}_{i}-\mathbf{s}_{i}\|_{p}+\|\Re[\hat{\mathbf{S}}_{i}-\mathbf{S}_{i}]\|_{p}+\|\Im[\hat{\mathbf{S}}_{i}-\mathbf{S}_{i}]\|_{p}, \tag{13}\] with \(p=1\). While calculating the loss for the real and imaginary parts separately may seem like a somewhat inelegant approximation, there is a desirable gradient behavior that justifies doing so over calculating a norm of complex differences. Consider \(\mathbf{y}=\mathbf{u}+\jmath\mathbf{v}\) and \(\hat{\mathbf{y}}=\hat{\mathbf{u}}+\jmath\hat{\mathbf{v}}\). The gradient of the 1-norm of a complex difference vector gives \[\partial\|\hat{\mathbf{y}}-\mathbf{y}\|_{1}=\sum\nolimits_{i}\frac{(\hat{u}_{i}-u_{i})\partial\hat{u}_{i}+(\hat{v}_{i}-v_{i})\partial\hat{v}_{i}}{\sqrt{(\hat{u}_{i}-u_{i})^{2}+(\hat{v}_{i}-v_{i})^{2}}}. \tag{14}\] This indicates that the gradient \(\partial\hat{u}_{i}\) will be scaled down if the error on \(\hat{v}_{i}\) is high, and vice versa, diluting the sparseness-encouraging property of a \(1\)-norm. On the other hand, treating the real and imaginary parts separately yields \[\partial\left(\|\hat{\mathbf{u}}-\mathbf{u}\|_{1}+\|\hat{\mathbf{v}}-\mathbf{v}\|_{1}\right)=\sum\nolimits_{i}\mathrm{sgn}(\hat{u}_{i}-u_{i})\partial\hat{u}_{i}+\mathrm{sgn}(\hat{v}_{i}-v_{i})\partial\hat{v}_{i}, \tag{15}\] which enjoys the same sparsity benefit of a \(1\)-norm for real-valued differences. Both acoustically and perceptually, however, the magnitudes of both the time-domain signal and the STFT follow a logarithmic scale. Each of the stems can also have a very different energy, due to foreground (e.g., dialogue) sources conventionally being mixed louder than background (e.g., music and effects) sources. Inspired by the success of the negative signal-to-noise ratio (SNR) as a loss function, we experimented with a generalization to a \(p\)-norm that tackles both of these issues, i.e., \[\mathcal{D}_{p}(\hat{\mathbf{y}};\mathbf{y})=10\log_{10}\left[(\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p}+\epsilon)/(\|\mathbf{y}\|_{p}^{p}+\epsilon)\right], \tag{16}\] where \(\epsilon\) is a stabilizing constant, setting the minimum of the distance to \(-10\log_{10}(\epsilon^{-1}\|\mathbf{y}\|_{p}^{p}+1)\), which is numerically stable for \(\epsilon\not\ll\|\mathbf{y}\|_{p}^{p}\). In this work, we set \(\epsilon=10^{-3}\). Analyzing the differential of \(\mathcal{D}_{p}\) gives \[\partial\mathcal{D}_{p}=\log_{10}(e^{10})\cdot(\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p}+\epsilon)^{-1}\cdot\partial\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p}, \tag{17}\] which allows the model to take smaller updates when it is less confident, and larger updates once it is more confident. Gradient explosion is prevented by \(\epsilon\), since the magnitude of the gradients cannot rapidly increase once \(\|\hat{\mathbf{y}}-\mathbf{y}\|_{p}^{p}\ll\epsilon\). Note also the importance of \(p\) on the differential, since \[\partial\mathcal{D}_{1}(\hat{\mathbf{y}};\mathbf{y})=\frac{\log_{10}(e^{10})}{\|\hat{\mathbf{y}}-\mathbf{y}\|_{1}+\epsilon}\sum_{i}\mathrm{sgn}(\hat{y}_{i}-y_{i})\cdot\partial\hat{y}_{i}, \tag{18}\] \[\partial\mathcal{D}_{2}(\hat{\mathbf{y}};\mathbf{y})=\frac{2\log_{10}(e^{10})}{\|\hat{\mathbf{y}}-\mathbf{y}\|_{2}^{2}+\epsilon}\sum_{i}(\hat{y}_{i}-y_{i})\cdot\partial\hat{y}_{i}. \tag{19}\] While both differentials are globally modulated by the inverse norm of the error, \(\partial\mathcal{D}_{2}\) is more prone to outliers in the early stage of training and to the vanishing-gradient problem in the later stage, due to the elementwise multiplier of \(\partial\hat{y}_{i}\) being dependent on the elementwise error magnitude. On the other hand, the elementwise multiplier in \(\partial\mathcal{D}_{1}\) depends only on the sign of the error and thus does not suffer from either problem. Combining \(\mathcal{D}_{1}\) with the original loss function gives \[\mathcal{L}_{\text{proposed}}=\mathcal{D}_{1}(\hat{\mathbf{s}};\mathbf{s})+\mathcal{D}_{1}(\Re\hat{\mathbf{S}};\Re\mathbf{S})+\mathcal{D}_{1}(\Im\hat{\mathbf{S}};\Im\mathbf{S}), \tag{20}\] which we will refer to as the proposed "L1SNR" loss. In practice, care must be taken to ensure that the DFT used in the STFT is normalized such that all loss terms are on a similar scale, or appropriate weightings should be used.
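A sketch of the proposed L1SNR loss (Eqs. (16) and (20)); the STFT parameters are placeholders, and `normalized=True` implements the DFT normalization advised above:

```python
import torch

def d1(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Eq. (16) with p = 1."""
    num = torch.sum(torch.abs(est - ref)) + eps
    den = torch.sum(torch.abs(ref)) + eps
    return 10.0 * torch.log10(num / den)

def l1snr_loss(s_hat: torch.Tensor, s: torch.Tensor,
               n_fft: int = 2048, hop: int = 512) -> torch.Tensor:
    """Eq. (20): time-domain term plus separate real/imag STFT terms."""
    window = torch.hann_window(n_fft)
    S_hat = torch.stft(s_hat, n_fft, hop, window=window,
                       normalized=True, return_complex=True)
    S = torch.stft(s, n_fft, hop, window=window,
                   normalized=True, return_complex=True)
    return d1(s_hat, s) + d1(S_hat.real, S.real) + d1(S_hat.imag, S.imag)
```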
## IV Experimental Setup ### Dataset Most of the experiments in this work focus on the Divide and Remaster (DnR) dataset [3]. The DnR dataset is a three-stem dataset consisting of the dialogue, music, and effects stems. Each track is 60 s long, single-channel, and provided at two sample rates, 16 kHz and 44.1 kHz. In this work, we only focus on the high-fidelity sample rate. The dialogue data were obtained from LibriVox, an English-only collection of audiobook readings. Music data were taken from the Free Music Archive (FMA). Foreground and background effects data were taken from FSD50k. As mentioned in CDX [5], the dialogue data are not as diverse as real motion-picture audio, due to the lack of emotional and linguistic diversity. Dialogue data diversity is particularly an issue when seeking high-fidelity speech sampled at 44.1 kHz and above; our own initial attempt to augment the DnR dataset with more languages and emotions required unexpectedly significant effort and was deferred to future work. ### Chunking Since each track of the DnR dataset is relatively long, the tracks were chunked during training and inference. During training, random 6 s chunks of the tracks are drawn on the fly. During validation, chunks were drawn exhaustively with a length of 6 s and a hop size of 1 s. During testing, we chunk the full signal into 6 s chunks with a hop size of 0.5 s. Inference is performed independently on each chunk before the chunks are recombined with Hann-windowed overlap-add. The 6 s chunk size was originally chosen for compatibility with the original BSRNN implementation. It was also the largest chunk size we could fit into an NVIDIA A10G GPU with a per-GPU batch size of at least two, as a per-GPU batch size of one caused significant instability during backpropagation.
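The chunked inference described above can be sketched as follows (our simplification; `model` stands in for the full separator and is assumed to accept a shorter final chunk):

```python
import torch

def chunked_inference(model, x, fs=44100, chunk_s=6.0, hop_s=0.5):
    """Hann-windowed overlap-add over 6 s chunks with a 0.5 s hop.
    x: (..., n_samples); edge handling is simplified for brevity."""
    L, H = int(chunk_s * fs), int(hop_s * fs)
    n = x.shape[-1]
    win = torch.hann_window(L)
    out, norm = torch.zeros_like(x), torch.zeros(n)
    for start in range(0, max(n - L, 0) + 1, H):
        seg = x[..., start:start + L]
        y = model(seg) * win[: seg.shape[-1]]
        out[..., start:start + seg.shape[-1]] += y
        norm[start:start + seg.shape[-1]] += win[: seg.shape[-1]]
    return out / norm.clamp_min(1e-8)
```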
### Training

Unless otherwise stated, all models were trained using an Adam optimizer for 100 epochs. The learning rate was initialized to \(10^{-3}\) with a decay factor of 0.98 every two epochs. Norm-based gradient clipping was additionally enabled with a threshold of 5. Each training epoch was set to 20 k samples regardless of the dataset size. As additional points of comparison, we trained our adaptations of Hybrid Demucs [23] and Open-Unmix (umxhq-like) [17] for the 3-stem problem; the loss function for each model follows that of the respective original paper, while the data processing is identical to that of our proposed method. The BandIt, BSRNN, and Demucs models were trained on a g5.48xlarge Amazon EC2 instance with 8 NVIDIA A10G GPUs (24 GB each). Training was done with PyTorch Lightning using a distributed data-parallel strategy with a batch size of 2 per GPU. The Open-Unmix model was trained on a g4dn.4xlarge Amazon EC2 instance with a single NVIDIA T4 GPU (16 GB) with a batch size of 16. The BandIt models each took roughly 1.5 days to complete 100 epochs of training.

### Metrics

In this work, we report the signal-to-noise ratio (SNR) and the scale-invariant SNR (SI-SNR) [2]. Note that the commonly reported signal-to-distortion ratio (SDR) and its scale-invariant counterpart (SI-SDR) are mathematically identical to SNR and SI-SNR, respectively, when the appropriate version of SDR is used [2]. To avoid ambiguity, we simply report the "SNR" and the "SI-SNR".

## V Results and Discussion

The main experimental results (Sections V-A through V-D) are presented in Table 1. In addition to our proposed method, we trained and evaluated our own baselines with Open-Unmix [17] and Hybrid Demucs (a.k.a. Demucs v3) [23] on DnR. Results for the MRX and MRX-C models are reproduced as-is from [4] and are marked with \(\triangle\). We also provide oracle results based on the mixture, the ideal ratio mask, and the phase-sensitive filter [48].

### _Reducing Time-Frequency Modeling Complexity_

The first modification made to the original BSRNN (BSRNN-LSTM12) was to reduce the complexity of the time-frequency modeling module. Switching from LSTM to GRU and cutting the stack size down from 12 pairs to 8 pairs (BSRNN-GRU8) showed nearly no change in average performance: while the GRU-based model performed slightly worse on dialogue, it performed better on effects than the LSTM-based model. This switch allowed us to cut the parameter count by almost 40 %, while also reducing the considerable memory footprint during backpropagation. For this experiment, we used the Vocals V7 band definition from the original paper, which was used for both the "vocals" and "other" stems in MUSDB18, making it the most appropriate multi-purpose band definition for this analysis.

### _Common Encoder_

The next modification was to merge the encoder section, that is, all modules up to and including the TF modeling module, into a single system shared by all stems. This further cut the parameters down by 45 % relative to BSRNN-GRU8. Again, the performance of this common-encoder model (BandIt) remains very similar to either BSRNN system on average. More interestingly, performance on the effects stem increased by about 1 dB compared to BSRNN-LSTM12, accompanied by a drop of about 1 dB on the dialogue stem. This seems to indicate a slight competition between the three stems when dynamically allocating information to the shared embedding. Qualitatively, however, speech is known to be easier to detect and semantically segment than effects, as it is less acoustically diverse and more bandlimited on average. As such, since the speech performance at around 13 dB is closer to the oracle performance, we consider the improvement on the effects stem to be of higher importance.

### _Loss Function_

The next experiment concerns choosing the most appropriate loss function for the system. We experimented with four loss functions: the L1 loss, the mean squared error (MSE) loss, the proposed L1SNR loss, and the 2-norm ablation (L2SNR) of L1SNR. All loss functions were applied in the time domain and to the real and imaginary parts of the spectrogram, as in (20). Note that the distance function used in L2SNR is practically identical to the commonly used negative SNR loss. Training on the L1SNR loss achieved the highest performance, at least 0.7 dB above the L1 and L2SNR losses across all stems; the latter two performed similarly across all stems. The MSE loss performed worst, as expected, given that it has the weakest sparsity-encouraging property of the four losses. The performance ordering corroborates our analyses in Section III-F, but more thorough experiments in a separate work will be needed to fully verify our hypothesis.

### _Band Definitions_

We look into the five proposed overlapping-band definitions. For each band type, we experimented with 48-band and 64-band variants. The 48-band variant has a larger input bandwidth per band but fewer neurons provisioned per unit of linear frequency. Overall, the 64-band version consistently outperformed the corresponding 48-band counterpart of the same band type. The Mel, TriBark, and ERB models tend to perform similarly, which is not too surprising given the similarity of both their nonlinear frequency transforms and their filterbanks (see also Fig. 2). In the 64-band setting, all band types performed better than the ideal ratio mask on the dialogue stem. In both the 48- and 64-band settings, the musical band definition performed best. We hypothesize that this is because its underlying musical scale contains significantly more nonlinear-frequency units in the lower linear-frequency region than the other three scales, so more channel capacity is provisioned to the information-dense lower linear-frequency region.
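The band definitions themselves are specified earlier in the paper; purely for illustration, here is a sketch of how overlapping band edges can be derived from a mel-style scale. The one-band overlap rule below is our simplification, not the exact BandIt construction.

```python
import numpy as np

def hz_to_mel(f):
    """O'Shaughnessy mel scale (one common variant)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_bins(n_bands=64, fs=44100, n_fft=2048):
    """FFT-bin index ranges for bands equally spaced on the mel scale;
    the actual bands overlap, mimicked here by a one-band extension."""
    edges_hz = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2), n_bands + 1))
    edges_bin = np.round(edges_hz / fs * n_fft).astype(int)
    bands = []
    for i in range(n_bands):
        lo = edges_bin[max(i - 1, 0)]           # extend one edge down
        hi = edges_bin[min(i + 2, n_bands)]     # and one edge up
        bands.append((lo, hi))
    return bands
```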
For the best model at 100 epochs (Music 64), we let the model continue to train until the validation loss stopped improving for 20 epochs. This occurred at epoch 278, with a total training time of about 4.3 days. Per-epoch improvements after the first 100 epochs were very small, but accumulated to about 0.5 dB improvement across all stems after the additional 178 epochs. The performance of this model (BandIt+) is also shown in Table 1.

### _Generalizability_

We additionally tested the generalizability of the feature map learned by the encoder. This is done by freezing the encoder of the BandIt model with 64 musical bands and attaching a new, randomly initialized decoder for an output stem that was not directly learned in the original 3-stem training. We first tested generalizability on the "easier" task of obtaining the music-and-effects stem. Summing the original music and effects stem outputs gives an SNR of 13.9 dB and an SI-SNR of 13.7 dB. Training a new decoder for the composite stem achieves a slightly better output at 14.1 dB SNR and 13.9 dB SI-SNR.

Next, we trained new decoders on completely unseen music data from MUSDB18-HQ [38]. Note that MUSDB18 provides stereo data and the encoder was only trained on mono signals, so each channel of the music data was passed through the encoder independently. Despite the encoder only being trained to separate music as a whole, without regard to its constituent instrumentals, the representations from the frozen encoder were sufficient to train decoders on par in performance with Open-Unmix, as shown in Table 2.

Footnote 5: The use of MUSDB18 here is strictly for the demonstration of model generalizability, and it will not be used commercially.

### Computational Complexity

While the BandIt models have achieved state-of-the-art performance with lower overall complexity than BSRNN, it is important to note that the inference-time FLOP count of a 64-band BandIt remains significantly higher than that of Hybrid Demucs, despite the latter having a higher parameter count; this is partially due to the RNN-heavy backbone of BandIt. Using 6-second chunk inputs on a machine with an Intel Core i9-11900K CPU and an NVIDIA GeForce RTX 3090 GPU, Demucs processed about 17.0 chunks per second on GPU while BandIt did so at about 8.7 chunks per second. On CPU, Demucs processed about 1.1 chunks per second and BandIt about 0.3 chunks per second. The peak memory usage of BandIt, at about 650 MB, is slightly higher than that of Demucs, at about 550 MB.

## VI Conclusion

In this work, we propose BandIt, a generalization of the Bandsplit RNN to any complete or overcomplete partition of the frequency axis. By also introducing a shared encoder, a 1-norm SNR-like loss function, and psychoacoustically motivated band definitions, BandIt achieves state-of-the-art performance in CASS with fewer parameters than either the original BSRNN or Hybrid Demucs. Future work includes a more in-depth analysis of the behavior of the proposed loss function, deriving more information-theoretically optimal band definitions, and extending the work to more realistic audio data with greater emotional, linguistic, and spatial diversity.

## Acknowledgment

The authors would like to thank Jordan Gilman, Kyle Swanson, Mark Vulfson, and Pablo Delgado for their assistance.
TABLE 1: Model performance on the DnR test set. Floating-point operation counts are based on a 6-second input at 44.1 kHz. (\(\triangle\): reproduced from [4]; N/R: not reported.)

| Backbone | Encoder | Bands | Loss | Params. | GFlops | Dial. SNR | Dial. SI-SNR | Music SNR | Music SI-SNR | Eff. SNR | Eff. SI-SNR | Avg. SNR | Avg. SI-SNR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BSRNN-LSTM12 | Separate | Vocals V7 | T+RTTF L1 | 77.4M | 1386.5 | 14.2 | 14.0 | 6.3 | 5.2 | 7.0 | 5.9 | 9.2 | 8.4 |
| BSRNN-GRU8 | Separate | Vocals V7 | T+RTTF L1 | 47.4M | 714.5 | 14.0 | 13.9 | 6.4 | 5.2 | 7.2 | 6.2 | 9.2 | 8.4 |
| BandIt | Shared | Vocals V7 | T+RTTF L1 | 25.7M | 243.2 | 13.3 | 13.0 | 6.4 | 5.3 | 7.8 | 6.9 | 9.2 | 8.4 |
| BandIt | Shared | Vocals V7 | T+RTTF MSE | 25.7M | 243.2 | 12.5 | 12.2 | 5.5 | 4.1 | 7.0 | 6.0 | 8.3 | 7.4 |
| BandIt | Shared | Vocals V7 | T+RTTF L1SNR | 25.7M | 243.2 | 14.2 | 14.0 | 7.2 | 6.3 | 8.5 | 7.8 | 10.0 | 9.4 |
| BandIt | Shared | Vocals V7 | T+RTTF L2SNR | 25.7M | 243.2 | 13.5 | 13.3 | 6.5 | 5.4 | 7.9 | 7.1 | 9.3 | 8.6 |
| BandIt | Shared | Bark 48 | T+RTTF L1SNR | 64.5M | 290.6 | 14.1 | 14.0 | 7.3 | 6.3 | 8.6 | 7.8 | 10.0 | 9.4 |
| BandIt | Shared | Mel 48 | T+RTTF L1SNR | 32.8M | 274.3 | 14.5 | 14.3 | 7.5 | 6.6 | 8.8 | 8.1 | 10.3 | 9.7 |
| BandIt | Shared | TriBark 48 | T+RTTF L1SNR | 32.7M | 274.2 | 14.6 | 14.5 | 7.6 | 6.7 | 8.9 | 8.2 | 10.4 | 9.8 |
| BandIt | Shared | ERB 48 | T+RTTF L1SNR | 32.6M | 274.2 | 14.6 | 14.4 | 7.7 | 6.8 | 8.9 | 8.5 | 10.4 | 9.8 |
| BandIt | Shared | Music 48 | T+RTTF L1SNR | 33.5M | 274.7 | 14.8 | 14.6 | 7.9 | 7.1 | 9.2 | 8.5 | 10.6 | 10.1 |
| BandIt | Shared | Mel 64 | T+RTTF L1SNR | 36.1M | 363.6 | 14.8 | 14.7 | 7.9 | 7.1 | 9.1 | 8.5 | 10.6 | 10.1 |
| BandIt | Shared | TriBark 64 | T+RTTF L1SNR | 36.0M | 363.5 | 15.0 | **14.9** | 8.0 | 7.2 | 9.2 | 8.6 | 10.8 | 10.2 |
| BandIt | Shared | Bark 64 | T+RTTF L1SNR | 82.6M | 387.6 | 15.0 | **14.9** | 8.1 | 7.3 | **9.3** | 8.6 | 10.6 | **10.3** |
| BandIt | Shared | Music 64 | T+RTTF L1SNR | 37.0M | 364.1 | **15.1** | **14.9** | **8.2** | **7.4** | **9.3** | **8.7** | **10.9** | **10.3** |
| BandIt+ | Shared | Music 64 | T+RTTF L1SNR | 37.0M | 364.1 | 15.7 | 15.6 | 8.7 | 8.0 | 9.8 | 8.2 | 11.4 | 10.9 |
| Open-Unmix (umxhq) | — | — | TF Mag. MSE | 22.1M | 5.7 | 11.6 | 11.3 | 4.9 | 3.2 | 5.8 | 4.4 | 7.4 | 6.3 |
| MRX\(^{\triangle}\) | — | — | Time SI-SDR | N/R | N/R | — | 12.3 | — | 4.2 | — | 5.7 | — | 7.4 |
| MRX-C\(^{\triangle}\) | — | — | Time SI-SDR | N/R | N/R | — | 12.6 | — | 4.6 | — | 6.1 | — | 7.8 |
| Hybrid Demucs (v3) | — | — | Time L1 | 83.6M | 85.0 | 13.6 | 13.4 | 6.0 | 4.7 | 7.2 | 6.1 | 8.9 | 8.1 |
| _Mixture_ | — | — | — | — | — | 1.0 | 1.0 | -6.8 | -6.8 | -5.0 | -5.0 | -3.6 | -3.6 |
| _Ideal Ratio Mask_ | — | — | — | — | — | 14.4 | 14.6 | 9.0 | 8.4 | 11.0 | 10.7 | 11.5 | 11.2 |
| _Phase Sensitive Filter_ | — | — | — | — | — | 18.5 | 18.4 | 12.9 | 12.7 | 15.0 | 14.8 | 15.4 | 15.3 |

TABLE 2: SNR (dB) performance on the MUSDB18-HQ test set.

| Model | Vocals | Drums | Bass | Other | Average |
| --- | --- | --- | --- | --- | --- |
| BandIt (Music 64, frozen enc.) | 5.5 | **6.4** | **4.4** | **3.6** | **5.0** |
| Open-Unmix (umxhq) | **6.0** | 5.6 | **4.4** | 3.4 | 4.9 |
2304.09490
Neural Network Quantisation for Faster Homomorphic Encryption
Homomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacypreserving neural network inference. One disadvantage of this technique is that it is several orders of magnitudes slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g. 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we could reduce the execution time of an MNIST neural network by 80% and by 40% for a CIFAR neural network.
Wouter Legiest, Jan-Pieter D'Anvers, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede
2023-04-19T08:22:28Z
http://arxiv.org/abs/2304.09490v2
# Neural Network Quantisation for Faster Homomorphic Encryption

###### Abstract

Homomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g. \(32\,\mathrm{bit}\)) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al. [1], we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we could reduce the execution time of an MNIST neural network by 80% and that of a CIFAR neural network by 40%.

_Keywords:_ convolutional neural networks, quantisation, privacy-preserving machine learning, fully homomorphic encryption

## I Introduction

Homomorphic encryption (HE) allows performing calculations on encrypted data. This technique enables applications where data is processed in untrusted environments (e.g. a cloud environment) while ensuring that this environment does not learn anything about the data itself. As such, it is a promising technique for making privacy-preserving machine learning possible. A downside of HE is that it significantly increases the size of encrypted data. As a result, encrypted operations are typically several orders of magnitude slower than their unencrypted counterparts. This work aims to accelerate neural network inference under homomorphic encryption by using quantisation techniques to reduce the data size and, thus, the computational cost.

Neural network frameworks generally use a floating-point representation for network parameters and intermediate variables. However, HE systems like BFV [2] encode only integers, requiring an additional step to convert the floating-point neural network parameters to the integer HE variables. While it is possible to design neural networks that work solely with integer representations, previous works have only studied such networks in a non-HE-related context [3, 4, 5]. In addition, this conversion is an essential step before porting such networks to hardware. For instance, a plaintext \(32\,\mathrm{bit}\) floating-point addition is \(30\times\) more energy-consuming1 than an \(8\,\mathrm{bit}\) integer equivalent [6]. Through this conversion, we can select smaller HE parameters that lead to limited resource use and better handling of corner cases, making the behaviour of the system faster and more predictable in general.

Footnote 1: Energy consumption using a \(45\,\mathrm{nm}\) CMOS technology.

As calculations in these non-HE integer-only networks are performed, the sizes of the integer variables increase. To keep these networks manageable, the intermediate values are commonly scaled down to a smaller number after each layer: the most significant bits are kept after each operation, while the least significant bits are discarded.
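To make the role of this rescaling concrete, here is a small NumPy sketch of one integer layer followed by the usual shift-based downscaling; the function and parameter names are ours, chosen for illustration.

```python
import numpy as np

def integer_layer_with_rescale(x_q, w_q, shift):
    """One integer matrix product followed by the usual plaintext
    rescaling step: keep the most significant bits, drop the rest."""
    acc = x_q.astype(np.int64) @ w_q.astype(np.int64)  # bit widths grow here
    return acc >> shift  # this right shift is exactly what HE lacks
```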
Unfortunately, these reduction operations are based on division or shift operations, which are not natively supported in HE schemes, so downscaling cannot easily be performed. Therefore, in neural network HE inference, the intermediate values grow throughout the inference and the final calculations need to operate on very large integers. For instance, when all of the weights of a neural network are converted to \(32\,\mathrm{bit}\), a 10-layer CIFAR network will produce integers with bit-sizes up to \(614\,\mathrm{bit}\). The maximum bit-length of these output integers will be denoted as the _'final integer width'_ (FIW), and we will show that this value significantly affects the overall computation cost.

Gilad-Bachrach et al. [7] implemented the first artificial feedforward neural network under homomorphic encryption using the HE scheme YASHE [8]. Note that an attack proposed by Albrecht et al. [9] reduced the security level of this scheme, which is therefore considered broken in practice. Gilad-Bachrach et al. [7] proposed a specialised, HE-focussed _CryptoNets_ architecture for the MNIST dataset [10]. One of the downsides of the CPU implementations of CryptoNets is the high latency of \(250\,\mathrm{sec}\) for an MNIST image. This was improved by Brutzkus et al. [11] with the Low-Latency CryptoNets (LoLa) architecture. Using the BFV scheme, optimisations in the underlying HE library SEAL, and a different approach to representing the ciphertext data, a latency of \(0.29\,\mathrm{sec}\) was reached, an improvement of \(93\times\) relative to CryptoNets. In addition, Brutzkus et al. [11] propose variants of the LoLa network for processing the CIFAR-10 dataset [12], reporting an accuracy of 74.1% and a latency of \(730\,\mathrm{sec}\). Badawi et al. [1] implemented the BFV scheme on GPUs. They propose two architectures, a smaller one for MNIST and a more extensive one for CIFAR; their CIFAR network achieves an accuracy of 77.55% and a latency of \(304.43\,\mathrm{sec}\).

In this work, we improve upon the state-of-the-art HE neural networks by considering advanced neural network quantisation techniques. We first investigate post-training quantisation, the method typically used in the state-of-the-art, and show that there is a limit to how far the intermediate variables can be scaled down without significantly affecting accuracy. We then show that quantisation-aware training can indeed be used to substantially scale down these intermediate variables without a similar accuracy penalty. In the end, we reduced the final integer width by 33% for MNIST and 43% for CIFAR, allowing speedups of 80% and 40%, respectively, over the typical 8-bit post-training quantisation networks used in the state-of-the-art.

## II Preliminaries

### _Homomorphic encryption_

Homomorphic encryption enables performing arithmetic operations on encrypted data. Take the following example: consider an asymmetric encryption system with two integer numbers \(x\) and \(y\). They can be encrypted using the encryption key pk to \(c_{x}\!=\!\mathsf{Enc}(\mathsf{pk},\!x)\) and \(c_{y}\!=\!\mathsf{Enc}(\mathsf{pk},\!y)\). These two ciphertexts are sent to a server that cannot be trusted. The server can perform an operation \(\diamondsuit\) on both ciphertexts, \(c_{xy}\!=\!c_{x}\!\diamondsuit c_{y}\), which is the equivalent of performing an addition on the plaintexts. The result of this operation is then sent back to the user, who can decrypt this message.
Using the decryption key sk, the resulting plaintext message \(z\) is obtained by \(z\!=\!\mathsf{Dec}(\mathsf{sk},\!c_{xy})\), and \(z\) will have the value \(z\!=\!x\!+\!y\). Altogether, the server does not obtain any information about the integers \(x\) and \(y\), since it never possesses the unencrypted data.

A limitation of this form of encryption is that it only allows certain operations, i.e. addition or multiplication of two ciphertexts. Execution of non-linear functions is normally performed using a polynomial approximation that uses only addition, subtraction, and multiplication. Moreover, a division in the HE schemes CKKS and BFV is theoretically possible, but it is costly and thus avoided in practice [13]. The biggest problem with the lack of a division operation is that variables grow during computation. For example, when multiplying two \(8\,\mathrm{bit}\) integers, the result becomes roughly \(16\,\mathrm{bit}\). In unencrypted neural network implementations, this variable can be divided by a power of two to get back to \(8\,\mathrm{bit}\), making it more manageable for the next layer. However, such an operation is not possible in encrypted neural network inference. This leads to large intermediate and output integers; the maximum bit-length of these output integers is denoted as the 'final integer width' (FIW). The HE scheme must be instantiated with larger parameters to accommodate these larger variables, which comes at a significant cost. Once a certain variable size is reached, additional techniques are required to allow such large representations. More specifically, to ensure a correct representation in the plaintext space during inference, a residue numeral system (RNS), following the Chinese remainder theorem, is used to divide the large numbers into several smaller numbers. This leads to several smaller HE instances that can be run in parallel. Since each instance consumes computing resources, decreasing the variable sizes can significantly reduce the number of RNS instances and, thus, the computational cost.

### _Neural network_

A neural network is a machine learning technique consisting of a network of small interconnected computation units called neurons. These neurons can be adapted, which enables the network to 'learn' a specific, human-like task such as the classification of images. A neuron takes a number of inputs, performs a weighted sum over these inputs, and outputs a function of the result of this sum. Neurons are grouped to form layers, and different behaviour can be obtained depending on their configuration. Since a typical division or non-linear function cannot be executed trivially under FHE, we use a slightly adapted version of the classical neural network layers. A dense or convolutional layer is representable under FHE. However, the activation function is approximated by the square function \(f(x)\!=\!x^{2}\), resulting in a _Square layer_. Moreover, the _scaled average pooling_ layer is replaced by an equivalent where the inputs are summed but the division is omitted.
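As an illustration, minimal PyTorch sketches of these two adapted layers follow (our naming; the actual encrypted inference operates on ciphertexts rather than tensors):

```python
import torch.nn as nn
import torch.nn.functional as F

class Square(nn.Module):
    """Square layer: the polynomial activation f(x) = x^2 that replaces
    the usual non-linear activation function."""
    def forward(self, x):
        return x * x

class SumPool2d(nn.Module):
    """Scaled average pooling without the division: the inputs in each
    window are summed, so only HE-friendly additions are required."""
    def __init__(self, k):
        super().__init__()
        self.k = k
    def forward(self, x):
        # average pooling times the window size equals a plain sum
        return F.avg_pool2d(x, self.k) * (self.k * self.k)
```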
### _Architectures_

This work uses the two architectures developed by Badawi et al. [1] for homomorphic inference. These networks serve as test cases to study the effect of quantisation on homomorphic encryption inference. Both architectures omit the final (Sigmoid) activation function, since it only maps the output to the unit interval. For a detailed description of the networks, we refer the reader to the paper of Badawi et al. [1].

The first architecture focuses on the MNIST dataset [10]. It is based on the HCNN [1] and consists of two convolutional layers, two square activation layers and one dense layer. The authors report an accuracy of 99% for this architecture. The second architecture is designed to classify the more complex CIFAR-10 dataset [12]. This 10-layer network makes particular use of the scaled average pooling and square layers. The originally proposed HCNN architecture was slightly modified in our implementation by not using padding, as using it only reduced accuracy: our floating-point model obtains an accuracy of 73.28%, while the original HCNN reports 77.8%.

## III Post-Training Quantisation (PTQ)

Usually, floating-point numbers with single or double precision are used to represent the weights and biases of a network. However, it is possible to convert these numbers to \(8\,\mathrm{bit}\) integers without a notable reduction in accuracy [14]; a further reduction of the representation can have a more detrimental effect on the accuracy. Converting an existing (floating-point) neural network into a quantised (integer) version is called post-training quantisation (PTQ). During PTQ, a real value \(r\!\in\![\alpha,\beta]\) is converted to a \(b\)-bit integer \(q\). The process is determined by two factors: the zero-point \(Z\) and the scale factor \(S\), using the following formula: \[q\!=\!\left\lfloor\frac{r}{S}\!+\!Z\right\rfloor\!. \tag{1}\] Dequantisation can be done through the formula \(r\!=\!S(q\!-\!Z)\), which converts the quantised value back to its original scale. The scale factor \(S\) determines the quantisation step size. The zero-point \(Z\) is the quantised value \(q\) corresponding to the real value \(r\!=\!0\) and positions the range of representable numbers optimally. When \(Z\!\neq\!0\), the quantisation is called asymmetric or affine. This quantisation explicitly uses the zero point, often set to \(Z\!=\!-\alpha\cdot(2^{b}-1)/(\beta-\alpha)\). A second option is symmetric quantisation, which removes the overhead of dealing with the zero point by setting it to zero. Commonly, the values are mapped to a signed symmetric interval \([\alpha,\beta]\!=\![-2^{b-1},\!2^{b-1}\!-\!1]\), although an unsigned interval is also possible. Symmetric quantisation is a more limited but easier-to-handle quantisation technique.

To evaluate the effect of quantisation, we first determined the distribution of the parameters of both the MNIST and CIFAR networks, plotted in Figure 1. Since both networks possess a symmetric distribution, the mean of the values is zero, and thus symmetric quantisation is the best candidate to convert the signed real numbers for both networks. To determine the ideal scale factor for the weights, three candidates are tested. The first scale factor, \(S\!=\!1/(2^{b-1}\!-\!1)\), only considers the bit width; no account is taken of the size or distribution of the real values. The second scale factor, \(S\!=\!\max(|\mathbf{W}|)/(2^{b-1}\!-\!1)\), maps the largest absolute value onto the edge of the quantised interval, so extreme values outside the quantisation range are clipped to the edges of the interval. The third scale factor, \(S\!=\!(\beta-\alpha)/(2^{b}-1)\), is based on the full dynamic range \([\alpha,\beta]\) of the weights (cf. Table I). To understand the influence of these different scale factors on the network, we built a Python framework that evaluates the effect of post-training quantisation on the accuracy and the FIW.
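A sketch of Eq. (1) with the symmetric signed interval follows; we round to nearest here, although the floor brackets in Eq. (1) could also be read as plain flooring.

```python
import numpy as np

def quantise(r, b, scale, zero_point=0):
    """b-bit quantisation of Eq. (1), mapped onto the signed symmetric
    interval [-2^(b-1), 2^(b-1) - 1]."""
    q = np.round(r / scale + zero_point)
    return np.clip(q, -2 ** (b - 1), 2 ** (b - 1) - 1).astype(np.int64)

def dequantise(q, scale, zero_point=0):
    return scale * (q - zero_point)

# e.g. the second candidate scale factor, for a weight tensor W:
# scale = np.abs(W).max() / (2 ** (b - 1) - 1)
```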
The framework takes a neural network, converts each of the weights to an integer representation and then executes a neural network inference. We process each of the \(10\,000\) images in the test set for these experiments to determine the accuracy and FIW. The maximum over all individual final integer widths and the corresponding accuracies are reported in Table I. Furthermore, we also reduce the sizes of the input coefficients; this way, we can obtain an even lower final integer width. In all experiments, the MNIST data is scaled down from its typical \(8\,\mathrm{bit}\) to \(2\,\mathrm{bit}\). However, since CIFAR images are more complex, the same reduction could lead to unacceptable accuracies; we therefore chose not to reduce the CIFAR dataset. The results in Table I show that both networks can be quantised down to \(8\,\mathrm{bit}\) without an accuracy drop. For lower bit widths, the accuracy starts to drop. One reason is that in these cases many of the weights are quantised to zero, which causes much of the information to 'disappear' and results in a diminished FIW.

## IV Quantisation-aware training (QAT)

In the previous section, we showed that neural networks can be quantised to \(8\,\mathrm{bit}\) integers, but the accuracy suffers under more drastic quantisation. To reduce the bit width of the network further, we can make the network aware of the quantisation during its training. Before training is started, the quantisation technique and parameters are chosen and introduced into the training graph as 'fake quantisation' nodes, which simulate the low-precision behaviour of the quantisation. These nodes quantise a real input using Equation 1 and immediately dequantise it again, thus injecting the error that the quantisation would cause. Depending on the quantisation used, this method can result in a network with approximately the same accuracy as a full-precision network while using low-precision parameters.

Using Brevitas [15], we trained the same networks as in the post-training quantisation experiments of the previous section. Brevitas is a library for developing and training quantisation-aware, hardware-ready networks. We used its weights-only quantisation process, in which a quantisation error is injected exclusively into the weights. The results are shown on the right in Table I. For lower bit widths, the accuracy of QAT is significantly better than in the PTQ case, remaining approximately the same as for the full-precision network. One reason is that quantisation-aware training prevents parameter sparsity and ensures that each parameter is used effectively. Table II compares our QAT networks to earlier works. Notably, the CryptoNets and HCNN implementations use PTQ techniques but no QAT techniques. Using QAT, we can quantise the weights to as low as \(2\,\mathrm{bit}\), giving the network a much lower FIW with minimal to no drop in accuracy. Compared to a full-precision network, i.e. quantising the parameters to \(32\,\mathrm{bit}\) integers, the FIW is reduced by a factor of 8.2 for the MNIST network and a factor of 5 for the CIFAR network. Compared to the numbers presented for HCNN, our smallest networks have a 33% and 43% smaller final integer width for MNIST and CIFAR, respectively, while achieving similar accuracy.
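Brevitas provides such nodes out of the box; purely to illustrate the mechanism, here is a from-scratch sketch of a fake-quantisation node with a straight-through gradient estimator (all names are ours, not the Brevitas API):

```python
import torch

class FakeQuantise(torch.autograd.Function):
    """Quantise-then-dequantise, injecting the b-bit quantisation error
    in the forward pass while letting gradients pass straight through."""
    @staticmethod
    def forward(ctx, w, b):
        qmax = 2 ** (b - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
        return q * scale                 # back to the original scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None            # straight-through estimator

def fake_quantise(w, b=2):
    return FakeQuantise.apply(w, b)
```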
## V Evaluation

In this section, we evaluate our newly developed quantised neural networks by implementing them using the Pyfhel [16] library, a software package that provides Python bindings for Microsoft's SEAL library [17]. An encrypted inference is executed using the integer-based BFV scheme. All of our tests are run using Python 3.9.13, Brevitas 0.7.1 and Pyfhel 3.3.1 (using SEAL 3.7) on an Intel Xeon Silver 4208 CPU.

One of the most compelling optimisations in certain HE schemes was the introduction of batching or packing, as described by Smart and Vercauteren [18]. It provides a way to pack multiple plaintext messages into a single ciphertext as if it were a vector of plaintexts. In our implementation, we use batching to pack each input channel into a single ciphertext: a single ciphertext suffices for a (black-and-white) MNIST image, and three ciphertexts are needed for an (RGB) CIFAR image. Due to batching, we cannot implement a dot-product-based matrix-vector multiplication, since we would need access to the individual elements of a ciphertext. Therefore, rotation-based versions of each neural network layer are implemented based on previous works. Dathathri et al. [19] propose an algorithm to calculate a single convolution kernel on a subset of the input data; we adapted the algorithm further to apply an input kernel to a complete channel simultaneously. As proposed by Juvekar et al. [20], a rotation-based algorithm is used to execute the matrix-vector multiplication for the dense layer. This algorithm performs the multiplication using only vector additions, multiplications and rotations.

Fig. 1: Overview of the weight distribution of the MNIST and CIFAR architectures.

When converting to an almost binary size (\(2\,\mathrm{bit}\)), extra sparsity is introduced, which we exploit to reduce latency. Before encoding a weight during the encrypted inference, we check for a zero vector; if there is one, all associated operations using this vector can be omitted. This results in a speedup of 28% between a \(4\,\mathrm{bit}\) and a \(2\,\mathrm{bit}\) network.

### _Homomorphic encryption parameter selection_

To determine suitable HE parameters, we first analyse the final integer width, which determines whether we need multiple instances. The SEAL library limits the maximum size of the plaintext modulus to \(60\,\mathrm{bit}\) for performance reasons. Due to the outcomes of our QAT experiments, we need to represent larger plaintext spaces and therefore use a residue numeral system (RNS). An overview of the used HE parameters is given in Table III.

### _Results_

We report the sequential times for the various quantisations on the right in Table II. To account for the number of instances, the 'sequential time' is given, corresponding to the total time when each instance is executed sequentially; it reflects the total use of computing resources. For the CIFAR network, the work of Badawi et al. [1] uses ten instances, each with a plaintext size of around \(21\,\mathrm{bit}\). Using the same sizes, our smallest network (\(2\,\mathrm{bit}\)) only requires six instances. The smallest network for the MNIST architecture is 80% faster than the best \(8\,\mathrm{bit}\) PTQ network, owing to the smaller FIW and the exploitation of the additional sparsity of the weights. As for the CIFAR architecture, we obtain a 40% speedup compared to the \(8\,\mathrm{bit}\) PTQ network, which matches the quantisation used by HCNN.
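To make the rotation-based matrix-vector product concrete, here is a plaintext NumPy simulation of the diagonal method in the spirit of Juvekar et al. [20]; it mimics, but is not, the Pyfhel code used in our implementation.

```python
import numpy as np

def rotate(v, k):
    """Cyclic rotation, the only data movement available on a
    batched ciphertext."""
    return np.roll(v, -k)

def matvec_by_rotations(M, v):
    """n x n matrix-vector product using only elementwise multiplies,
    additions and rotations, with v packed in one 'ciphertext'."""
    n = len(v)
    out = np.zeros(n, dtype=M.dtype)
    for i in range(n):
        # i-th generalized diagonal of M
        diag = np.array([M[j, (j + i) % n] for j in range(n)])
        out = out + diag * rotate(v, i)
    return out

M = np.arange(16).reshape(4, 4)
v = np.array([1, 2, 3, 4])
assert np.array_equal(matvec_by_rotations(M, v), M @ v)
```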
## VI Conclusion

The absence of a division operation in some fully homomorphic encryption schemes implies that variables keep growing during computations. In this work, we tested two main quantisation techniques to reduce the size of the internal variables, which in turn affects the computation cost. We first looked at the limitations of post-training quantisation and showed that there is a lower limit to the quantisation (in our case \(8\,\mathrm{bit}\)) before the accuracy drops significantly. To further reduce the variable sizes, we developed a quantisation-aware training framework. We reduced the final integer width by 33% for MNIST and 43% for CIFAR, compared to the state-of-the-art HCNN architecture. In our experiments, quantisation-aware training, which allows reducing the network weights to as little as \(2\,\mathrm{bit}\), yields an 80% and 40% speedup for the MNIST and CIFAR networks, respectively, over typical \(8\,\mathrm{bit}\) weights obtained with post-training quantisation.

TABLE II: Results of the quantisation-aware training and homomorphic inference.

| Dataset | Network | Quantisation | Acc. [%] | FIW [bit] | Seq. time [min] | No. of inst. |
| --- | --- | --- | --- | --- | --- | --- |
| MNIST | CryptoNets [7] | 5-10 bit | 99 | 80 | — | 2 |
| MNIST | HCNN [1] | 4 bit | 99 | 43 | — | 1 |
| MNIST | Our work | 32 bit | 98.6 | 238 | 98.45 | 7 |
| MNIST | Our work | 8 bit | 98.51 | 70 | 41.63 | 3 |
| MNIST | Our work | 4 bit | 98.65 | 45 | 12.2 | 1 |
| MNIST | Our work | 2 bit | 98.46 | 29 | 8.78 | 1 |
| CIFAR | LoLa [11] | 8-9 bit | 74.1 | 93 | — | 4 |
| CIFAR | HCNN [1] | 8 bit | 77.55 | 218 | — | 10 |
| CIFAR | Our work | 32 bit | 73.28 | 614 | 12801 | 30 |
| CIFAR | Our work | 8 bit | 73.04 | 205 | 4267 | 10 |
| CIFAR | Our work | 4 bit | 72.49 | 153 | 3413 | 8 |
| CIFAR | Our work | 2 bit | 69.14 | 124 | 2560 | 6 |
TABLE I: Results of the quantised models using post-training quantisation with different scale factors and quantisation-aware training (Brevitas), for both the MNIST and CIFAR architectures. Entries are accuracy [%] / FIW [bit].

| Dataset | Quantisation | PTQ, \(S=\max(\lvert\mathbf{W}\rvert)/(2^{b-1}-1)\) | PTQ, \(S=(\beta-\alpha)/(2^{b}-1)\) | PTQ, \(S=1/(2^{b-1}-1)\) | QAT |
| --- | --- | --- | --- | --- | --- |
| MNIST | 32 bit | 98.43 / 237 | 98.43 / 231 | 98.43 / 233 | 98.41 / 238 |
| MNIST | 8 bit | 98.41 / 69 | 98.44 / 63 | 98.44 / 64 | 98.3 / 70 |
| MNIST | 3 bit | 94.39 / 32 | 44.03 / 26 | 79.04 / 27 | 98.3 / 38 |
| MNIST | 2 bit | 14.4 / 20 | 11.35 / 3 | 11.53 / 6 | 98.46 / 29 |
| CIFAR | 32 bit | 73.09 / 583 | 73.09 / 571 | 73.09 / 570 | 73.28 / 614 |
| CIFAR | 8 bit | 73.0 / 202 | 73.18 / 187 | 73.09 / 186 | 73.04 / 205 |
| CIFAR | 4 bit | 52.24 / 135 | 18.67 / 123 | 9.96 / 128 | 72.49 / 153 |

TABLE III: Used HE parameters.

| Network | Quantisation | N | log q | Plaintext moduli |
| --- | --- | --- | --- | --- |
| MNIST | 8 bit | \(2^{14}\) | 389 | 35184371138561, … |
| MNIST | 4 bit | \(2^{14}\) | 389 | 35184371138561 |
| MNIST | 2 bit | \(2^{14}\) | 389 | 1073643521 |
| CIFAR | 8 bit | \(2^{15}\) | 825 | same as 4 bit + 8257537, 6946817 |
| CIFAR | 4 bit | \(2^{15}\) | 825 | same as 2 bit + … |
| CIFAR | 2 bit | \(2^{15}\) | 825 | 1376257, 1769473, 2424833, 2752513, 3604481, 3735553 |
2305.00776
Characterizing Exceptional Points Using Neural Networks
One of the key features of non-Hermitian systems is the occurrence of exceptional points (EPs), spectral degeneracies where the eigenvalues and eigenvectors merge. In this work, we propose applying neural networks to characterize EPs by introducing a new feature -- summed phase rigidity (SPR). We consider different models with varying degrees of complexity to illustrate our approach, and show how to predict EPs for two-site and four-site gain and loss models. Further, we demonstrate an accurate EP prediction in the paradigmatic Hatano-Nelson model for a variable number of sites. Remarkably, we show how SPR enables a prediction of EPs of orders completely unseen by the training data. Our method can be useful to characterize EPs in an automated manner using machine learning approaches.
Md. Afsar Reja, Awadhesh Narayan
2023-05-01T11:39:03Z
http://arxiv.org/abs/2305.00776v3
# Characterizing Exceptional Points Using Neural Networks

###### Abstract

One of the key features of non-Hermitian systems is the occurrence of exceptional points (EPs), spectral degeneracies where the eigenvalues and eigenvectors merge. In this work, we propose applying neural networks to characterize EPs by introducing a new feature - _summed phase rigidity_ (SPR). We consider different models with varying degrees of complexity to illustrate our approach, and show how to predict EPs for two-site and four-site gain and loss models. Further, we demonstrate an accurate EP prediction in the paradigmatic Hatano-Nelson model for a variable number of sites. Remarkably, we show how SPR enables a prediction of EPs of orders completely unseen by the training data. Our method can be useful to characterize EPs in an automated manner using machine learning approaches.

_Introduction-_ In recent years, the exploration of non-Hermitian systems has been gaining wide interest [1; 2; 3; 4; 5; 6]. This is due to a large number of inherently rich physical phenomena, such as exceptional points (EPs), the non-Hermitian skin effect (NHSE), non-Bloch band theory, exotic topological phases, and extended symmetry classes, which have no counterpart in the contemporary Hermitian realm. One of the most intriguing characteristics of non-Hermitian systems is the existence of EPs - spectral degeneracy points at which not only the eigenvalues merge but also the eigenstates coalesce simultaneously [7; 8]. At EPs, the Hamiltonian matrix becomes defective. EPs play an essential role in the NHSE and in topological phases of non-Hermitian systems [9]. In addition to harbouring numerous fundamentally interesting features, EPs have already given rise to several applications, such as enhanced sensing at higher-order EPs [10], sensing in optical microcavities [11], and directional lasing [12]. EPs have been realized in various experimental setups such as optics [13; 14], photonics [15], electric circuits [16; 17] and acoustics [18; 19].

Due to its diverse applications and unique learning capabilities, machine learning (ML) is rapidly being adopted by researchers as a novel tool. In particular, ML has the potential to uncover new physics without prior human assistance. The growing application of ML techniques is not restricted to different areas of condensed matter physics [20], but extends to other areas such as particle physics, cosmology, and quantum computing [21]. In past years, ML techniques have been applied with outstanding accuracy to study topological phases [22; 23; 24; 25] and topological invariants [26; 27], the classification of topological phases [24; 28], and phase transitions [29] in the Hermitian realm. Very recently, the first applications of ML techniques to non-Hermitian systems have been undertaken. For instance, Narayan _et al._[30], Cheng _et al._[31] and Zhang _et al._[32] have undertaken the study of non-Hermitian topological phases using convolutional neural networks (NNs). Yu _et al._ have used an unsupervised method, namely diffusion maps, to explore such phases [33]. Non-Hermitian topological phases in photonics have been studied using manifold clustering [34]. Furthermore, Araki and co-authors have analysed the NHSE in an ML framework [35].
Examples include physics-graph-informed ML to study the second-order NHSE in topoelectrical circuits [36], principal component analysis and NNs to explore non-Hermitian photonics [37], and diffusion maps to analyse non-Hermitian knotted phases in solid-state spin systems [38]. In this work, we propose and demonstrate the characterization of EPs using NNs. We introduce a new feature, which we term _summed phase rigidity_ (SPR), that allows an unambiguous characterization of higher-order EPs. As an illustration of our approach, starting with simple models, we categorise the EPs in various systems of increasing complexity with the help of NNs. In particular, we show how EPs can be distinguished in two- and four-site models by means of NNs. Furthermore, our NN construction and SPR enable an accurate prediction of the order of the EPs. Finally, using the celebrated Hatano-Nelson model with a variable number of sites, we demonstrate an accurate prediction of EPs and show how SPR allows a prediction of EPs of orders completely unseen by the training data. Our approach is useful for the characterization of EPs in an automated manner using ML techniques.

_Two-site model-_ We first consider the simple two-site non-Hermitian model with on-site gain and loss to illustrate our approach [see Fig. 1(a)]. The system is described by the following Hamiltonian [39] \[H_{2}=\begin{pmatrix}\omega_{0}+i\gamma&J\\ J&\omega_{0}-i\gamma\end{pmatrix}, \tag{1}\] where \(\omega_{0}\) is the onsite potential, \(J\) is the coupling between the sites as shown in Fig. 1(a), and the non-Hermiticity is introduced by the gain and loss terms \(\pm i\gamma\). The eigenvalues are given by \(\lambda_{\pm}=\omega_{0}\pm\sqrt{J^{2}-\gamma^{2}}\). For simplicity, we choose \(\omega_{0}=0\), so that the EPs occur at \(\gamma=\pm J\). Note that this model can host EPs of order two only.

We briefly describe the concept of phase rigidity, based on which we will classify the EPs. The phase rigidity, \(r\), at any point in the parameter space of a given Hamiltonian is defined as [40; 41] \[r=\frac{\langle\Psi_{L}|\Psi_{R}\rangle}{\langle\Psi_{R}|\Psi_{R}\rangle}, \tag{2}\] where \(\Psi_{L}\) and \(\Psi_{R}\) are the left and right eigenstates of the non-Hermitian Hamiltonian. Due to bi-orthogonality, \(r\) takes a value of zero at EPs and a value of unity far from an EP. Instead of \(r\), which ranges from zero to one, we propose to use its negative logarithm, \(-\ln|r|\). This change of scale enhances the separation between EPs and non-EPs in the parameter space, which leads to a much-improved characterization of EPs, as we will show. At an EP, this quantity takes a large positive value, and far from an EP, it drops to zero.
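A minimal NumPy sketch of this quantity for a generic non-Hermitian Hamiltonian follows; note that the pairing of left and right eigenvectors is done here by matching eigenvalues, which becomes ill-conditioned exactly at an EP, and that this is our illustrative reconstruction rather than the authors' code.

```python
import numpy as np

def neg_log_phase_rigidity(H):
    """-ln|r| for every eigenstate of a non-Hermitian Hamiltonian H,
    with the phase rigidity r defined as in Eq. (2)."""
    evals, right = np.linalg.eig(H)
    evals_l, left = np.linalg.eig(H.conj().T)   # left eigenvectors of H
    # pair left/right eigenvectors belonging to the same eigenvalue
    order = [int(np.argmin(np.abs(evals_l.conj() - e))) for e in evals]
    out = []
    for k in range(len(evals)):
        pl, pr = left[:, order[k]], right[:, k]
        r = (pl.conj() @ pr) / (pr.conj() @ pr)
        out.append(-np.log(np.abs(r) + 1e-300))  # large near an EP
    return np.array(out)

def H2(J, gamma, omega0=0.0):
    """Two-site gain-and-loss Hamiltonian of Eq. (1)."""
    return np.array([[omega0 + 1j * gamma, J],
                     [J, omega0 - 1j * gamma]])

print(neg_log_phase_rigidity(H2(J=0.5, gamma=0.1)))    # far from EP: small
print(neg_log_phase_rigidity(H2(J=0.5, gamma=0.499)))  # near the EP: large
```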
We choose 40,000 randomly generated points in the \(\gamma-J\) plane for our two-site model and compute \(-\ln|r|\). Then, in a similar manner, we produce 10,000 points with \(\gamma=J\) and combine them with the points above. Of this generated data, we keep 10% as test data, and the rest is used as training data. We constructed a network consisting of two hidden layers with 16 and 4 neurons per layer, respectively; the total number of trainable parameters is 121. We used the rectified linear unit (ReLU) as the activation function for the hidden layers. The loss (mean squared error) curve for 200 epochs with a batch size of 32 is shown in Fig. 1(b). We note that convergence is quite fast (within nearly 50 epochs) and the loss for the test set does not fluctuate much or deviate from the training loss curve, indicating a good fit. After training is performed by optimizing various hyperparameters (see Table 1), we predict the order of EPs for the test set. For this simple illustrative case, on the other hand, we already know the true order of the EPs. We compare the true and the predicted results for the same data points in the \(\gamma-J\) plane, as shown in Fig. 1(c) and Fig. 1(d). We observe that our NN predictions are almost identical to the true values, with an accuracy of 99%, based on which we can easily classify whether a given point is an EP or not. We also quantified the performance of the network by means of the \(R^{2}\) score; as presented in Table 1, it is close to 0.968.

Table 1: **Details of NN structures.** The details of the constructed networks are presented for the different models. We note that for a fixed number (two) of layers, obtaining a good \(R^{2}\) score (\(\approx 0.95\)) requires increasing the total number of trainable parameters by increasing the number of neurons per layer. This is because of the increasing complexity of the models.

| Model | No. of hidden layers (neurons) | Total trained parameters | Performance (\(R^{2}\) score) |
| --- | --- | --- | --- |
| Two-site | 2 (16, 4) | 121 | 0.968 |
| Four-site | 2 (32, 16) | 673 | 0.974 |
| \(N\) sites (Hatano-Nelson) | 2 (12, 4) | 105 | 0.998 |

Figure 1: **Training of NN for the two-site model.** (a) Illustration of the two-site gain-and-loss non-Hermitian model. Here \(J\) is the coupling between the two sites and \(\pm i\gamma\) denotes the gain and loss terms. (b) A NN with 2 hidden layers was constructed; the hidden layers consist of 16 and 4 neurons, respectively, and the total number of trainable parameters is 121. The loss curve during training is plotted as a function of the epoch, in blue for the training dataset and in yellow for the validation set. It is clear from the loss curve that our model is not over-fitted. (c) \(-\ln(|r|)\) is plotted for the test dataset in the \(J-\gamma\) plane on a color scale; color-bar values tending to unity represent an EP and low values denote non-EPs. (d) The corresponding predicted values from the NN. We see good agreement between the actual and predicted values in (c) and (d).

_Four-site model_- Next, we consider a slightly more involved scenario in a four-site model, which may host both second- and fourth-order EPs. The Hamiltonian reads [42] \[H_{4}=\begin{pmatrix}i\delta&p&0&0\\ p&i\gamma&q&0\\ 0&q&-i\gamma&p\\ 0&0&p&-i\delta\end{pmatrix}. \tag{3}\] Here \(p\) denotes the coupling between sites one and two, and between sites three and four, \(q\) is the coupling between sites two and three, and \(\pm i\delta\) and \(\pm i\gamma\) are gain and loss terms, as shown schematically in Fig. 2(a). The above Hamiltonian can host both second- and fourth-order EPs, depending on the values of \(p\), \(q\), \(\delta\), and \(\gamma\) in the parameter space. For the rest of the discussion, we set \(\gamma=1\). The EP-2s lie on a surface in the parameter space satisfying the condition \[p^{4}+\delta^{2}+2\delta p^{2}-\delta^{2}q^{2}=0. \tag{4}\]
On the other hand, we obtain an EP-4 when both Equation 4 and the following Equation 5 are satisfied simultaneously: \[1+\delta^{2}-2p^{2}-q^{2}=0. \tag{5}\] As compared to the previous case, i.e., a classification between EPs and non-EPs, here we need to distinguish three types of points: EP-2, EP-4, and non-EPs. To do this, we have designed a new feature, which we term the _summed phase rigidity_ (SPR), defined as \[\mathrm{SPR}=\sum_{k}\frac{-\ln|r_{k}|}{\max(-\ln|r_{k}|)}, \tag{6}\] where \(k\) runs over all eigenstates of the given Hamiltonian and the maximum is taken over the generated data set (see Table 2). For an \(N\)-th order EP, \(\mathrm{SPR}\approx N\) at the EP. The steps for constructing the SPR are illustrated in Table 2. We trained the network to classify the points in the parameter space according to the SPR value, with the following cutoffs: an SPR value of nearly zero (\(0<\mathrm{SPR}<1\)) corresponds to an ordinary point (non-EP), SPR between 1 and 2 reflects a second-order EP (EP-2), and SPR above three corresponds to a fourth-order EP (EP-4). Note that in practical computations, the SPR takes a value lower than \(N\) for an \(N\)-th order EP. This is because our generated data points are chosen randomly, and the phase rigidity depends on the detuning parameter \(\delta z\) that takes the system away from an EP; in general, \(r\propto\delta z^{1/N}\). Nevertheless, we show that the SPR can classify the corresponding points with outstanding accuracy and can thus be a very useful training feature.
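A sketch of this construction, with the rescaling by the data-set maximum following Table 2, might look as follows (our naming):

```python
import numpy as np

def spr_from_dataset(neg_log_r):
    """Summed phase rigidity of Eq. (6) for a whole generated data set.
    `neg_log_r` has shape (n_points, n_eigenstates); every -ln|r_k| is
    rescaled by the maximum over the data set (cf. Table 2) before
    summing over the eigenstates k of each point."""
    rescaled = neg_log_r / np.max(neg_log_r)
    return rescaled.sum(axis=1)   # ~N at an N-th order EP, ~0 elsewhere
```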
We generated a data set with 50,000 points: a mixture of 5000 EP-2 points, 5000 EP-4 points, and non-EPs for the rest. As before, we keep aside 10% of the data points as test data, and the rest is used as training data. The data points are generated by randomly picking a point from the \((p,q,\delta)\) parameter space (with \(\gamma=1\)); the conditions for EPs are established using Eq. 4 and Eq. 5. Next, we calculate the SPR at these points as described above, so the features for each data point are \(p\), \(q\), \(\delta\) and SPR. Our trained NN consists of two hidden layers with 32 and 16 neurons per layer, respectively; the total number of trainable parameters is 673. In our training with batch size 32 and 200 epochs, the loss curve in Fig. 2(b) shows that the NN is not overfitted and converges rapidly. After training, we predict the SPR value for the test data set and plot it alongside the actual value in Fig. 2(c) and (d). We note the excellent match between the actual and predicted data; in particular, there is a clear distinction between EPs of different orders. Overall, in estimating the actual order of EPs in the test data set, we achieved an accuracy of 97.5%.

Table 2: **Constructing SPR for the two-site model.** To construct the SPR, we first calculate \(-\ln|r|\) at the chosen points and then rescale by dividing by the maximum value over the data set. After rescaling, the sum of the rescaled \(-\ln|r|\) values gives the SPR. Note that the SPR is nearly the order of the EP at an EP and nearly zero far from the EPs.

| Point type | \(J\) | \(\gamma\) | \(-\ln\lvert r_{1}\rvert\) | \(-\ln\lvert r_{2}\rvert\) | rescaled \(-\ln\lvert r_{1}\rvert\) | rescaled \(-\ln\lvert r_{2}\rvert\) | SPR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Non-EP | 0.3 | 0.7 | 0.10 | 0.10 | 0.0036 | 0.0036 | 0.007 |
| EP-2 | 0.5 | 0.5 | 27.34 | 27.34 | 0.99 | 0.99 | 1.98 |
| EP-2 | 0.3 | 0.3 | 27.52 | 27.52 | 1 | 1 | 2 |
| Non-EP | 0.5 | 0.8 | 0.25 | 0.25 | 0.009 | 0.009 | 0.018 |

Figure 2: **Training of NN for the four-site model.** (a) Schematic of the four-site non-Hermitian model. Here \(p\) and \(q\) are the coupling constants between the sites, and \(\pm i\gamma\) and \(\pm i\delta\) denote the gain and loss terms. (b) An NN with 2 hidden layers was constructed; the hidden layers consist of 32 and 16 neurons, respectively, and the total number of trainable parameters is 673. The loss curve with epoch during training is shown in blue for the training data set and in yellow for the validation set. It is clear from the loss curve that our NN is not overfitted. (c) Histogram of test data points from the parameter space \((p,q,\delta,\gamma)\) according to the SPR value. Points near SPR = 0 are non-EPs, i.e., ordinary points; SPR between one and two represents the second-order EPs, and SPR between three and four represents the fourth-order EPs. (d) Same as (c), but with the SPR values predicted by the NN. We see good agreement between the actual and predicted values in (c) and (d).

_Hatano-Nelson model with variable sites-_ Next, we present a generalized approach to characterize EPs based on the concept of SPR. We consider the paradigmatic Hatano-Nelson model [43], which has served as the inception ground for many of the central ideas of non-Hermitian physics. The Hamiltonian for \(L\) sites under open boundary conditions is given by the tridiagonal matrix \[H_{L}=\begin{pmatrix}0&t-\gamma/2&&&\\ t+\gamma/2&0&t-\gamma/2&&\\ &t+\gamma/2&0&\ddots&\\ &&\ddots&\ddots&t-\gamma/2\\ &&&t+\gamma/2&0\end{pmatrix}. \tag{7}\] Here \(t\pm\gamma/2\) denote non-reciprocal hopping terms, i.e., unequal left and right hopping strengths. In this model, higher-order EPs occur at \(t=\pm\gamma/2\), and their order depends on the number of sites \(L\). Therefore, if a point in the \(t-\gamma\) parameter space is an EP, the SPR value will be nearly \(L\) for an \(L\)-site model. For simplicity, we change variables to \(t_{1}=t-\gamma/2\) and \(t_{2}=t+\gamma/2\). In this \(t_{1}-t_{2}\) parameter space, EPs occur when one of \(t_{1}\) or \(t_{2}\) is nearly zero and the other is nearly one.

Here we trained the Hatano-Nelson model with different numbers of sites, namely \(L=3\) and \(L=7\). We generated the data by choosing random points in the \(t_{1}-t_{2}\) parameter space and calculating the SPR. For \(L=3\), we generated 30,000 points: a mixture of 5,000 points with \(t_{1}\approx 1\) and \(t_{2}\approx 0\), 5,000 points with \(t_{2}\approx 1\) and \(t_{1}\approx 0\), and the rest with \(t_{1}\approx t_{2}\). Similarly, for \(L=7\), 30,000 data points were generated with a similar distribution. Therefore, we finally have 60,000 data points with different SPR values, i.e., nearly 3, 7 or 0 depending on the values of \(t_{1}\), \(t_{2}\) and \(L\). Among the generated data, 10% was kept as test data and the rest is used to train the NN.
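Reusing `neg_log_phase_rigidity` from the earlier sketch, this data generation might look roughly as follows; the sampling windows (e.g. 0.95 to 1.05) are our illustrative choices rather than the exact ones used here.

```python
import numpy as np

def hatano_nelson(L, t1, t2):
    """Open-boundary Hatano-Nelson chain of Eq. (7), parameterized by
    t1 = t - gamma/2 and t2 = t + gamma/2 on the two off-diagonals."""
    H = np.zeros((L, L))
    for i in range(L - 1):
        H[i, i + 1], H[i + 1, i] = t1, t2
    return H

rng = np.random.default_rng(0)
feats, nlr = [], []
for L in (3, 7):
    for _ in range(5000):            # EP-like: one hopping ~ 1, the other ~ 0
        t1, t2 = rng.uniform(0.95, 1.05), rng.uniform(0.0, 0.05)
        if rng.random() < 0.5:
            t1, t2 = t2, t1
        feats.append((t1, t2, L))
        nlr.append(neg_log_phase_rigidity(hatano_nelson(L, t1, t2)))
    for _ in range(10000):           # ordinary points: t1 ~ t2
        t = rng.uniform(0.2, 1.0)
        feats.append((t, t, L))
        nlr.append(neg_log_phase_rigidity(hatano_nelson(L, t, t)))

# rescale by the data-set maximum and sum over eigenstates, as in Eq. (6)
gmax = max(v.max() for v in nlr)
spr_labels = np.array([float((v / gmax).sum()) for v in nlr])
```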
We construct an NN with two hidden layers of 12 and 4 neurons, respectively. The loss curve in Fig. 3(b) shows that the NN is not over-fitted. After training, we predict the distribution of the test data set with respect to the SPR value, as shown in Fig. 3(d), where \(6<\mathrm{SPR}<7\) represents EPs of order seven, \(2<\mathrm{SPR}<3\) denotes EPs of order three, and \(\mathrm{SPR}<1\) marks non-EP points. Comparing to the actual SPR distribution of the test data set in Fig. 3(c), we find good agreement between the actual and predicted SPR distributions, with an accuracy close to 99.9%. We therefore conclude that EPs of different orders can be successfully classified using the SPR.

Whether our NN has learned the generalised property of the SPR is a crucial question. To answer it, we ask the already-trained NN to predict the EPs and their orders in models that were not included during training. Fig. 4(a) shows the distribution of true SPR data from the \((t_{1},t_{2},L=4)\) parameter space for the \(L=4\) site Hatano-Nelson model. Here, \(3<\mathrm{SPR}<4\) represents EPs of order 4, and \(0<\mathrm{SPR}<1\) marks non-EPs. We now feed this test data set for \(L=4\) sites to the already-trained NN; the distribution of the data with the predicted SPR is shown in Fig. 4(b). Remarkably, we find that the NN, which was trained for \(L=3\) and \(L=7\) sites only, is able to identify the EPs as well as their order with an accuracy greater than 99.9%.

Figure 3: **Training of NN for the variable-site Hatano-Nelson model.** (a) Schematic diagram of the Hatano-Nelson model with left-right asymmetric hopping (\(t\pm\gamma/2\)). (b) A neural network with 2 hidden layers was constructed; the hidden layers consist of 12 and 4 neurons, respectively, and the total number of trainable parameters is 105. The loss curve as a function of the epoch indicates that the NN is neither over-fitted nor under-fitted. (c) Distribution of the test data set with actual SPR values for \(L=3\) and \(L=7\) sites. (d) Distribution of the same data set with SPR values predicted by the network. We note the agreement between actual and predicted SPR values.

Figure 4: **SPR prediction of higher-order EPs of untrained Hatano-Nelson models.** (a) Actual data distribution for the \(L=4\) site Hatano-Nelson model. Here, \(3<\mathrm{SPR}<4\) represents fourth-order EPs and SPR below one denotes ordinary points. (b) The corresponding predictions from the NN, which was trained for \(L=3\) and \(L=7\) sites only. (c) Actual data set for a mixture of \(L=4\) and \(L=6\) site HN models, where \(3<\mathrm{SPR}<4\) denotes an EP of order 4 and \(5<\mathrm{SPR}<6\) represents an EP of order 6. (d) Corresponding predicted values from the NN. We find that the NN is able to detect the order of the EP even for a mixture of data with EPs of different orders. Moreover, the NN is capable of predicting the EPs and their orders for cases that were included in neither the training nor the test data sets.

To further examine the robustness of our NN, we make the scenario more complicated by adding random points from the \((t_{1},t_{2},L=6)\) parameter space to the existing \(L=4\) data set. The resulting distribution of the SPR values is shown in Fig. 4(c), where \(0<\mathrm{SPR}<1\) marks non-EPs, \(3<\mathrm{SPR}<4\) marks EPs of order 4, and \(5<\mathrm{SPR}<6\) represents EPs of order 6. The corresponding distribution of the predicted SPR values is shown in Fig. 4(d).
We note that our trained NN for \(L=3\) and \(L=7\) sites is not only capable of predicting the EPs and their orders for an untrained model, but it also has the ability to do so for a mixture of data from different non-Hermitian models with EPs of varying orders with an accuracy close to 99.9%. _Discussion and summary-_ Before summarizing, we note here a few points regarding our NN models. First, we have followed a top-down approach to train the models. We started with complicated models, i.e., a large number of hyperparameters (number of layers, number of neurons in each layer, number of epochs, batch sizes and other hyperparameters) and reduced the complexity until the performance decreased significantly. So, the NN models presented here should offer an optimal balance between computational cost and performance. Second, we selected Adam as the adaptive learning rate optimizer and ReLU as the activation function. Third, we trained each model for 200 epochs with a batch size of 32. Finally, we emphasize that our ML models are actually regression models which predict the SPR value, and the specific range of SPR is then used to classify different orders of EPs. In summary, we have successfully demonstrated how ML techniques can be used to predict EPs and their orders in various models with outstanding accuracy. For the two-site model, we trained an NN to classify EPs and non-EPs. Next, we proposed a new feature - termed SPR - and used it to distinguish between EPs of different orders. We then generalized this procedure to the celebrated Hatano-Nelson model with variable sites. Remarkably, we found that our NN models were able to predict the true order of EPs with accuracy greater than 99% even in cases with EPs of orders completely unseen by the training data. Looking ahead, our work may open up interesting avenues for future explorations. Our techniques can be useful for studying the parametric dependence of EPs in higher dimensions, which can become quite intricate, especially in cases such as anisotropic EPs [44]. The behavior of EPs can differ based on the symmetries present in the system [45]. We envisage that generalizations of our framework may assist in characterizing nontrivial behavior in such scenarios. Additionally, EPs have intriguing connections to topological phases of non-Hermitian systems, making our method potentially useful for studying their topological properties. We hope our work motivates these promising developments. _Acknowledgments-_ We have used Python [46] and TensorFlow [47] for our computations, and we are grateful to the developers. M. A. R. would like to thank Sourav Mal for useful discussions, and would also like to thank Smoky, his cat, for being a calming presence when he was writing this paper. M. A. R. is supported by a graduate fellowship of the Indian Institute of Science. A. N. acknowledges a start-up grant from the Indian Institute of Science.
2304.07442
Learning To Optimize Quantum Neural Network Without Gradients
Quantum Machine Learning is an emerging sub-field in machine learning where one of the goals is to perform pattern recognition tasks by encoding data into quantum states. This extension from the classical to the quantum domain has been made possible due to the development of hybrid quantum-classical algorithms that allow a parameterized quantum circuit to be optimized using gradient based algorithms that run on a classical computer. The similarities in the training of these hybrid algorithms and classical neural networks have further led to the development of Quantum Neural Networks (QNNs). However, in the current training regime for QNNs, the gradients w.r.t the objective function have to be computed on the quantum device. This computation is highly non-scalable and is affected by hardware and sampling noise present in the current generation of quantum hardware. In this paper, we propose a training algorithm that does not rely on gradient information. Specifically, we introduce a novel meta-optimization algorithm that trains a \emph{meta-optimizer} network to output parameters for the quantum circuit such that the objective function is minimized. We empirically and theoretically show that we achieve a better quality minimum in fewer circuit evaluations than existing gradient based algorithms on different datasets.
Ankit Kulshrestha, Xiaoyuan Liu, Hayato Ushijima-Mwesigwa, Ilya Safro
2023-04-15T01:09:12Z
http://arxiv.org/abs/2304.07442v1
# Learning To Optimize Quantum Neural Network Without Gradients

###### Abstract

Quantum Machine Learning is an emerging subfield in machine learning where one of the goals is to perform pattern recognition tasks by encoding data into quantum states. This extension from the classical to the quantum domain has been made possible due to the development of hybrid quantum-classical algorithms that allow a parameterized quantum circuit to be optimized using gradient based algorithms that run on a classical computer. The similarities in the training of these hybrid algorithms and classical neural networks have further led to the development of Quantum Neural Networks (QNNs). However, in the current training regime for QNNs, the gradients w.r.t the objective function have to be computed on the quantum device. This computation is highly non-scalable and is affected by hardware and sampling noise present in the current generation of quantum hardware. In this paper, we propose a training algorithm that does not rely on gradient information. Specifically, we introduce a novel meta-optimization algorithm that trains a _meta-optimizer_ network to output parameters for the quantum circuit such that the objective function is minimized. We empirically and theoretically show that we achieve a better quality minimum in fewer circuit evaluations than existing gradient based algorithms on different datasets.

## I Introduction

Machine learning has evolved over time from solving small size pattern recognition problems to being able to capture latent structure in data and generalize to unseen data at scale. The state of machine learning is quickly approaching a point where classical computers would not be able to keep up with the ever increasing scale and complexity of data. On the other hand, quantum computing holds a theoretical promise of being able to scale beyond existing classical approaches. While the current state of quantum machines is still far from being practically competitive with the classical counterparts, the emergent field of Quantum Machine Learning (QML) is an important step towards a future in which quantum computers form the basis of many challenging computational tasks [1, 2]. The current bridge between classical and quantum algorithms is a class of algorithms called Variational Quantum Algorithms (VQAs) [3]. These algorithms are concerned with finding the optimal set of parameters for a Variational Quantum Circuit (VQC) to minimize an objective function. The parameters are optimized using gradient descent which runs on a classical computer. Since the training procedure is very similar to that of modern Deep Neural Networks (DNNs), quantum variants of several existing DNN architectures have been proposed [4, 5, 6, 7]. The classical and quantum neural networks differ in one key aspect: the data has to be encoded as quantum states before it can be processed by a QNN. This encoding can be seen as a mapping to a high dimensional Hilbert space where the data is separable. It is thus entirely possible that with a powerful quantum computer, we will be able to capture a much richer representation of data and consequently outperform DNNs, which is the motivation and hope behind several QML algorithms. The VQCs are deployed on quantum devices that support operations on quantum states.
The current generation of devices, called Noisy Intermediate Scale Quantum (NISQ) devices, is quite limited due to a high presence of gate noise, inability to scale beyond a few hundred qubits, and lack of reliable error mitigation and detection algorithms [3]. These bottlenecks directly affect the performance of QNNs and hence motivate better algorithms to train them. _The need for gradient-free optimization algorithms:_ The bottlenecks in the current generation of quantum devices are not the only reasons that motivate a better optimization algorithm. Other factors also make the case for a better optimization algorithm more compelling. For instance, for a QNN running on a quantum device, the gradients are computed using the parameter shift rule [8]. This method scales as \(O(N)\) where \(N\) is the number of parameters of the QNN. A full forward-backward pass then scales as \(O(N^{2})\). Clearly, if the QNN is to be scaled to large problem instances (e.g., ImageNet [9] classification) the quadratic cost of computation must be improved. Another contributing reason is that in the current training regime, the QNN is treated as a black-box from the perspective of the optimizer. While conventional optimization algorithms like RMSProp [10], Adam [11], SGD, etc. still work for QNNs, it is possible that when QNNs are scaled to large problem instances, hand-designed optimization rules may not be able to fully capture the complexities of the probabilistic nature of QNNs. These reasons, coupled with inherent issues in NISQ devices, motivate the development of efficient learned optimizers. More crucially, in order to be broadly applicable, we demand that the optimizer makes as few calls to the quantum circuit as possible and estimates the direction of optimization in a gradient-free manner. One way of designing a new learning rule is to _learn_ it during training for a fixed task and dataset. If \(\mathbf{\theta}^{t}\in\mathbb{R}^{N}\) are the parameters of the QNN at timestep \(t\) and \(C(\mathbf{\theta})\) is a cost function we are interested in minimizing, then a learned update rule is of the form \(\mathbf{\theta}^{t+1}=\mathcal{R}_{\mathbf{\Phi}}(\mathbf{\theta}^{t},\nabla_{\theta}C(\mathbf{\theta}^{t}))\), where \(\nabla_{\theta}C(\mathbf{\theta}^{t})\) is the gradient of the QNN cost w.r.t. \(\mathbf{\theta}^{t}\). Here, \(\mathcal{R}_{\mathbf{\Phi}}\) is a DNN parameterized by \(\mathbf{\Phi}\in\mathbb{R}^{M}\), where \(M\) is the number of parameters in the DNN. \(\mathcal{R}_{\mathbf{\Phi}}\) is trained using a meta-loss function \(\mathcal{L}(\mathbf{\Phi})\). For any given dataset and QNN, we are interested in finding an optimal set of meta-parameters \(\mathbf{\Phi}^{*}\) by minimizing \(\mathcal{L}(\mathbf{\Phi})\) such that the QNN cost \(C(\mathbf{\theta})\) is minimized. For the rest of the paper, we refer to \(\mathcal{R}_{\mathbf{\Phi}}\) as a _meta-optimizer_ and to the problem of minimizing \(\mathcal{L}(\mathbf{\Phi})\) as meta-optimization. Meta-optimization approaches are well studied in the deep learning literature. Andrychowicz _et al._[12] introduced the idea of optimizing a deep neural network using an LSTM based meta-optimizer that accepted \((\mathbf{\theta}^{t},\nabla_{\theta}C(\mathbf{\theta}^{t}))\) as inputs. Li and Malik [13, 14] explore the idea from the reinforcement learning perspective, where they leverage the LSTM based optimizer to learn a _policy_ for predicting the next set of parameters.
Metz _et al._[15] propose an alternative algorithm for fast training of meta-optimizers that generalizes better than gradient descent training. These findings have been adopted for training variational quantum circuits (VQCs) in the works of Wilson _et al._[16] and Verdon _et al._[17]. In the former work, the authors build on the algorithm in [12] for a VQC and leverage \(\nabla_{\theta}C(\mathbf{\theta})\) as an input feature for the meta-optimizer. In the latter work, the authors propose using the meta-optimization framework for learning initial parameters using a recurrent neural network and then initializing a VQC with these learned parameters. The VQC is then proposed to be fine-tuned by a regular optimizer until a desired accuracy level is reached. In all the aforementioned works, a common denominator is the reliance on the gradient as an input feature to the meta-optimizer (except [17], which uses \(C(\mathbf{\theta})\)). Given that computing gradients for QNNs is not cheap, the challenge is to develop a meta-optimizer that can learn a proper update rule _without_ any explicit gradient computation step. **Our Contribution** We address this critical challenge and propose a novel meta-optimization algorithm that learns to train QNNs without relying on any gradient information. More specifically, we make the following contributions: (1) We design a novel algorithm for training QNNs using a meta-optimizer in a way that does not involve any computation or approximation of \(\nabla_{\theta}C(\mathbf{\theta})\) on a quantum device. We show that using better features and a better training schedule can result in a meta-optimizer which is competitive with gradient based optimizers that are currently used for training QNNs. (2) We theoretically and empirically verify that our algorithm provides a significant speedup in training time over conventional gradient based optimization algorithms. (3) We empirically demonstrate that our algorithm achieves a minimum which is comparable to that attained by conventional first-order methods. Moreover, we show that with the right initialization our algorithm can outperform classical gradient based algorithms with the same initialization. To the best of our knowledge, this is the first meta-optimization algorithm for training QNNs that does not rely on computing gradients. We advocate that our algorithm can serve as a blueprint for gradient-free meta-optimization algorithms.

## II Quantum Neural Networks

For completeness, we first introduce the idea of Quantum Neural Networks (QNNs) [18, 4]. Given a dataset consisting of \(m\) samples \(\mathcal{D}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{m}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) are the data points with corresponding labels \(y_{i}\)1, a quantum neural network \(f_{\theta}:\mathbb{R}^{d}\mapsto\mathbb{R}\) is a variational quantum circuit that learns to assign \(y\) to \(\mathbf{x}\in\mathbb{R}^{d}\). A parameterized quantum circuit implementing a quantum neural network can be written as a product of \(n\) parametrized unitary matrices \(\mathbf{U}_{i}(\theta_{i})\), \(\mathbf{U}(\mathbf{\theta})=\prod_{i=1}^{n}\mathbf{U}_{i}(\theta_{i})\). Before the data can be processed by the VQC, an encoding circuit is applied to prepare the input state \(|\psi\rangle=\mathbf{U}_{0}(\mathbf{x})|0\rangle\), where \(\mathbf{U}_{0}(\mathbf{x})=e^{-i\mathbf{x}\mathbf{G}}\) is a unitary that encodes each component of a single \(d\) dimensional data vector into a quantum state consisting of \(q=\log(d)\) qubits.
Here, \(\mathbf{G}\) is a gate-generating Hermitian matrix. Overall we can describe the QNN \(f_{\theta}(\mathbf{x},\mathbf{\theta})\) as: Footnote 1: The index subscript will be omitted where it is clear from the context. \[f_{\theta}(\mathbf{x},\mathbf{\theta})=\langle 0|\mathbf{U}_{0}^{\dagger}(\mathbf{x})\mathbf{U}^{\dagger}(\mathbf{\theta})\mathbf{\hat{O}}\mathbf{U}(\mathbf{\theta})\mathbf{U}_{0}(\mathbf{x})|0\rangle, \tag{1}\] where \(\mathbf{\hat{O}}\) is a quantum observable that maps the output quantum state to a scalar number and \(\mathbf{U}^{\dagger}\) denotes the conjugate transpose of the unitary matrix \(\mathbf{U}\). We select the mean squared error as our cost function. Averaged over all \(m\) points of the dataset, we express it as: \[C(\mathbf{\theta})=\frac{1}{m}\sum_{i=1}^{m}||y_{i}-f_{\theta}(\mathbf{x}_{i},\mathbf{\theta})||_{2}^{2} \tag{2}\] The objective is to find \(\mathbf{\theta}^{*}=\operatorname*{argmin}_{\theta}C(\mathbf{\theta})\). In this work we shall assume that the QNN is minimizing the cost function, \(C(\mathbf{\theta})\), in Equation (2).

## III Meta-Optimization Framework

We now discuss the meta-optimization framework in more detail. For simplicity, we adopt a similar nomenclature as in [12] and refer to the QNN as the _optimizee_ network and the meta-optimizer as the _optimizer_ network. In the meta-optimization framework, the task of the optimizee network is to evaluate the parameters suggested by the optimizer network. In turn, the optimizer network accepts some input meta-features and performs an internal computation to generate new parameters for the optimizee network while adjusting its own parameters. In particular, if \(\mathbf{\theta}^{t}\) are the optimizee parameters at time-step \(t\) during training, then the next parameters are discovered following [12]: \[\mathbf{\theta}^{t+1}=\mathbf{\theta}^{t}+\mathbf{g}^{t}\,, \tag{3}\] \[\begin{bmatrix}\mathbf{g}^{t}\\ \mathbf{h}^{t+1}\end{bmatrix}=\mathcal{R}_{\mathbf{\Phi}}(\mathcal{I}(\mathbf{\theta}^{t}),\mathbf{h}^{t})\] Here we specify \(\mathcal{R}_{\mathbf{\Phi}}\) to be an instance of an LSTM that accepts a hidden state \(\mathbf{h}^{t}\) and the meta-parameters \(\mathbf{\Phi}\) and outputs the update \(\mathbf{g}^{t}\). The LSTM accepts \(\mathcal{I}(\mathbf{\theta}^{t})\), which we define to be a function that combines information from the training process and presents it as input features to the optimizer network. For instance, in [12] and similar works [14, 16], \(\mathcal{I}(\mathbf{\theta})=\nabla_{\theta}C(\mathbf{\theta})\). The general form of the meta-loss function tries to minimize the loss over a finite horizon window of size \(T\). In this work, we follow a meta-loss function inspired by the deep learning literature: \[\mathcal{L}(\mathbf{\Phi})=\sum_{t=1}^{T}w_{t}C(\mathbf{\theta}^{t}), \tag{4}\] where \(\mathbf{w}=[w_{1},w_{2},\dots,w_{T}]\) are the weights over the cost function evaluations at time step \(t\). A uniform weighting strategy sets all weights to be equal. The _magnitude_ of the weights is a matter of choice and the decision is made based on the dataset that is being used. The optimization of \(\mathbf{w}\) in a data driven manner is out of scope of this paper. In general, we suggest that a line of work based on hyper-parameter optimization [19] can be developed and used to dynamically adjust the weights for any given data.

## IV Our Algorithm

We will now discuss the main components of our algorithm.
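As a concrete reference for the objects these components manipulate, here is a minimal sketch of the QNN cost of Eqs. (1)-(2). PennyLane is our framework choice (the paper does not prescribe one); the RX-encoding/RY-rotation/CZ-entanglement ansatz follows the circuit described in Section VI, and all sizes are illustrative:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 2, 3                     # illustrative sizes
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(x, theta):
    # U0(x): encode each data component into an RX rotation angle
    for q in range(n_qubits):
        qml.RX(x[q], wires=q)
    # U(theta): parameterized RY rotations entangled with CZ gates
    for layer in range(n_layers):
        for q in range(n_qubits):
            qml.RY(theta[layer, q], wires=q)
        for q in range(n_qubits - 1):
            qml.CZ(wires=[q, q + 1])
    return qml.expval(qml.PauliZ(0))          # observable O-hat

def cost(theta, X, y):
    # Mean squared error of Eq. (2), averaged over the dataset
    preds = np.array([qnn(x, theta) for x in X])
    return np.mean((y - preds) ** 2)
```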
Although we generally follow the meta-optimization framework discussed above, the constraints induced by the need for gradient free optimization lead us to introduce algorithmic tools that have not been proposed in the literature in the meta-optimization context for quantum machine learning. **Input Preprocessing**: In earlier works, the input function involved passing some gradient information from the optimizee network to the optimizer network. Since we are constrained to not use gradient information, we utilize the tuple \((\mathbf{\theta}^{t},\Delta C(\mathbf{\theta}))\) as inputs, where \(\Delta C(\mathbf{\theta})\) is a _pseudo-gradient_ that computes the difference between the current and previous cost function evaluation, i.e., \(\Delta C(\mathbf{\theta})=C(\mathbf{\theta}^{t-1})-C(\mathbf{\theta}^{t})\). These inputs are concatenated in a single vector and the input function \(\mathcal{I}(\mathbf{\theta})\) is prepared as: \[\mathcal{I}(\mathbf{\theta})=\frac{\mathcal{P}([\mathbf{\theta}^{t};\Delta C(\mathbf{\theta}^{t})])}{p}. \tag{5}\] Here we use a non-linear function \(\mathcal{P}:\mathbb{R}^{d}\mapsto[0,1]\) that normalizes the input values to between 0 and 1. The value \(p\) controls the strength of normalization. In our experiments we use \(\mathcal{P}(\mathbf{x})=e^{\mathbf{x}}\) with \(\mathbf{x}\) being the input vector for the LSTM. Fig. 1: An overview of our proposed meta-learning algorithm. The blue arrows indicate key inputs to the QNN and the LSTM and the red arrows indicate the output from the LSTM. Best viewed in color. See Section IV for more details. We empirically found \(p=50\) to work best for the datasets considered in this study. In a future work, we would like to explore the possibility of learning an adaptive normalization strength that is derived from observing the history of updates during training. **Non Linear Parameter Updates**: In Equation (3), we defined a generic update rule in the meta-optimization framework. That update rule simply adds the output of the LSTM to the previously obtained parameters. In this work, we introduce a new update rule of the form: \[\mathbf{\theta}^{t+1}=\mathbf{\theta}^{t}+\alpha\cdot\sigma(\mathbf{\Omega}_{u}^{t}), \tag{6}\] where \(\mathbf{\Omega}_{u}^{t}\) are the updated parameters obtained from the LSTM. Compared to Equation (3), we have changed \(\mathbf{g}^{t}=\mathbf{\Omega}_{u}^{t}\) to \(\mathbf{g}^{t}=\alpha\cdot\sigma(\mathbf{\Omega}_{u}^{t})\), where \(\alpha\) is a hyper-parameter and \(\sigma\) is a non-linear activation function (tanh in our implementation). This update rule applies a non-linear activation to the LSTM parameters and controls the strength of the update using \(\alpha\). Informally, \(\alpha\) can be interpreted as a learning rate which helps the quantum learner in adjusting the new parameters. Since there is no direct gradient information, a destructive update (i.e., \(\mathbf{\theta}^{t}=\mathbf{\Omega}_{u}^{t}\)) would make the optimizee's parameters prone to oscillation due to the noisy output from the LSTM. Additionally, the raw parameter updates from the LSTM are unbounded in \(\mathbb{R}\). A clipping function (e.g., tanh) that clips the values between certain maximum and minimum values leads to more stable updates, which consequently yield a smoother parameter update. **Replay Buffer**: In earlier works, the LSTM network was able to compute a good descent direction based on the gradient information provided to it during training.
Even in such works as [17], the LSTM network was only used to learn the initial parameters for optimizing a QNN and then the conventional gradient descent method was used. _However, we consider a harder case where no gradient information is available_. It is already known that a short horizon bias problem exists for learned optimizers for classical neural networks [20]. In the case of optimizing quantum networks without using any gradient information, this problem becomes even harder to overcome. In meta-optimization a short horizon bias occurs when the meta-optimizer becomes biased towards providing updates akin to taking short steps towards minima. These updates do not cause a significant overall decrease in the optimizee's cost function. This effect is more pronounced when we operate in parameter space as opposed to gradient space, since the LSTM network does not get any information about the curvature of the optimization surface. Running optimization in a finite horizon window in parameter space can then cause the LSTM to suggest incorrect updates to the optimizee network. In the specific case of QNNs, the parameters correspond to a rotation of a given input state about a particular axis. An incorrect update can very easily cause the quantum state to be rotated incorrectly and therefore lead to an increase in the value of the objective function we are interested in minimizing. To overcome this issue, we develop a technique inspired by work in reinforcement learning [21]. Throughout training, we keep track of the past history of parameters and their corresponding cost function values in a "replay buffer". At the start of training we instantiate a double ended queue dubbed a _replay buffer_ \(\mathcal{B}\) of a finite capacity \(R\). For meta-iteration \(t=1\dots T\) we observe a history of parameters \(\mathbf{\theta}^{t}\) and the corresponding cost \(C(\mathbf{\theta}^{t})\). If \(C(\mathbf{\theta}^{t+1})<C(\mathbf{\theta}^{t})\) then we add the state \(s=[\mathbf{\theta}^{t+1},C(\mathbf{\theta}^{t+1}),\Delta C(\mathbf{\theta}),\mathbf{h}^{t+1}]\) to the replay buffer. Once the meta-iteration ends and \(\mathcal{L}(\mathbf{\Phi})\) is computed, if the QNN cost function is diverging, we seed the parameters for the next meta-iteration by performing the following update: \[\begin{split}\mathbf{\theta}^{T+1}&=\tau\cdot\mathbf{\theta}^{T}+(1-\tau)\cdot\mathbf{\theta}_{s}\\ \tau&=\frac{\tau}{(1+\zeta\cdot t)}.\end{split} \tag{7}\] where \(\zeta\) is a decay factor that adjusts the blending coefficient \(\tau\) as the training progresses and \(\mathbf{\theta}^{T}\) are the optimizee parameters at the end of the previous unrolled meta-iteration loop. \(\mathbf{\theta}_{s}\) are the sampled parameters from the replay buffer, chosen according to a fixed policy function \(\pi(\mathcal{B})\). In our work this policy corresponds to choosing the parameters that led to the largest cost decrease over the previous unroll iterations. We expect that future works in this direction will be able to learn \(\pi(\mathcal{B})\) along with the parameters. We set \(\tau\) to \(0.9\) to put more emphasis on the current parameters than on the sampled parameters. As training progresses and we observe divergent behavior, we decrease \(\tau\) according to Equation 7 with \(\zeta=0.99\) and \(t\) indicating the overall global step of the optimization. **Overall Algorithm**: The overall flow of the algorithm is shown in Figure 1.
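Before walking through that flow, a minimal sketch (ours, with hypothetical helper names) of the three ingredients just introduced - the pseudo-gradient input of Eq. (5), the bounded update of Eq. (6), and the replay-buffer re-seeding of Eq. (7):

```python
import numpy as np
from collections import deque

R = 100                                  # buffer capacity (illustrative)
replay_buffer = deque(maxlen=R)          # the replay buffer B

def input_features(theta, delta_c, p=50.0):
    """Eq. (5): exponentiate and normalize [theta; delta C] for the LSTM."""
    x = np.concatenate([theta, [delta_c]])
    return np.exp(x) / p

def apply_update(theta, omega, alpha=0.1):
    """Eq. (6): alpha-scaled, tanh-clipped update of the raw LSTM output."""
    return theta + alpha * np.tanh(omega)

def commit(theta, cost_value):
    """Store improving states; cost decrease is the admission criterion."""
    replay_buffer.append({"theta": theta.copy(), "cost": float(cost_value)})

def reseed(theta_T, tau, zeta, t):
    """Eq. (7): blend current parameters with the best buffered ones
    (pi(B) simplified here to 'lowest recorded cost'), then decay tau."""
    theta_s = min(replay_buffer, key=lambda s: s["cost"])["theta"]
    return tau * theta_T + (1 - tau) * theta_s, tau / (1 + zeta * t)
```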
At a given timestep \(t\), we expect the cost function value \(C(\mathbf{\theta}^{t-2})\) and the parameters \(\mathbf{\theta}^{t-1}\). We then compute the cost \(C(\mathbf{\theta}^{t-1})\) using Equation (2). A cost delta is computed, \(\Delta C(\mathbf{\theta})=C(\mathbf{\theta}^{t-2})-C(\mathbf{\theta}^{t-1})\), which is then used as a decision metric and as a feature in the input. In the former role, if \(\Delta C(\mathbf{\theta})<0\) then a sample (described earlier) is committed to the replay buffer \(\mathcal{B}\). Then, depending on the availability of elements in \(\mathcal{B}\), the parameters are either sampled or retained from the previous step. An input feature is then pre-processed using Equation (5). The updated parameters \(\mathbf{\Omega}_{u}^{t}\) are then obtained from the LSTM and \(\mathbf{\theta}^{t}\) is computed using Equation (6). In the initial conditions, we set \(\Delta C(\mathbf{\theta})=1.0\).

## V Theoretical Performance

We now analyze the theoretical performance gain that our algorithm can provide in the context of parameterized quantum circuits. We note that our algorithm is not predicated on the existence of an efficient data structure like QRAM [22]; rather, the performance gains are expected due to the non-computation of gradients on quantum devices. **Theorem V.1**.: _Consider a variational quantum circuit consisting of \(q\) qubits, \(L\) layers and \(k\) single qubit gates per layer. Let \(\mathcal{A}\) be an optimization algorithm running on a classical device helping the circuit minimize a cost function \(C(\theta)\). Then:_ * _If_ \(\mathcal{A}\) _is a gradient-dependent algorithm, then the total time for one full pass (forward and backward) takes_ \(O(2(qLk)^{2}\delta t_{f})\)_, where_ \(\delta t_{f}\) _is the time for a forward pass._ * _If_ \(\mathcal{A}\) _does not require a gradient, then the total time for one full pass takes_ \(O(2(qLk)\delta t_{f})\)_._ Proof.: Consider an instance of a gradient-dependent algorithm \(\mathcal{A}^{g}\). To update the parameters for the next step, it requires \(\nabla_{\theta}C(\theta)\). For a circuit running on a NISQ computer, currently the only method to compute gradients is the parameter shift method [8]. The expected gradient w.r.t. a single component \(\theta_{i}\) is given as: \[\nabla_{\theta_{i}}C(x;\theta_{i})=\langle\psi_{x}|\nabla_{\theta_{i}}\mathcal{F}_{\theta_{i}}(\boldsymbol{\hat{O}})|\psi_{x}\rangle, \tag{8}\] where \(|\psi_{x}\rangle=\boldsymbol{U}_{0}(x)|0\rangle\). The quantity \(\nabla_{\theta_{i}}\mathcal{F}_{\theta_{i}}(\boldsymbol{\hat{O}})\) is the gradient w.r.t. the \(i^{th}\) component of the parameter vector. The parameter shift rule states: \[\begin{split}\nabla_{\theta_{i}}\mathcal{F}_{\theta_{i}}(\boldsymbol{\hat{O}})&=c[\mathcal{F}_{\theta_{i}+s}(\boldsymbol{\hat{O}})-\mathcal{F}_{\theta_{i}-s}(\boldsymbol{\hat{O}})]\\ \mathcal{F}_{\theta_{i}}(\boldsymbol{\hat{O}})&=\boldsymbol{U}^{\dagger}(\theta_{i})\boldsymbol{\hat{O}}\boldsymbol{U}(\theta_{i}).\end{split} \tag{9}\] where \(c\) is a scaling constant (\(c=0.5\)) and \(s\) is a constant shift angle about which the gradient is computed. To successfully evaluate \(\nabla_{\theta_{i}}C(\theta)\) one needs two evaluations of the quantum circuit with shifted parameters. Consequently, the time for one gradient computation over the entire quantum circuit is \(O(2qLk)\). Then, for a full pass (i.e., computing the cost and gradient), the computation time for \(\mathcal{A}^{g}\) scales as \(O(2(qLk)^{2}\delta t_{f})\).
In contrast, a gradient-free method (like ours) does not incur the penalty of computing the gradient, and thus the overall computation time scales as \(O(2(qLk)\delta t_{f})\). The reduction of a quadratic run-time to a linear run-time is significant since this will allow a QNN to scale to a larger number of data points. Although our method does incur a storage overhead of \(O(R)\), for all practical scenarios \(O(R)\ll O(qLk)\).

## VI Numerical Experiments

In this section we show numerical experiments that demonstrate the effectiveness of our method. The goal of our experiments is twofold. First, we wish to empirically demonstrate that using an LSTM based meta-optimizer can lead to better convergence with a lower number of circuit executions than existing gradient based or gradient free methods. Second, we wish to show that our method is more useful than gradient based methods in realistic scenarios where the number of shots on a quantum device is limited. In this latter case we demonstrate that the LSTM based optimizer is able to evolve a significantly better optimization trajectory than a gradient based algorithm.

### _Experiments on Machine Learning Datasets_

We first present our experiments on a pattern classification task using an instance of a layered ansatz as shown in Figure 3. The first layer in the ansatz embeds the input data \(\boldsymbol{x}\) into the corresponding angles of an RX gate, where \(RX(x_{j})=e^{-ix_{j}\sigma_{x}/2}\) and \(\sigma_{x}\) is the Pauli-X matrix. The subsequent layers apply a parameterized RY rotation and are entangled using the CZ gate. Since the number of input qubits is equal to the dimensionality of the data, we consider three datasets with increasing dimensionality - Gaussian, Spheres and Iris. The Gaussian dataset is a synthetic dataset where two-dimensional clusters are instantiated by drawing samples from a multivariate Gaussian distribution with given parameters. Similarly, the Spheres dataset is a collection of 3-d points which form concentric spheres with one sphere enveloping the other. Finally, we consider a truncated Iris dataset that consists of only two classes and four features that describe a particular species. In all these datasets, the labels are encoded into \(\{-1,+1\}\) and the optimization task is to find variational parameters that minimize Equation 2 with the Pauli-Z gate being the observable. In the experiments we do not apply any pre-processing or post-processing on the input data. We benchmark our LSTM optimizer against two commonly used gradient based algorithms - ADAM [11] and Gradient Descent. For the LSTM optimizer, the ansatz is run for 50 iterations while for gradient based algorithms we run it for 25 iterations. We then profile the cost obtained by the corresponding algorithms against the number of circuit evaluations. The learning rate for gradient based algorithms is set to \(10^{-2}\) and \(\alpha=0.1\) for the LSTM optimizer. Figure 2 shows the results of our experiments on the three datasets. The top (bottom) rows show the performance without (with) replay buffer sampling. In all cases, the LSTM optimizer is able to find parameters that minimize the cost function in far fewer circuit evaluations than gradient based optimizers. This result is an empirical validation of Theorem V.1 and intuitively makes sense since parameter shift rules evaluate the same circuit twice per parameter component to estimate the gradient at a given timestep. Figure 2(a) shows a phenomenon unique to LSTM based optimizers.
After finding a good descent direction, the LSTM based optimizers tend to over-optimize and thus oscillate about some fixed point. This "sinusoidal" oscillation is hard to control in optimization problems since it is not easy to guess the stationary points of multivariate objective functions. Our replay buffer sampling strategy aims to control this phenomenon in a more principled manner, and its effects are visible in Figure 2(d), where the parameter mixing and decay lead to a more stable convergence. The effects are also visible in Figures 2(e) and 2(f), where the cost function does not diverge after reaching a minimum point.

### _Experiments with Limited Shots_

In the experiments on the machine learning datasets we simulated the performance of the optimization algorithms assuming that the number of available shots on the device was infinite. This assumption is however not realistic since most NISQ devices have a limited number of shots for measurement. Hence, we measure the performance of the LSTM optimizer in the setting where the number of shots is highly limited. Figure 4 shows the results of our experiments on a simulated quantum device with only 100 shots for measurement. We _a priori_ expect the stochasticity in the sampling to manifest in the optimization process and affect the overall quality of minima for both gradient free and gradient based optimizers. In the former case, the noisy measurement of cost functions with given parameter values makes it hard for the LSTM to suggest "good" updates, and in the latter case the noisy measurements of cost functions are coupled with a noisy estimation of the gradient. The noisy estimation of the gradient in this case is not equivalent to stochastic averaging over mini-batches in classical deep learning algorithms and hence does not guarantee a better convergence rate. In the figure, we see that the LSTM based optimizer is able to obtain a lower minimum in at least two datasets and is able to find a minimum faster than gradient based algorithms in all datasets. This demonstrates that stochastic measurement of the gradient can lead to slower and sub-optimal convergence. We further demonstrate the performance of the QNN on these datasets with 1000 shots in the appendix. To conclude, in a limited shot setting a gradient free algorithm like ours can be useful as a drop-in replacement for a gradient based optimization algorithm.

### _Comparison Against Other Gradient Free Approaches_

We now benchmark our algorithm against another well known gradient-free algorithm - the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm [23], which approximates gradients by taking finite differences between perturbed parameter vectors. The perturbations are generated stochastically. We perform experiments with a four qubit, five layer variational circuit whose minimum parameters correspond to the minima of the following cost function: \[C(\boldsymbol{\theta})=\bigotimes_{i=1}^{n}\langle\psi(\boldsymbol{\theta})|\sigma_{z}(i)|\psi(\boldsymbol{\theta})\rangle \tag{10}\] where \(|\psi(\boldsymbol{\theta})\rangle\) is a variational circuit consisting of strongly entangling layers, i.e., rotation gates with all-to-all entanglement, and \(\sigma_{z}(i)\) is the Pauli-Z matrix acting on the \(i^{th}\) qubit as the observable. The results of our experiments are shown in Figure 5. We can see that the LSTM optimizer is able to find a better quality minimum than SPSA as well. Moreover, SPSA makes an extra call to the circuit per optimization step, making it slightly more expensive than the LSTM optimizer.
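For context, a textbook SPSA step looks as follows (our sketch with standard gain schedules; the paper does not list its SPSA hyperparameters). Each iteration draws a random \(\pm 1\) perturbation and uses two cost evaluations to form a stochastic gradient estimate:

```python
import numpy as np

def spsa_step(cost, theta, k, a=0.1, c=0.1, A=10, alpha=0.602, gamma=0.101):
    """One SPSA iteration: two cost evaluations along a random +/-1
    perturbation yield a stochastic gradient estimate (Spall's schedules)."""
    ak = a / (k + 1 + A) ** alpha          # decaying step size
    ck = c / (k + 1) ** gamma              # decaying perturbation size
    delta = np.random.choice([-1.0, 1.0], size=theta.shape)
    # since delta_i = +/-1, dividing by delta_i equals multiplying by it
    ghat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck) * delta
    return theta - ak * ghat
```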
Fig. 3: The quantum circuit used in our study. The \(R_{Y}\) gates are parameterized by variational parameters. The \(R_{X}\) gates applied to each qubit constitute \(\boldsymbol{U}_{0}\) in Equation 1. Fig. 2: Performance of the LSTM-based meta-optimizer on the binary classification task for three datasets. All parameters are initialized using the normal distribution. The top row shows the performance of the algorithms without any replay buffer sampling and the bottom row shows the performance with replay buffer sampling. In both cases, the LSTM based optimizer is able to achieve a lower cost in significantly fewer circuit evaluations. We conclude that our gradient free algorithm can help VQCs scale to larger and more complex problem instances in the future.

### _Time Profiling Results_

In our earlier experiments we noticed that the LSTM optimizer converges to a minimum in fewer circuit evaluations than other methods. This implied that our method could be used to scale to data with a larger number of points. To test this hypothesis, we perform experiments for a pattern classification task by generating \(N\) samples from a Gaussian distribution of finite mean and covariance. We benchmarked the time per epoch (i.e., the time it takes to go through an entire dataset) for our method against Adam, the Gradient Descent optimizer and RMSProp [10]. The results of our experiment are shown in Figure 6. We vary the sample size \(N=\{100,200,400,600,800\}\) and measure the scaling of the time per epoch for the optimizers. It is clear from the figure that the LSTM optimizer is able to scale to larger data instances without a significant increase in the time per epoch. In fact, our method is nearly _three_ times faster than any gradient based method on the same dataset. This result is a positive indication that the development of novel gradient free methods is a promising research direction and that improvements may lead to VQCs being able to handle data at the scale which is currently being handled by classical algorithms.

## VII Related Work

Meta optimization is an actively researched area in the field of deep learning. Many different meta-optimization algorithms have been proposed. Notably, [12] propose an LSTM based meta-optimizer that accepts the cost function gradient and the previous time parameters as inputs to an LSTM network and recovers new parameters in the form of the LSTM output, while Ravi _et al._[24] embed the parameters of a neural network into the cell state of the LSTM and the gradient as the state of the candidate cell state. The output is then a non-linear combination of these two embeddings in the LSTM cell. A reinforcement learning perspective is considered by [13, 25], where a guided policy search algorithm is used to find new parameters given information from previous time steps (over a finite horizon). Fig. 4: Results of experiments with a limited number of shots. In the limited shot setting, the LSTM based optimizer is able to minimize the cost function more effectively than gradient based approaches. Fig. 5: Performance of the LSTM based optimizer benchmarked against the SPSA algorithm. For the same cost function, SPSA makes an extra call to the quantum circuit, which results in a larger number of circuit evaluations. The LSTM based optimizer outperforms SPSA in terms of the cost function minimum. Fig. 6: Time profiling results for different sizes of Gaussian datasets.
Another direction in meta-optimization is considered by [26] in the form of a neural architecture search over the input space formed by the symbols in various update rules. They evaluate the viability of different optimization algorithms resulting from different combinations of the symbols and analyze the efficacy of the best performing combination. In the quantum case, there has been sparse but consistent work towards meta-optimization as well. The motivation for these works is exploring solutions to problems in quantum chemistry (e.g., finding the lowest energy Hamiltonian for a given system) and graph optimization (e.g., finding the parameters corresponding to the max-cut problem in QAOA). In [17], the authors propose a strategy of learning the best initial parameters using the LSTM setup of [12]. These initial parameters are then used in a quantum circuit to avoid barren plateaus and are optimized using conventional gradient based algorithms. Wilson _et al._[16] propose a setup similar to [12] and utilize similar inputs to the LSTM based meta-optimizer. Their assumption is based on the cheapness of gradient evaluation on the quantum circuit, which unfortunately for NISQ devices is not the case. Another work by [27] proposes a reinforcement learning based approach in a Quantum Approximate Optimization Algorithm (QAOA) setting by trying to learn a policy that suggests optimal parameters for a max-cut problem with two clusters, and SVM/SVR for predicting the circuit depth [28]. Their proposed method leverages the PPO [29] algorithm for estimating the optimal policy. Our work can also be interpreted as a variant of a reinforcement learning method where we learn the policy directly instead of just suggesting the mean of a Gaussian policy. Moreover, our method makes no presumption on the _size_ of the action space, unlike this work where the action space is limited to just two parameters. To the best of our knowledge, our work is the first to consider a gradient free meta-optimization algorithm with significantly different inputs and update strategies. Table I summarizes the existing meta-optimization methods in quantum computing and highlights the differences from our method.

## VIII Conclusion

We have proposed a gradient free meta-optimization algorithm for quantum neural networks that can potentially scale up to larger dataset sizes in a reasonable amount of time. Experiments on different datasets show that our algorithm achieves a quality comparable to classical optimization algorithms while significantly outperforming them in terms of computation time. We believe that our method can be useful in exploring the answers to several open questions in the theory of variational quantum algorithms. For instance, the barren plateau problem [30] is a notorious problem that occurs in randomly parametrized circuits even when their depth is shallow. It has been shown that the initializing distributions of parameters play a key role in preventing barren plateaus [31]. It would be interesting to explore if meta-optimizers can suggest parameters that prevent the occurrence of barren plateaus. In a future work, we would also like to study the generalization performance of QNNs when trained by meta-optimizers.

## IX Data Availability

Data and code are available upon reasonable request from the authors.
2310.12846
Physical Information Neural Networks for Solving High-index Differential-algebraic Equation Systems Based on Radau Methods
As is well known, differential algebraic equations (DAEs), which are able to describe dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems and control theory. In practical physical modeling within these domains, the systems often generate high-index DAEs. Classical implicit numerical methods typically result in varying order reduction of numerical accuracy when solving high-index systems. Recently, the physics-informed neural network (PINN) has gained attention for solving DAE systems. However, it faces challenges like the inability to directly solve high-index systems, lower predictive accuracy, and weaker generalization capabilities. In this paper, we propose a PINN computational framework, combining the Radau IIA numerical method with an attention-based neural network structure, to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments with two classical high-index systems as illustrative examples, investigating how different orders of the Radau IIA method affect the accuracy of neural network solutions. The experimental results demonstrate that the PINN based on a 5th-order Radau IIA method achieves the highest level of system accuracy. Specifically, the absolute errors for all differential variables remain as low as $10^{-6}$, and the absolute errors for algebraic variables are maintained at $10^{-5}$, surpassing the results found in existing literature. Therefore, our method exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or challenging high-dimensional partial differential algebraic equation systems.
Jiasheng Chen, Juan Tang, Ming Yan, Shuai Lai, Kun Liang, Jianguang Lu, Wenqiang Yang
2023-10-19T15:57:10Z
http://arxiv.org/abs/2310.12846v1
Physical Information Neural Networks for Solving High-index Differential-algebraic Equation Systems Based on Radau Methods ###### Abstract As is well known, differential algebraic equations (DAEs), which are able to describe dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems and control theory. In practical physical modeling within these domains, the systems often generate high-index DAEs. Classical implicit numerical methods typically result in varying order reduction of numerical accuracy when solving high-index systems. Recently, the physics-informed neural network (PINN) has gained attention for solving DAE systems. However, it faces challenges like the inability to directly solve high-index systems, lower predictive accuracy, and weaker generalization capabilities. In this paper, we propose a PINN computational framework, combining the Radau IIA numerical method with an attention-based neural network structure, to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments with two classical high-index systems as illustrative examples, investigating how different orders of the Radau IIA method affect the accuracy of neural network solutions. The experimental results demonstrate that the PINN based on a 5th-order Radau IIA method achieves the highest level of system accuracy. Specifically, the absolute errors for all differential variables remain as low as \(10^{-6}\), and the absolute errors for algebraic variables are maintained at \(10^{-5}\), surpassing the results found in existing literature. Therefore, our method exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or challenging high-dimensional partial differential algebraic equation systems. keywords: Differential algebraic equation, Radau IIA method, Physics-informed neural network, Domain decomposition + Footnote †: journal:

## 1 Introduction

The concept of differential algebraic equations (DAEs) was formally proposed by Gear in the study of network analysis and continuous system simulation problems.[1] Petzold made it explicit through her study of numerical methods that DAEs are not ordinary differential equations (ODEs).[2] DAE systems are composed of coupled ODE systems and algebraic equation systems with physical significance. These systems encompass both differential and algebraic variables, and their system form is more generalized compared to traditional ODE systems. DAEs have gained significant attention since their inception, as they can accurately describe systems that some ODEs cannot represent. They have found extensive applications in various fields, including fluid dynamics, multi-body dynamics, electronic circuits, mechanical systems, control theory, and chemical engineering. In different developmental periods and research fields, DAEs are also known as singular systems, general systems, descriptor systems, or constrained systems, among other names. They often exhibit various structural forms, such as linear DAEs, nonlinear DAEs, semi-explicit DAEs, implicit DAEs, and Hessenberg-type DAEs.
Fortunately, in practical physical modeling, most of the system models obtained are either low-index DAEs or high-index (\(\geq 2\)) Hessenberg-type DAEs.[3] The index of DAEs measures the 'distance' between DAEs and ODEs. Generally, a higher index implies greater difficulty in transforming DAEs into ODEs or in directly solving DAEs using ODE numerical methods. Traditional numerical methods for solving DAE systems include implicit Runge-Kutta methods[4], BDF methods[5], pseudospectral methods[6], the Adomian decomposition method[7], exponential integrators[8], generalized-\(\alpha\) methods[9], and Lie group methods[10; 11; 12]. It's worth noting that these direct numerical methods can solve DAEs with an index of 1. However, for high-index DAE systems, these methods are only applicable to a certain class of DAEs and may result in varying order reduction of numerical accuracy. With the rapid advancement of neural network technology and hardware resources, neural networks are demonstrating increasingly powerful capabilities. Compared to traditional numerical computing methods, neural networks offer several advantages, including strong generalization, fault tolerance, and the ability for parallel computation. In 1998, Lagaris et al. [13] approximated solutions to ODE or PDE problems by constructing parameterized trial functions. These trial functions consist of two parts: one part satisfies initial conditions or boundary conditions and does not contain trainable parameters, while the other part is a simple feed-forward neural network with trainable parameters. In 2019, Raissi et al. [14] introduced an important technique known as the Physics-informed Neural Network (PINN) for the numerical approximation of partial differential equation (PDE) problems. The PINN loss function includes not only initial or boundary conditions that reflect physical properties but also a residual term at selected points in the time-space domain where the PDEs hold. It's worth noting that PINN is a data-driven approach that doesn't require prior knowledge of the analytical form of the solution; instead, it learns the solution from data. Various variants of PINN have been proposed based on different collocation methods, such as variational hp-VPINN [15] and conservative PINN (CPINN) [16]. Additionally, PINN has been widely applied to solve problems in various fields, including fluid dynamics [17; 18], seismic wave prediction [19], and optical problems [20]. In recent years, influenced by these methods, many researchers have attempted to construct neural network models from different perspectives to solve various types of DAE systems. For Hessenberg-DAEs with control variables and an index of 3, Kozlov and Tiumentsev [21] achieved the implementation of the BDF method using a semi-empirical neural network model. Zhao Yang et al. [22] constructed a single-layer feed-forward neural network (FFNN) to solve Hessenberg-type DAE systems. They augmented the loss function of their special Euler-Lagrange equation system with penalty terms for algebraic equations to avoid drifting in the results. Experimental results in their paper showed that the FFNN method with the Sigmoid activation function provided approximate analytical solutions close to the numerical solutions of corresponding Runge-Kutta methods, but they didn't provide further details about the method's accuracy. For linear DAE systems, Hongliang Liu et al.
[23] selected Jacobi polynomials as activation functions and constructed a single-hidden-layer feed-forward neural network (JNN). They determined the network parameters using the classical ELM algorithm. Through experimental comparisons with other approximation methods such as the Pade approximation, the ADM method, and Adams methods, they illustrated the feasibility and superiority of the JNN method. It's worth noting that the examples in the paper involve DAEs with an index of 1 or linear DAEs that have been reduced to index 1. For DAE systems with an index of 1, Moya et al. [24] proposed a neural network architecture called DAE-PINN based on the PINN method for solving DAE systems. This neural network model is a discrete-time model based on the implicit Runge-Kutta method, which can directly address most index-1 differential-algebraic equation problems. However, it cannot solve high-index DAE problems and suffers from low accuracy issues. To address the high-accuracy computation challenges in high-index DAE systems, we have combined the Radau IIA numerical method with a neural network structure based on attention mechanisms, and we propose a PINN computational framework based on the Radau method. Furthermore, we have improved the efficiency and accuracy of the solution by applying a strategy of domain decomposition. In Section 2, we briefly introduce the fundamental concepts of DAE systems, the Radau IIA numerical method, and the neural network structure based on attention mechanisms. Building upon this foundation, we provide a detailed construction of the PINN computing framework based on the Radau IIA method. Additionally, we employ a time-domain decomposition strategy for the neural network. Section 3 uses the neural network designed in this paper to solve two high-index DAE systems, and we analyze the solving accuracy of this neural network. Finally, we discuss and summarize the advantages, challenges, and potential avenues for improvement in the Radau-PINN architecture.

## 2 Scientific Machine Learning Methods

This section first sequentially introduces the basic concepts of DAEs and the classical Radau IIA numerical method. Then, we introduce a neural network structure based on attention mechanisms. Building upon this, we construct a PINN based on the Radau IIA method. Finally, we enhance the efficiency and accuracy of neural network solutions for DAE systems by utilizing the concept of domain decomposition.

### Radau IIA Method for DAE Systems

This article first provides a brief introduction to DAEs with an index of 2, with the specific form as follows: \[\begin{cases}y^{\prime}(t)=f(t,y(t),z(t)),\\ 0=g(t,y(t)),\end{cases} \tag{1}\] where \(y(t)\in\mathbb{R}^{n}\) is the differential function variable, \(z(t)\in\mathbb{R}^{m}\) is the algebraic function variable, \(t\in[t_{0},T]\), \(t_{0}\) is the initial time point, and \(y_{0}=y(t_{0})\) is the initial value. Both \(f(t,y,z)\in\mathbb{R}^{n}\) and \(g(t,y)\in\mathbb{R}^{m}\) are sufficiently smooth, and the Jacobian matrix \(g_{y}f_{z}\) is non-singular.
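For intuition, here is a small illustrative example of such an index-2 system (our example, not taken from the paper): take \(f(t,y,z)=z\) and \(g(t,y)=y-\sin t\), i.e., \[\begin{cases}y^{\prime}(t)=z(t),\\ 0=y(t)-\sin t.\end{cases}\] Differentiating the constraint once yields \(z(t)=\cos t\), and a second differentiation is required to obtain \(z^{\prime}(t)=-\sin t\), so the index is 2; moreover, \(g_{y}f_{z}=1\) is non-singular, as required.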
The Radau IIA method is a class of implicit Runge-Kutta methods, typically defined in the following general form: \[\xi_{i}=y_{n}+h\sum_{j=1}^{v}a_{i,j}f\left(\xi_{j},\zeta_{j}\right), \tag{2}\] \[g\left(\xi_{i},\zeta_{i}\right)=0, \tag{3}\] \[y_{n+1}=y_{n}+h\sum_{j=1}^{v}b_{j}f\left(\xi_{j},\zeta_{j}\right), \tag{4}\] \[g\left(y_{n+1},z_{n+1}\right)=0, \tag{5}\] where \(\xi_{i}=y\left(t_{n}+c_{i}h\right)\), \(\zeta_{i}=z\left(t_{n}+c_{i}h\right)\), \(h\) is the step size, \(n\) is the current step number, \(\left\{a_{ij},b_{j},c_{i}\right\}\) are parameters, and \(c_{i}=\sum\limits_{j=1}^{v}a_{ij}\), \(i,j=1,\cdots,v\). In Table 1, different sets of parameters lead to different implicit Runge-Kutta methods, such as the commonly used Gauss, Radau, and Lobatto methods. These parameters are determined using Gauss polynomials, Radau polynomials, and Lobatto polynomials, respectively. Among them, the Radau IIA method is a high-precision numerical method with excellent numerical stability. Therefore, in this paper, the Radau IIA method is chosen, and the parameters need to satisfy the following conditions: \[B(2v-1):\sum_{i=1}^{v}b_{i}c_{i}^{k-1}=\frac{1}{k},\ k=1,\cdots,2v-1, \tag{6}\] \[C(v):\sum_{j=1}^{v}a_{ij}c_{j}^{k-1}=\frac{c_{i}^{k}}{k},\ k=1,\cdots,v, \tag{7}\] \[D(v-1):\sum_{i=1}^{v}b_{i}c_{i}^{k-1}a_{ij}=\frac{b_{j}}{k}(1-c_{j}^{k}),\ k=1,\cdots,v-1, \tag{8}\] and \(c_{v}=1\), \(b_{j}=a_{vj}\), \(i,j=1,2,\ldots,v\).

\begin{table} \begin{tabular}{c|c c c c c} \(c_{1}\) & \(a_{11}\) & \(a_{12}\) & \(a_{13}\) & \(\cdots\) & \(a_{1v}\) \\ \(c_{2}\) & \(a_{21}\) & \(a_{22}\) & \(a_{23}\) & \(\cdots\) & \(a_{2v}\) \\ \(c_{3}\) & \(a_{31}\) & \(a_{32}\) & \(a_{33}\) & \(\cdots\) & \(a_{3v}\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) \\ \(c_{v}\) & \(a_{v1}\) & \(a_{v2}\) & \(a_{v3}\) & \(\cdots\) & \(a_{vv}\) \\ \hline & \(b_{1}\) & \(b_{2}\) & \(b_{3}\) & \(\cdots\) & \(b_{v}\) \\ \end{tabular} \end{table} Table 1: The parameter table of the \(v\)-stage implicit Runge-Kutta methods.

### Neural Network Structure Based on Attention Mechanism

Building upon the DAE-PINN structure, we employ adaptive activation functions to train a neural network structure based on an attention mechanism. The specifics are as follows. The improved neural network architecture based on attention mechanisms is primarily constructed using two Transformer-style networks, denoted \(U\) and \(R\), to build two stacked-layer networks, as illustrated in Figure 1. Both neural networks map the input variable \(X\) (the differential function variable \(y\)) to a high-dimensional feature space. Subsequently, each hidden layer forms new residual connections using element-wise multiplication operations, as expressed below: \[U=\phi(XW^{1}+b^{1}), \tag{9}\] \[R=\phi(XW^{2}+b^{2}), \tag{10}\] \[H^{(1)}=\phi(\eta\cdot l\cdot XW^{o,1}+b^{o,1}), \tag{11}\] \[M^{(k)}=\phi(H^{(k)}W^{o,k}+b^{o,k}), \tag{12}\] \[H^{(k+1)}=(1-M^{(k)})\odot U+M^{(k)}\odot R, \tag{13}\] \[P_{\theta}(X)=H^{(d+1)}W+b, \tag{14}\] where \(X\) represents the input vector of the neural network, \(W^{o,k}\) is the collection of weights for the \(o\)-th neuron in the \(k\)-th layer, \(b^{o,k}\) denotes the set of biases for the \(o\)-th neuron in the \(k\)-th layer, \(\phi\) is the activation function, \(\odot\) represents element-wise multiplication, \(d\) indicates the number of hidden layers (the depth of the neural network), \(P_{\theta}(X)\) is the final output vector of the neural network, \(\eta\) is a predetermined hyper-parameter that ensures the slope is greater than 1, and \(l\) is a parameter that can modify the slope of the activation function.

### PINN Based on Radau IIA Method

In this section, we use the discrete-time model of PINN as the foundation, incorporating a neural network structure based on attention mechanisms. We have constructed a PINN architecture based on the Radau IIA method, as illustrated in Figure 2.
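Before detailing this construction, a minimal NumPy sketch of the attention-based forward pass of Eqs. (9)-(14) may be helpful; the parameter container, the choice \(\phi=\tanh\), and all names are our illustrative assumptions, and the adaptive-slope factor \(\eta\cdot l\) is applied only in the first hidden layer, as in Eq. (11):

```python
import numpy as np

def phi(x):
    # activation function; the paper trains adaptive variants of it
    return np.tanh(x)

def modified_mlp_forward(X, params, eta=1.0, l=1.0):
    """Forward pass of the attention-based network of Eqs. (9)-(14).
    params: dict with encoder weights W1,b1,W2,b2, per-layer lists Wk,bk,
    and output weights W,b (shapes: input d_in -> hidden h -> output)."""
    U = phi(X @ params["W1"] + params["b1"])                    # Eq. (9)
    R = phi(X @ params["W2"] + params["b2"])                    # Eq. (10)
    H = phi(eta * l * X @ params["Wk"][0] + params["bk"][0])    # Eq. (11)
    for Wk, bk in zip(params["Wk"][1:], params["bk"][1:]):
        M = phi(H @ Wk + bk)                                    # Eq. (12)
        H = (1.0 - M) * U + M * R       # element-wise mixing,    Eq. (13)
    return H @ params["W"] + params["b"]                        # Eq. (14)
```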
Firstly, construct a neural network with multiple inputs and multiple outputs, where the inputs consist of the collection of differential variables \(y_{n}\), and the outputs are \[\xi_{1}^{\theta},\xi_{2}^{\theta},\ldots,\xi_{v}^{\theta},y_{n+1}^{\theta}; \tag{15}\] \[\zeta_{1}^{\theta},\zeta_{2}^{\theta},\ldots,\zeta_{v}^{\theta},z_{n+1}^{\theta}. \tag{16}\] The first \(v\) values \(\xi_{i}^{\theta}\) represent intermediate differential variables, and the first \(v\) values \(\zeta_{i}^{\theta}\) represent intermediate algebraic variables, where \(i=1,2,\ldots,v\). Secondly, based on the structure of the DAEs system with an index of 2 and the characteristics of the Radau IIA method, we further design neural network structures based on the attention mechanism for both the differential variable part and the algebraic variable part of the system. There are two specific design approaches: one assigns a single neural network to all differential variables and two neural networks to the algebraic variables, and the other assigns a separate neural network to each differential variable while keeping the algebraic variable part unchanged. In the case of the algebraic variable part, one of the neural networks is used to predict the first \(v\) values, while the other neural network is used to predict the \((v+1)\)-th value. Theoretically, the second approach (as shown in Figure 2) constructs a neural network for each individual differential or algebraic variable, thereby improving the overall model's accuracy and generalization. Through further testing, the second approach's training results are more precise than those of the first approach, consistent with the expected results. As a result, all subsequent experiments in this paper are implemented based on the second approach. Thirdly, based on the designed network structure, this paper constructs the loss function as follows: \[\mathcal{L}\left(\theta;\mathcal{T}\right)=W_{f}\mathcal{L}_{f}\left(\theta;\mathcal{T}\right)+W_{g}\mathcal{L}_{g}\left(\theta;\mathcal{T}\right)+W_{s}\mathcal{L}_{s}\left(\theta;\mathcal{T}\right), \tag{17}\] * \(\mathcal{L}_{f}\left(\theta;\mathcal{T}\right)\) is the loss related to the differential network and is expressed as follows: \[\frac{1}{N_{\mathcal{T}}\left(v+1\right)}\sum_{k=1}^{N_{\mathcal{T}}}\sum_{i=1}^{v+1}\left\|y_{n,k}-y_{n,k}^{i}\left(\theta\right)\right\|_{2}^{2}, \tag{18}\] \[y_{n,k}^{i}(\theta)=\xi_{i,k}^{\theta}-h\sum_{j=1}^{v}a_{i,j}f(\xi_{j,k}^{\theta},\zeta_{j,k}^{\theta}),\ i=1,\cdots,v, \tag{19}\] \[y_{n,k}^{v+1}(\theta)=y_{n+1,k}^{\theta}-h\sum_{i=1}^{v}b_{i}f(\xi_{i,k}^{\theta},\zeta_{i,k}^{\theta}); \tag{20}\] * \(\mathcal{L}_{g}\left(\theta;\mathcal{T}\right)\) is the loss associated with the algebraic network \begin{table} \begin{tabular}{c|c c c c c} \(c_{1}\) & \(a_{11}\) & \(a_{12}\) & \(a_{13}\) & \(\cdots\) & \(a_{1v}\) \\ \(c_{2}\) & \(a_{21}\) & \(a_{22}\) & \(a_{23}\) & \(\cdots\) & \(a_{2v}\) \\ \(c_{3}\) & \(a_{31}\) & \(a_{32}\) & \(a_{33}\) & \(\cdots\) & \(a_{3v}\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) \\ \(c_{v}\) & \(a_{v1}\) & \(a_{v2}\) & \(a_{v3}\) & \(\cdots\) & \(a_{vv}\) \\ \hline & \(b_{1}\) & \(b_{2}\) & \(b_{3}\) & \(\cdots\) & \(b_{v}\) \\ \end{tabular} \end{table} Table 1: The parameter table of the \(v\)-stage implicit Runge-Kutta methods.
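As an illustration, the sketch below instantiates the 2-stage Radau IIA tableau (\(v=2\), order 3) and evaluates the differential-part residuals of Eqs. (18)-(20) for a single sample; the batch average over \(N_{\mathcal{T}}\) and the weighting of Eq. (17) are omitted, and all helper names are ours.

```python
import torch

# 2-stage Radau IIA tableau (order 3); it satisfies c_v = 1 and b_j = a_{vj}.
A = torch.tensor([[5/12., -1/12.],
                  [3/4.,   1/4.]])
b = torch.tensor([3/4., 1/4.])
c = torch.tensor([1/3., 1.])

def differential_residual_loss(y_n, xi, zeta, y_next, f, h):
    """Sketch of L_f in Eqs. (18)-(20) for one sample.

    y_n: (n,) known state; xi: (v, n) stage differential predictions;
    zeta: (v, m) stage algebraic predictions; y_next: (n,) endpoint prediction;
    f: callable f(xi_j, zeta_j) -> (n,) (the time argument is suppressed).
    """
    v = xi.shape[0]
    F = torch.stack([f(xi[j], zeta[j]) for j in range(v)])   # stage slopes, (v, n)
    loss = 0.0
    for i in range(v):                                       # Eq. (19)
        r = xi[i] - h * (A[i].unsqueeze(1) * F).sum(0)
        loss = loss + ((y_n - r) ** 2).sum()
    r = y_next - h * (b.unsqueeze(1) * F).sum(0)             # Eq. (20)
    loss = loss + ((y_n - r) ** 2).sum()
    return loss / (v + 1)
```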
and can be expressed as follows: \[\frac{1}{N_{\mathcal{T}}\left(v+1\right)}\sum_{k=1}^{N_{\mathcal{T}}}\left(\sum_{i=1}^{v}\left\|g\left(\xi_{i,k}^{\theta},\zeta_{i,k}^{\theta}\right)\right\|_{2}^{2}+\left\|g\left(y_{n+1,k}^{\theta},z_{n+1,k}^{\theta}\right)\right\|_{2}^{2}\right); \tag{21}\] * \(\mathcal{L}_{s}\left(\theta;\mathcal{T}\right)\) is the loss related to the last value of the controlled algebraic variable and can be expressed as follows: \[\frac{1}{N_{\mathcal{T}}}\sum_{k=1}^{N_{\mathcal{T}}}\left(\left\|z_{n+1,k}^{\theta}-\zeta_{v,k}^{\theta}\right\|_{2}^{2}\right), \tag{22}\] where \(W_{f}\) represents the loss weight for the differential neural network, \(W_{g}\) is the weight for the algebraic neural network, and \(W_{s}\) signifies the weight for the control of the algebraic variable prediction neural network. The parameters \(a_{i,j}\) and \(b_{i}\) are specific to the Radau IIA method. \(\mathcal{T}\) is the set of training samples, \(N_{\mathcal{T}}\) is the number of training samples in the current batch, and \(\theta\) denotes the neural network parameters. Here, \(f\) represents the differential network, and \(g\) represents the algebraic network. \(y_{n,k}\) corresponds to the sample data of the model, \(\xi_{i,k}^{\theta}\) stands for the values of intermediate differential variables, and \(\zeta_{i,k}^{\theta}\) signifies the values of intermediate algebraic variables. Furthermore, \(y_{n,k}^{i}\left(\theta\right)\) represents the output values of the differential neural network. The notation \(\left\|\cdot\right\|_{2}^{2}\) refers to the square of the L2 norm, \(z_{n+1,k}^{\theta}\) represents the final output of the algebraic neural network, and \(\zeta_{v,k}^{\theta}\) denotes the penultimate output of the algebraic neural network. Finally, we use gradient descent to solve for the weights, biases, and other parameters of the PINN, \[\theta^{*}=\arg\min_{\theta}\mathcal{L}\left(\theta;\mathcal{T}\right). \tag{23}\] ### Time Domain Decomposition of Neural Networks In this section, based on an analysis of the existing limitations of the PINN architecture, we adopt a time-domain decomposition strategy for neural networks. One limitation of the PINN model is that it exhibits relatively low accuracy in predicting solutions. This is because the inherent inaccuracies involved in solving high-dimensional non-convex optimization problems can lead to local minima, making it challenging to achieve absolute errors below \(10^{-5}\). Another evident limitation is the high training cost [16].

Figure 1: Improved attention neural network structure.

Figure 2: The schematic diagram of PINN based on Radau IIA method.

Our proposed PINN model based on the Radau IIA method may encounter similar issues. Furthermore, the iterative format of the Radau IIA method does not fully exploit its high-precision advantages during training. To address these issues, we propose a time-domain decomposition strategy for neural networks, as illustrated in Figure 3. With this approach, we partition the original problem into segments, which not only enhances solution accuracy but also leverages the advantages of iterative training. In other words, the predicted values from the previous time segment can serve as input values for the subsequent segment. This means that knowing the data values at the initial point \(t_{0}\) for the first segment is sufficient to iteratively compute the solutions over the entire time domain. This approach significantly reduces the amount of required data.
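A schematic sketch of this segment-by-segment strategy follows; `make_model`, `train_segment`, and `predict_endpoint` are hypothetical helpers standing in for the per-segment Radau IIA PINN training described above.

```python
# Sketch of the time-domain decomposition of Sec. 2.4: the endpoint prediction
# of each segment seeds the next segment, so only y_0 at t_0 is required.
def march_in_time(y0, t0, T, h, make_model, train_segment, predict_endpoint):
    n_segments = int(round((T - t0) / h))
    y_n, solutions = y0, []
    for k in range(n_segments):
        model = make_model()                 # fresh per-segment network
        train_segment(model, y_n, h)         # minimise Eq. (17) on [t_k, t_k + h]
        y_n = predict_endpoint(model, y_n)   # y_{n+1}^theta feeds the next segment
        solutions.append(y_n)
    return solutions                         # v stage values per segment give (T - t0)/h * v points
```

If the initial value of every segment were available in advance, the per-segment trainings in this loop would be independent and could run in parallel, which is the training-time saving mentioned in the next paragraph.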
Specifically, only the data at the initial point \(t_{0}\) for a set of differential variables, denoted as \(y_{0}\), is needed. Using the time-domain decomposition structure, we can iteratively determine the desired values within the range \([t_{0},T]\). This involves information related to \((T-t_{0})/h\cdot v\) data points, which reduces the need for extensive training data. On the other hand, if we can obtain the initial values for each network at every time segment, parallel training of each neural network becomes possible, significantly reducing the model training time.

Figure 3: The time domain decomposition of neural networks.

## 3 Numerical Experiments In this section, we apply the PINN based on the Radau IIA method to solve two high-index DAEs systems separately and further investigate the influence of the order of the Radau IIA method on the solution results. The experiments were conducted on a Windows 10 operating system with an Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz processor. We used Python 3.9 and coded the neural network architecture using PyTorch 1.12.1, the GPU version. Additionally, this paper involves two formulas to measure the accuracy of the experiments. One is the commonly used Absolute Error (AE) formula, defined as \(AE=|y_{true}-y_{pred}|\), which reflects the magnitude of the deviation between the neural network's predicted solution and the true solution. The other metric is the Mean Absolute Error (MAE) formula, defined as \(MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{true}^{i}-y_{pred}^{i}|\), used to assess the differences in accuracy among different orders of the Radau IIA method. ### Hessenberg-type DAEs System In this section, we explore a classical Hessenberg-type DAE system with an index of 2 that possesses an exact analytical solution [12], as follows: \[\begin{cases}y_{1}^{\prime}(t)=(y_{3}(t)y_{4}(t)+y_{1}(t)y_{2}(t))y_{5}(t),\\ y_{2}^{\prime}(t)=-y_{3}(t)y_{4}(t)y_{2}(t)^{2}y_{5}(t),\\ y_{3}^{\prime}(t)=2y_{3}(t)y_{4}(t)y_{1}(t)y_{2}(t),\\ y_{4}^{\prime}(t)=-y_{3}(t)y_{4}(t)y_{2}(t)^{2},\\ 0=y_{1}(t)y_{4}(t)-y_{2}(t)y_{3}(t),\end{cases} \tag{24}\] where \(t\in[0,1]\), and the initial values are \(y_{0}=(1,1,1,1,1)\). The functions \(y_{1}(t),y_{2}(t),y_{3}(t),y_{4}(t)\) represent differential variables, while \(y_{5}(t)\) is an algebraic variable. The system's exact solution expressions are \(y_{1}(t)=e^{2t}\), \(y_{2}(t)=e^{-t}\), \(y_{3}(t)=e^{2t}\), \(y_{4}(t)=e^{-t}\), and \(y_{5}(t)=e^{t}\). Firstly, we consider the impact of different orders of Radau IIA methods, including 3rd, 5th, 9th, and 13th orders (corresponding to \(v=2,3,5,7\)), on the precision of neural network solutions. Secondly, we explore the influence of activation functions on the PINN. Common activation functions for hidden layers include Sigmoid, TanH, Sin, and ReLU, among others. When solving smooth, continuous systems, ReLU is generally not chosen; instead, Sigmoid, TanH, or Sin activation functions are preferred. In the experiments, _Sigmoid_ resulted in better approximate solutions. Within this neural network framework, the initial values of the differential variables, namely \(y_{1}(t),y_{2}(t)\), \(y_{3}(t)\), and \(y_{4}(t)\) for each time segment, are used as a dataset for training. The step size \(h\) is 0.05, which means that each time interval has a length of 0.05. Each network model in every time segment comprises 5 hidden layers, with each hidden layer containing 100 neurons.
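To make the setup concrete, the sketch below encodes system (24) as printed, verifies that the exact solution satisfies the algebraic constraint, and evaluates the AE and MAE metrics on a stand-in prediction; the perturbation magnitude is arbitrary.

```python
import numpy as np

def f24(y):
    """Right-hand side of the differential part of system (24), as printed."""
    y1, y2, y3, y4, y5 = y
    return np.array([(y3 * y4 + y1 * y2) * y5,
                     -y3 * y4 * y2 ** 2 * y5,
                     2 * y3 * y4 * y1 * y2,
                     -y3 * y4 * y2 ** 2])

def g24(y):
    """Algebraic constraint 0 = y1*y4 - y2*y3."""
    return y[0] * y[3] - y[1] * y[2]

t = np.linspace(0.0, 1.0, 21)                              # step size h = 0.05
y_true = np.stack([np.exp(2 * t), np.exp(-t), np.exp(2 * t), np.exp(-t), np.exp(t)])
assert np.allclose(g24(y_true), 0.0)                       # exact solution satisfies the constraint

y_pred = y_true + 1e-6 * np.random.randn(*y_true.shape)    # stand-in for a network prediction
AE = np.abs(y_true - y_pred)                               # absolute error, per variable and time
MAE = AE.mean(axis=1)                                      # mean absolute error per variable
print(MAE)
```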
Sigmoid is used as the activation function, and the Adam optimizer is applied for 100,000 iterations. The experimental results within the time interval of 0 to 1 are presented in Figure 4.

Figure 4: The mean absolute errors for solving Hessenberg-DAEs systems using the PINN based on Radau IIA methods of orders 3, 5, 9, and 13. The blue curves represent the mean absolute errors for orders 3 and 13 on the left Y-axis, while the red curves correspond to orders 5 and 9 on the right Y-axis.

From Figure 4, it is evident that the accuracy of the mean absolute errors for the 3rd and 13th-order Radau IIA methods corresponds to the blue Y-axis, while the accuracy of the average absolute errors for the 5th and 9th-order Radau methods corresponds to the red Y-axis. For all the differential function variables, the 3rd and 13th-order Radau IIA methods exhibit significantly higher average absolute errors compared to the 5th and 9th-order methods. For the algebraic variable \(y_{5}\), the 13th-order Radau IIA method has notably higher average absolute errors than the 3rd, 5th, and 9th-order methods. Additionally, we further observe that for all differential function variables on the red Y-axis, the 9th-order Radau IIA method's overall trend in average absolute errors is significantly higher than the 5th-order method. For the algebraic variable \(y_{5}\), the 9th-order Radau IIA method exhibits notably higher average absolute errors than the 5th-order method. In other words, the 5th-order Radau IIA-based PINN achieves the highest precision in terms of average absolute errors. The absolute error results obtained using the 5th-order method are shown in Figure 5. The accuracy of the absolute errors for \(y_{1}(t)\), \(y_{3}(t)\), and \(y_{5}(t)\) corresponds to the blue Y-axis, while the accuracy of the absolute errors for \(y_{2}(t)\) and \(y_{4}(t)\) corresponds to the red Y-axis. From the figure, it is evident that the neural network's predicted values for all four differential variables have their lowest precision of absolute errors maintained at the order of \(10^{-6}\), while the lowest precision of absolute errors for the algebraic variable is kept at \(10^{-6}\). The experimental results suggest that the neural network's predicted solutions have reached a high level of accuracy. For the neural network structure designed in this paper, the predicted values of the differential variables can be used as the initial values for the next time step's network input dataset. The precision of the differential variables can affect the results of the next time step's network. In this context, the precision of the differential variables \(y_{1}\) and \(y_{3}\) is already at the order of \(10^{-6}\), and the precision of the differential variables \(y_{2}\) and \(y_{4}\) is at the order of \(10^{-7}\), which will not significantly affect the precision of the next time step. ### DAE System of the Pendulum Model In this section, we study the classical pendulum DAEs system with an index of 2, as follows: \[\left\{\begin{array}{l}y_{1}^{\prime}(t)= y_{3}(t),\\ y_{2}^{\prime}(t)= y_{4}(t),\\ y_{3}^{\prime}(t)=- y_{1}(t)y_{5}(t),\\ my_{4}^{\prime}(t)=- y_{2}(t)y_{5}(t)-\lambda,\\ 0= y_{1}(t)y_{3}(t)+y_{2}(t)y_{4}(t),\end{array}\right. \tag{25}\] where \(t\in[0,1]\), and the parameters \(m\) and \(\lambda\) are variable parameters, both set to 1 in the experiments of this section. The initial values are \(y_{0}=(1,0,0,1,1)\).
In this context, \(y_{1}(t)\), \(y_{2}(t)\), \(y_{3}(t)\), and \(y_{4}(t)\) are differential function variables, while \(y_{5}(t)\) is an algebraic function variable. This DAEs system does not have an exact analytical expression. In this paper, we directly solve the reduced inner ODEs of this system using high-precision ODE solvers from the Python scientific computing library _SciPy_ and compare the obtained approximate solution with the predicted values from the neural network. Similarly, we consider the impact of different orders (3, 5, 9, 13, corresponding to \(v=2,3,5,7\)) of the Radau IIA methods on the accuracy of the neural network's solutions. Secondly, we explore the effect of activation functions on the PINN. In this experiment, the _Sin_ activation function provides a better approximation. To maintain consistency in the numerical experiments, other network structural information is consistent with the experiments in the previous section. The results obtained are shown in Figure 6. From Figure 6, we can observe that the accuracy of the mean absolute errors for the 3rd and 13th-order Radau IIA methods corresponds to the blue Y-axis, while the accuracy of the average absolute errors for the 5th and 9th-order Radau methods corresponds to the red Y-axis. For the differential function variables \(y_{1}(t)\), \(y_{3}(t)\), and \(y_{4}(t)\), the 3rd-order Radau IIA method has significantly higher average absolute errors than the 5th, 9th, and 13th-order methods. For the differential function variable \(y_{2}(t)\), the 13th-order Radau IIA method exhibits significantly higher average absolute errors than the 3rd, 5th, and 9th-order methods. For the algebraic variable \(y_{5}(t)\), the 3rd and 13th-order Radau IIA methods have significantly higher average absolute errors compared to the 5th and 9th-order methods. Additionally, we further observe that for all differential function variables on the red Y-axis, the 9th-order Radau IIA method's average absolute error overall trends similarly to the 5th-order method. For the algebraic variable \(y_{5}(t)\), the 9th-order Radau IIA method exhibits significantly higher average absolute errors in the later time regions compared to the 5th-order method.

Figure 5: The absolute errors of the Hessenberg-DAEs system solved by the PINN based on 5th-order Radau IIA. The blue curves represent the absolute errors of \(y_{1}\), \(y_{3}\) and \(y_{5}\) on the left Y-axis, while the red curves correspond to \(y_{2}\) and \(y_{4}\) on the right Y-axis.

In other words, a PINN based on the 5th-order Radau IIA method achieves the highest precision in terms of average absolute errors. The absolute error results obtained using the 5th-order method are shown in Figure 7. The accuracy of the absolute errors for \(y_{1}(t)\), \(y_{2}(t)\), \(y_{3}(t)\), and \(y_{4}(t)\) corresponds to the blue Y-axis, while the accuracy of the absolute errors for \(y_{5}(t)\) corresponds to the red Y-axis. From the figure, we can see that the lowest precision of absolute errors for all four differential variables is maintained at \(10^{-7}\), while the lowest precision of absolute errors for the algebraic variable is kept at \(10^{-5}\). The experimental results suggest that the neural network's predicted solutions for the pendulum's DAEs system can also achieve high precision. ## 4 Summary and Conclusions DAE systems are widely employed in various domains, including fluid dynamics, multibody dynamics, and control theory.
In practical physical modeling, most DAE models are either low-index DAEs or high-index Hessenberg-type DAEs. Classical implicit numerical methods are suitable for a certain class of high-index DAEs, but they often lead to varying order reduction of numerical accuracy. Recently, a novel neural network method, DAE-PINN, has been developed for solving low-index DAEs. However, it cannot directly handle high-index systems. Therefore, this paper proposes a PINN-based approach using the Radau method to solve high-index DAEs systems. This method combines the strengths of the Radau IIA method with a neural network structure based on attention mechanisms and employs a time-domain decomposition strategy to enhance both efficiency and accuracy in solving these systems. In this paper, two high-index systems, namely Hessenberg-type DAEs and pendulum model DAEs, are studied as examples. The research takes into account the influence of different orders in the Radau IIA methods and the activation functions on the accuracy of neural network solutions. Generally, employing higher-order Radau IIA methods enhances the neural network's generalization capability. However, through comparative experiments with the two examples, it is found that the PINN based on the 5th-order Radau IIA method provides the highest accuracy in solving the systems. This conclusion is consistent with the notion that Radau-5 is a high-precision numerical method [3].

Figure 6: The mean absolute errors for solving the single pendulum DAEs system using the PINN based on Radau IIA methods of orders 3, 5, 9, and 13. The blue curves represent the mean absolute errors for orders 3 and 13 on the left Y-axis, while the red curves correspond to orders 5 and 9 on the right Y-axis.

Further experimental results indicate that in high-index systems, the absolute errors for all differential variables maintain a minimum precision of \(10^{-6}\), while the absolute errors for algebraic variables maintain a minimum precision of \(10^{-5}\). This method's numerical accuracy surpasses the corresponding results in the literature [22] and, to some extent, surpasses the accuracy achieved by the DAE-PINN method [24]. This demonstrates that our method can directly and accurately solve high-index DAEs systems, showcasing strong generalization capabilities and offering a viable approach for high-precision solutions to even higher-index DAEs or challenging systems of partial differential algebraic equations. Furthermore, we have maintained the depth and width of the neural networks as in DAE-PINN [24] and have not delved into a detailed study of their impact on the accuracy of our method, which we will investigate in our future work. ## Acknowledgements Project supported by the National Natural Science Foundation of China (Grant No. 12201144), the Guangdong Basic and Applied Basic Research Foundation of China (Grant No. 2020A1515110554), the Science and Technology Foundation of Guizhou Province (Grant No. QKHJCZK[2021]YB015) of China, and the Chongqing Talents Plan Youth Top-notch Project of China (Grant No. 2021000263).
2305.08316
SemiGNN-PPI: Self-Ensembling Multi-Graph Neural Network for Efficient and Generalizable Protein-Protein Interaction Prediction
Protein-protein interactions (PPIs) are crucial in various biological processes and their study has significant implications for drug development and disease diagnosis. Existing deep learning methods suffer from significant performance degradation under complex real-world scenarios due to various factors, e.g., label scarcity and domain shift. In this paper, we propose a self-ensembling multi-graph neural network (SemiGNN-PPI) that can effectively predict PPIs while being both efficient and generalizable. In SemiGNN-PPI, we not only model the protein correlations but also explore the label dependencies by constructing and processing multiple graphs from the perspectives of both features and labels in the graph learning process. We further marry GNN with Mean Teacher to effectively leverage unlabeled graph-structured PPI data for self-ensemble graph learning. We also design multiple graph consistency constraints to align the student and teacher graphs in the feature embedding space, enabling the student model to better learn from the teacher model by incorporating more relationships. Extensive experiments on PPI datasets of different scales with different evaluation settings demonstrate that SemiGNN-PPI outperforms state-of-the-art PPI prediction methods, particularly in challenging scenarios such as training with limited annotations and testing on unseen data.
Ziyuan Zhao, Peisheng Qian, Xulei Yang, Zeng Zeng, Cuntai Guan, Wai Leong Tam, Xiaoli Li
2023-05-15T03:06:44Z
http://arxiv.org/abs/2305.08316v1
SemiGNN-PPI: Self-Ensembling Multi-Graph Neural Network for Efficient and Generalizable Protein-Protein Interaction Prediction ###### Abstract Protein-protein interactions (PPIs) are crucial in various biological processes and their study has significant implications for drug development and disease diagnosis. Existing deep learning methods suffer from significant performance degradation under complex real-world scenarios due to various factors, _e.g._, label scarcity and domain shift. In this paper, we propose a self-ensembling multi-graph neural network (SemiGNN-PPI) that can effectively predict PPIs while being both efficient and generalizable. In SemiGNN-PPI, we not only model the protein correlations but also explore the label dependencies by constructing and processing multiple graphs from the perspectives of both features and labels in the graph learning process. We further marry GNN with Mean Teacher to effectively leverage unlabeled graph-structured PPI data for self-ensemble graph learning. We also design multiple graph consistency constraints to align the student and teacher graphs in the feature embedding space, enabling the student model to better learn from the teacher model by incorporating more relationships. Extensive experiments on PPI datasets of different scales with different evaluation settings demonstrate that SemiGNN-PPI outperforms state-of-the-art PPI prediction methods, particularly in challenging scenarios such as training with limited annotations and testing on unseen data. ## 1 Introduction Protein-protein interactions (PPIs) are central to various cellular functions and processes, such as signal transduction, cell-cycle progression, and metabolic pathways [1]. Therefore, the identification and characterization of PPIs are of great importance for understanding protein functions and disease occurrence, which can potentially facilitate therapeutic target identification [23] and novel drug design [21]. In past decades, high-throughput experimental methods, _e.g._, yeast two-hybrid screens (Y2H) [22] and mass spectrometric protein complex identification (MS-PCI) [15], have been developed to identify PPIs. Nevertheless, genome-scale experiments are expensive, tedious, and time-consuming while suffering from high error rates and low coverage [14]. As such, there is an urgent need to establish reliable computational methods to identify PPIs with high quality and accuracy. In recent years, a large variety of high-throughput computational approaches for PPI prediction have been proposed, which can be broadly divided into two groups: classic machine learning (ML)-based methods [1, 18, 19, 20, 21, 22] and deep learning (DL)-based methods [23, 24, 25, 26]. Compared to classic ML methods, DL algorithms are capable of processing complicated and large-scale data and extracting useful features automatically, achieving significant success in a diverse range of bioinformatics applications [19, 20], including PPI prediction [27]. Most existing DL-based methods treat interactions as independent instances, ignoring protein correlations. PPIs can be naturally formulated as graph networks with proteins and interactions represented as nodes and edges, respectively [20, 21]. To improve PPI prediction performance, recent works [20, 13] have been proposed to investigate the correlations between PPIs using various graph neural network (GNN) architectures [15, 22]. However, they are limited in that they ignore label dependencies for multi-type PPI prediction.
It has recently become common practice to employ Graph Convolutional Networks (GCNs) to capture label correlation in a wide range of multi-label tasks [23, 22]. Nevertheless, multi-label learning utilizing label graphs predominantly works in the visual domain and has yet to be extended to PPI prediction tasks. In general, a desired PPI prediction framework should be efficient, transferable, and generalizable, whereas two major bottlenecks deriving from imperfect datasets have hindered the development of such models. **Label scarcity:** Despite the tremendous progress in PPI research using various computational and experimental methods, many interactions still need to be annotated from experimental data. Consequently, only a small portion of labeled samples can be used for model training, which can be a significant bottleneck in obtaining robust and accurate PPI prediction models. **Domain shift:** Most existing methods are only developed and validated using in-distribution data (_i.e._, trainset-homologous testsets), suffering severe performance degradation when deployed to unseen data with different distributions (_i.e._, trainset-heterologous testsets). Although [11] design new evaluations to better reflect model generalization, giving instructive and consistent assessment across datasets, the domain shift issue still needs to be fully explored for PPI prediction. Therefore, how to deal with imperfect data for improving model efficiency and generalization remains a vital issue in PPI prediction. Recent studies [22, 23] show that self-ensemble methods with semi-supervised learning (SSL) [16, 17] have demonstrated effectiveness in addressing both label scarcity and domain shift. In this work, to tackle the above challenges and limitations, we propose an efficient and generalizable **PPI** prediction framework, referred to as **S**elf-**e**nsembling **m**ulti-**G**raph **N**eural **N**etwork (**SemiGNN-PPI**). Firstly, we propose leveraging graph structure to model protein correlations and label dependencies for multi-graph learning. Specifically, we learn inter-dependent classifiers to extract information from the label graph, which are then applied to the protein representations aggregated by neighbors in the protein graph for multi-type PPI prediction. Secondly, we propose combining GNN with Mean Teacher [17], a powerful SSL model, to explore unlabeled data for self-ensemble graph learning. In our framework, the student model learns to classify the labeled data accurately and also distills the knowledge beneath unlabeled data from the teacher model with multiple graph consistency constraints for improving the model performance under complex scenarios. To the best of our knowledge, this is the first study to explore efficient and generalizable multi-type PPI prediction. Precisely, the main contributions of the work can be summarized as follows: * For multi-type PPI prediction, we first investigate the limitations and challenges of existing methods under complex but realistic scenarios, and then propose an effective **S**elf-**e**nsembling **m**ulti-**G**raph **N**eural **N**etwork-based **PPI** prediction (**SemiGNN-PPI**) framework for improving model efficiency and generalization. * In SemiGNN-PPI, we construct multiple graphs to learn correlations between proteins and label dependencies simultaneously. We further advance GNN with Mean Teacher to effectively utilize unlabeled data by consistency regularization with multiple constraints.
* Extensive experiments on three PPI datasets with different settings demonstrate that SemiGNN-PPI outperforms other state-of-the-art methods for multi-label PPI prediction under various challenging scenarios. ## 2 Related Work **Protein-Protein Interaction Prediction.** Amino acid sequence-based methods have received considerable attention in PPI prediction. Early works leverage machine learning (ML) techniques [1, 1, 10, 11, 12] to map pairs of handcrafted sequence features of proteins to interaction types. With the advent of deep learning (DL), more recent works have utilized deep neural networks [13, 14, 15, 16] to automatically extract features from protein sequences for enhancing feature representation. Furthermore, the latest works consider protein correlations and utilize graph neural networks (GNNs) to model graph-structured PPI data [13, 12, 11]. However, it is essential to explore label dependencies for improving the model performance, which has long been ignored for multi-type PPI prediction. Moreover, the generalization and efficiency problems for PPI prediction are still under-explored under complex scenarios, such as data scarcity and distribution shift. **Multi-Label Learning.** Multi-label learning (MLL) addresses the problem of assigning multiple labels to a single instance. It has been utilized successfully in numerous fields, _e.g._, computer vision [13, 14]. Traditional MLL methods typically train independent classifiers for all labels but fail to consider the potential label interdependence, leading to suboptimal performance. Recent trends in MLL incorporate deep learning to capture the label dependencies [15, 16]. For example, CNN-RNN [15] leverages recurrent neural networks (RNNs) to transform the label vectors into an embedded space to learn label correlations implicitly. More recently, graph-based MLL methods have attracted great attention from researchers [14, 15]. In particular, ML-GCN [14] successfully applies the Graph Convolutional Network (GCN) by constructing a directed graph over object labels to explicitly model the label dependencies adaptively. In this regard, we propose to explore correlations between PPI types with a GCN on the structured label graph for more accurate PPI prediction. **Learning from Imperfect Data.** In recent years, deep learning has made tremendous progress in numerous domains, _e.g._, computer vision and bioinformatics. However, the applicability of deep learning is limited by its heavy reliance on training data. We rarely have a perfect dataset for model training [17, 18, 19], especially in biomedical imaging and bioinformatics [23, 16, 15]. The commonly encountered challenges in PPI prediction include label scarcity, where only limited annotations are available for training (semi-supervised learning, SSL), and domain shift, where unseen data (target domain) with different distributions from training data (source domain) is used for testing (unsupervised domain adaptation, UDA). As a result, model efficiency and generalization would be heavily constrained, limiting wide real-world applications. Self-ensemble learning [16] is one of the most prevalent methods for SSL, which works by enforcing consistency in model predictions from different epochs with the network parameter average [14]. Recently, self-ensemble learning has been extended to visual domain adaptation tasks [1, 15, 16], achieving promising UDA performance.
Inspired by these observations, we advance GNN with self-ensemble learning to handle imperfect data for efficient and generalizable PPI prediction. ## 3 Methodology ### Task Definition Given a set of proteins \(P=\{p_{0},p_{1},...,p_{n}\}\) and a set of PPIs \(E=\{e_{ij}=\{p_{i},p_{j}\}|i\neq j,p_{i},p_{j}\in P,I(e_{ij})\in\{0,1\}\}\), where \(I(e_{ij})\) is a binary PPI indicator function that is \(1\) if the PPI between proteins \(p_{i}\) and \(p_{j}\) has been confirmed, and \(0\) otherwise, the types of PPI can be represented by the label space \(C=\{c_{0},c_{1},...,c_{t}\}\) with \(t\) different types of interactions, and the labels for a confirmed PPI \(e_{ij}\) can be represented as \(y_{ij}\subseteq C\). The goal of multi-type PPI learning is to learn a function \(f:e_{ij}\rightarrow\hat{y}_{ij}\) from the training set \(E^{s}_{train}\) such that for any PPI \(e_{ij}\in E^{s}_{test}\), \(\hat{y}_{ij}\) is the set of predicted labels for \(e_{ij}\). To investigate the efficiency and generalization under complex scenarios beyond the supervised learning setting, we introduce the settings of semi-supervised learning (SSL) and unsupervised domain adaptation (UDA). In the SSL setting, the training datasets consist of limited labeled data \(E^{l}_{train}\) and unlabeled data \(E^{u}_{train}\) due to label scarcity. In the UDA setting, the model trained on \(E^{s}_{train}\) is tested on the unseen data \(E^{t}_{test}\) with a different distribution. ### Overview Fig. 1 depicts the overview of our proposed SemiGNN-PPI framework. We first construct the multi-graph encoding (MGE) module to effectively leverage available labeled data, which includes a protein graph encoding (PGE) network for exploring protein relations and a label graph encoding (LGE) network for learning label dependencies. To exploit knowledge from unlabeled data, we build a teacher network with the same architecture as the student network. During teacher-student training, multiple graph consistency constraints at both node and edge levels are utilized to enhance knowledge distillation for self-ensemble multi-graph learning.

Figure 1: The overall framework of SemiGNN-PPI. First, we generate two augmented graph views with node and edge manipulations. Then, protein graphs and label graphs are fed into the multi-graph teacher-student network, which models both protein relations and label dependencies for self-ensemble learning. Simultaneously, to better capture fine-grained structural information, we align student and teacher feature embeddings by jointly optimizing multiple graph consistency constraints (node matching and edge matching).

### Multi-Graph Encoding **Protein-Graph Encoding.** Early works [15, 13] have demonstrated the effectiveness of graph neural networks (GNNs) on PPI prediction. Considering the correlation of PPIs, we use proteins as nodes and PPIs as edges to build the PPI graph \(G=(P,E)\). Then, the PPI prediction can be reformulated from \(f(e_{ij}|p_{i},p_{j},\theta)\rightarrow\hat{y}_{ij}\) to \(f(e_{ij}|G,\theta)\rightarrow\hat{y}_{ij}\). GNNs take the graph structure and sequence-based protein attributes as inputs to model high-level compact representations of the nodes (proteins), denoted by \(H\in\mathbb{R}^{|P|\times d}\), where \(h_{p}=H[p,:]\) is the latent representation of node \(p\), and \(d\) is the dimensionality of protein features. In general, GNNs follow a recursive neighborhood aggregation scheme to iteratively update the representation of each node by aggregating and transforming the representations of its neighboring nodes. After \(l\) iterations, the transformed feature of node \(p\) can be denoted as: \[h^{(l)}_{p}=\phi^{(l)}(h^{(l-1)}_{p},f^{(l)}(\{h^{(l-1)}_{u}:u\in\mathcal{N}_{k}(p)\})), \tag{1}\] where \(\mathcal{N}_{k}(p)\) denotes the set of \(k\)-hop neighbors of the node \(p\); \(f^{(l)}\) and \(\phi^{(l)}\) are an aggregation function and a combination function, respectively. Following the Graph Isomorphism Network (GIN) [13], we adopt the summation function to aggregate the representations of neighboring nodes and use multi-layer perceptrons (MLPs) to update the aggregated features. Then, the update rule of the hidden node features with a learnable parameter \(\epsilon\) in PGE is defined as: \[h_{p}^{(l)}=g^{l}((1+\epsilon^{l})\cdot h_{p}^{(l-1)}+\sum\nolimits_{u\in\mathcal{N}_{k}(p)}h_{u}^{(l-1)}). \tag{2}\]
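A minimal dense-adjacency sketch of the GIN-style update in Eq. (2) is given below; the two-layer MLP, the hidden width, and the toy graph are our assumptions for illustration.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Sketch of the GIN-style protein-graph update of Eq. (2)."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # h: (|P|, d) protein features; adj: (|P|, |P|) binary PPI adjacency.
        agg = adj @ h                             # summation over neighbours
        return self.mlp((1.0 + self.eps) * h + agg)

# usage on a toy 3-protein chain graph
h = torch.randn(3, 16)
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
out = GINLayer(16)(h, adj)                        # (3, 16) updated embeddings
```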
Following Graph Isomorphism Network (GIN) [13], we adopt the summation function to aggregate the representations of neighboring nodes and use the multi-layer perceptrons (MLPs) to update the aggregated features. Then, the update rule of the hidden node Figure 1: The overall framework of SemiGNN-PPI. First, we generate two augmented graph views with node and edge manipulations. Then, protein graphs and label graphs are fed into the multi-graph teacher-student network, which models both protein relations and label dependencies for self-ensemble learning. Simultaneously, to better capture fine-grained structural information, we align student and teacher feature embeddings by jointly optimizing multiple graph consistency constraints (node matching and edge matching). features with a learnable parameter \(\epsilon\) in PGE is defined as: \[h_{p}^{(l)}=g^{l}((1+\epsilon^{l})\cdot h_{p}^{(l-1)}+\sum\nolimits_{u\in\mathcal{ N}_{k}(p)}h_{u}^{(l-1)}). \tag{2}\] **Label-Graph Encoding.** In multi-label PPI prediction, correlations exist among different types of interactions, _i.e._, some PPI types may appear together frequently while others rarely appear together. Following [3], we model the interdependencies between different PPI types (labels) using a graph and learn inter-dependent classifiers with Graph Convolutional Network (GCN), which can be directly applied to protein features for multi-type PPI prediction. GCN aims to learn a function \(f(\cdot,\cdot)\) on the graph with \(t\) nodes. Each GCN layer can be formulated as follows: \[h_{c}^{(l+1)}=f(h_{c}^{(l)},A),A\in\mathbb{R}^{t\times t}, \tag{3}\] where \(h_{c}^{(l+1)}\in\mathbb{R}^{t\times d_{l}^{\prime}}\) and \(h_{c}^{(l)}\in\mathbb{R}^{t\times d_{l}}\) are the learned \(d_{l}^{\prime}\)-dimensional node features from current layer and the \(d_{l}\)-dimensional node features from previous layer, respectively. \(A\) is the corresponding correlation matrix. With the convolutional operation, \(f(\cdot,\cdot)\) can be further expressed as: \[h_{c}^{(l+1)}=\delta\left(\widehat{A}h_{c}^{(l)}W^{l}\right), \tag{4}\] where \(\delta(\cdot)\) is a non-linear function set as LeaklyReLU following [3], \(\widehat{A}\) is the normalized version of \(A\) and \(W^{l}\in\mathbb{R}^{d_{l}^{\prime}\times d_{l}}\) is a transformation matrix. We leverage stacked GCNs to learn inter-dependent classifiers \(W\). The first GCN layer takes word embeddings \(E_{l}\in\mathbb{R}^{|t|\times d_{l}}\) of labels and the correlation matrix \(A\in\mathbb{R}^{t\times t}\) as inputs. Considering that PPI type maps are semantic, we apply the BioWordVec model [12] pretrained on the biomedical corpus for generating word embeddings \(E_{l}\) of each PPI type to better capture their semantics. To construct the label correlation matrix \(A\), we compute the conditional probability of different labels within the training dataset. To avoid noises and over-smoothing, we binarize \(A\) with a threshold \(\tau\) and then re-weight it with a weight \(p\) to obtain \(\widehat{A}\). **Multi-Graph Based Classifier Learning.** By applying the learned classifiers \(W=\{w_{i}\}_{i=1}^{t}\) from label graph encoding (LGE) to the learned representations from protein graph encoding (PGE) for the PPI \(e_{ij}\), we can obtain the predicted scores \(\hat{y}_{ij}\), expressed as: \[\hat{y}_{ij}=W(h_{p_{i}}\cdot h_{p_{j}}). \tag{5}\] We use the traditional multi-label classification loss function to update the whole network in an end-to-end manner. 
This loss function can be written as: \[\mathcal{L}_{sup}=-\sum_{c=1}^{t}\left(y^{c}\log\left(\sigma\left(\hat{y}^{c}\right)\right)+(1-y^{c})\log\left(1-\sigma\left(\hat{y}^{c}\right)\right)\right),\] where \(\sigma(\cdot)\) is the sigmoid function. Our model learns the aggregated features by combining protein neighbors and models the label correlations by learning inter-dependent classifiers simultaneously to improve the model generalization. In multi-graph learning, the learned classifiers are expected to be neighborhood-aware at both feature and label levels. ### Self-ensemble Graph Learning To leverage unlabeled data, we adopt the mean teacher architecture for unsupervised learning, as shown in Fig. 1. We construct a teacher network \(f_{t}\) with the same architecture as the student network \(f_{s}\) based on self-ensembling [13]. Specifically, in each training iteration \(k\), we update the teacher model weights \(\theta^{\prime}\) with the exponential moving average (EMA) weights of the student model \(\theta\) by leveraging the momentum updating mechanism: \[\theta_{k}^{\prime}=m\theta_{k-1}^{\prime}+(1-m)\theta_{k}, \tag{6}\] where \(m\) is the momentum. During training, the student model is encouraged to be consistent with the teacher predictions for inputs with different augmentations. Because of the non-Euclidean graph structure, image augmentations such as crop and rotation cannot be directly applied to graphs. To facilitate self-ensemble graph learning, we construct two graph data augmentation methods at the edge and node levels, _i.e._, **Edge Manipulation** and **Node Manipulation**, to augment graph topological and attribute information [20]. **Edge Manipulation (EM):** To improve the robustness against connectivity variations, we randomly replace a certain percentage of edges in the input to the student and teacher models, since some edges (PPIs) between different nodes (proteins) may be unidentified or wrong in experimental procedures. Specifically, we follow an i.i.d. uniform distribution to randomly replace \(em_{s}\%\) and \(em_{t}\%\) of edges in the input to the student and the teacher, respectively. Different from [20], we replace each dropped edge by linking the node with one of its neighbor's neighboring nodes to maintain global structural information, _i.e._, node \(p_{s}\) with a dropped edge connecting to node \(p_{t}\) could be linked to \(p_{u}\in\{p_{u}|e_{ut}=1\}\). **Node Manipulation (NM):** To improve the robustness against missing attributes, we randomly remove \(nm_{s}\%\) and \(nm_{t}\%\) of node features, mask them with zeros, and feed them into the student and teacher models, respectively, so that the model learns to extract effective features even in the presence of missing attribute information. We construct two graph views with the augmentations above to feed the student and teacher networks separately, and encourage them to generate consistent predictions using the \(\ell_{2}\) loss: \[\mathcal{L}_{con}=\|f_{t}(E_{u}|G,\theta_{k}^{\prime},\xi^{\prime})-f_{s}(E_{u}|G,\theta_{k},\xi)\|_{2}, \tag{7}\] where \(E_{u}\) denotes the unlabeled PPIs in a batch, and \(\xi^{\prime}\) and \(\xi\) are different augmentation operations. We randomly combine the different augmentations in our experiments to avoid overfitting and improve model generalization.
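A minimal sketch of the teacher update of Eq. (6), the consistency loss of Eq. (7), and the zero-masking node manipulation is shown below; the momentum value and the masking interface are assumptions.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    """Momentum update of Eq. (6); the value of m is assumed."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

def consistency_loss(student_logits, teacher_logits):
    """L2 consistency of Eq. (7) between the two augmented graph views."""
    return ((student_logits - teacher_logits.detach()) ** 2).mean()

def drop_node_features(x, ratio):
    """Node manipulation sketch: zero-mask a random fraction of node features."""
    mask = (torch.rand(x.shape[0]) >= ratio).float().unsqueeze(1)
    return x * mask
```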
### Graph Consistency Constraint The consistency regularization enforces instance-wise invariance in the prediction space under different augmentations of the same input. For the graph-based PPI prediction task, we also need to optimize the model in the feature space, as protein nodes in the testing set differ from those in the training set, and PPI prediction relies on relationships between proteins expressed through feature representations extracted from neighboring proteins. Therefore, we model the fine-grained structural protein-protein relations in the feature embedding space [14]. We denote the features extracted from protein-graph encoding as \(z_{s}\) and \(z_{t}\) for the student and teacher networks, respectively. **Edge matching:** We construct the student embedding graph \(G_{e}^{s}\) and the teacher embedding graph \(G_{e}^{t}\) by calculating all pairwise Pearson's correlation coefficients (PCC) between nodes in the same batch. Then, we enforce the student network to encode consistent instance-wise correlations with the teacher network in the embedding feature space by applying the edge matching loss: \[\mathcal{L}_{edge}=||\text{Adj}(G_{e}^{s})-\text{Adj}(G_{e}^{t})||_{2}, \tag{8}\] where \(\text{Adj}\) refers to the adjacency matrix. **Node matching:** We further formulate the cross embedding graph \(G_{e}^{st}\) by calculating all pairwise PCC between the student encoding \(z_{s}\) and the teacher encoding \(z_{t}\) in the same batch. To explicitly align the encodings of the same protein from the teacher and the student network, we design a node matching loss: \[\mathcal{L}_{node}=||\text{diag}(\text{Adj}(G_{e}^{st}))-\text{diag}(I)||_{2}, \tag{9}\] where \(\text{diag}\) is an operator that retains the diagonal elements and sets the off-diagonal elements to \(0\), and \(I\) refers to the identity matrix. In this regard, we jointly leverage labeled and unlabeled data with graph learning in both protein and label spaces and consistency regularization in both prediction and feature spaces for PPI prediction. The overall objective function is defined as: \[\mathcal{L}=\mathcal{L}_{sup}+\lambda_{con}\mathcal{L}_{con}+\lambda_{edge}\mathcal{L}_{edge}+\lambda_{node}\mathcal{L}_{node}, \tag{10}\] where \(\lambda_{con}\), \(\lambda_{edge}\) and \(\lambda_{node}\) are scaling factors for \(\mathcal{L}_{con}\), \(\mathcal{L}_{edge}\) and \(\mathcal{L}_{node}\), respectively. ## 4 Experiment ### Dataset We perform extensive experiments on three datasets, _i.e.,_ STRING, SHS148k, and SHS27k. First, we use the multi-label PPI data of Homo sapiens from the STRING database [22] for training and evaluation, including \(15,355\) proteins and \(593,397\) PPIs. The PPIs are annotated with \(7\) types, _i.e.,_ Activation, Binding, Catalysis, Expression, Inhibition, Post-translational modification (Ptmod), and Reaction. Each PPI is labeled with at least one of them. Moreover, we use two subsets of Homo sapiens PPIs from STRING, _i.e.,_ SHS27k and SHS148k [20], to further validate the proposed approach. SHS27k contains \(1,690\) proteins and \(7,624\) PPIs, while SHS148k contains \(5,189\) proteins and \(44,488\) PPIs. ### Experimental Details **Experimental Settings.** We follow the partition algorithms in GNN-PPI [17], including random, breadth-first search (BFS), and depth-first search (DFS), to split the trainsets and testsets.
For in-depth analysis, PPIs in the testset can be divided into the **BS subset** (both proteins of the PPI are present in the labeled trainset), the **ES subset** (either one protein of the PPI is present in the labeled trainset), and the **NS subset** (neither of the proteins is present in the labeled trainset). The BFS and DFS partition schemes create more challenging paradigms than the random partitioning by including more ES and NS proteins in the testsets for the inter-novel protein interactions [17]. In fully supervised experiments, we select \(20\%\) of the whole dataset for testing using the partition schemes mentioned above and use the rest for training. To simulate the label scarcity scenario, we randomly select \(5\%\), \(10\%\), and \(20\%\) samples from the trainset as the labeled data while keeping the rest as the unlabeled data. To assess the generalization capacity of our method, we evaluate our method trained with one dataset on another dataset, _i.e._, a trainset-heterologous testset. **Evaluation Metrics.** We use the F1 score to evaluate the model performance for multi-label PPI prediction. The score is micro-averaged over all \(7\) classes. The means and standard deviations of F1 scores over three repeated experiments are reported as results, formatted as \(\text{mean}_{\text{std}}\). **Model Training.** 1) _Base train_: We follow GNN-PPI [17] for protein-independent encoding to extract protein features from protein sequences as inputs to our framework. We initialize the multi-graph encoding network using the labeled data for \(300\) epochs with an initial learning rate of \(0.001\) and the Adam optimizer. 2) _Joint train:_ Then, we train the self-ensemble graph learning framework on both labeled and unlabeled trainsets for \(300\) epochs. For label graph construction, we select the binarization threshold \(\tau=0.05\) and the re-weighting factor \(p=0.25\). We randomly combine the different manipulations in our experiments to avoid overfitting and improve model generalization during joint training. For manipulation ratios, we use higher ratios for the student inputs so that the student can better distill knowledge from the teacher during self-ensemble learning. More specifically, the edge manipulation ratios \(em_{s}\%\) and \(em_{t}\%\) are fixed at \(10\%\) and \(5\%\), respectively. The node manipulation rates \(nm_{s}\%\) and \(nm_{t}\%\) are set to \(10\%\) and \(5\%\), respectively. To scale the components of the loss function, we set the values of \(\lambda_{con}\), \(\lambda_{edge}\) and \(\lambda_{node}\) as \(0.02\), \(0.01\) and \(0.003\), respectively. More details are shown in the Supplementary Material.
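Returning to the graph consistency constraints of Eqs. (8) and (9), the sketch below computes the embedding graphs and the two matching losses; the PCC computation is simplified to row standardization followed by an inner product, and the mean-squared form of the losses is our assumption.

```python
import torch

def pcc_matrix(a, b):
    """Simplified pairwise Pearson correlation between rows of a (B, d) and b (B, d)."""
    a = (a - a.mean(1, keepdim=True)) / (a.std(1, keepdim=True) + 1e-8)
    b = (b - b.mean(1, keepdim=True)) / (b.std(1, keepdim=True) + 1e-8)
    return (a @ b.T) / a.shape[1]

def edge_matching_loss(z_s, z_t):
    """Eq. (8): align the instance-wise correlation structure of the two graphs."""
    return (pcc_matrix(z_s, z_s) - pcc_matrix(z_t, z_t)).pow(2).mean()

def node_matching_loss(z_s, z_t):
    """Eq. (9): the diagonal of the cross-graph PCC matrix should be all ones."""
    diag = torch.diagonal(pcc_matrix(z_s, z_t))
    return (diag - torch.ones_like(diag)).pow(2).mean()
```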
\begin{table} \begin{tabular}{c l|c c c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{Method} & \multicolumn{3}{c|}{SHS27k} & \multicolumn{3}{c|}{SHS148k} & \multicolumn{3}{c}{STRING} \\ \cline{3-11} \multicolumn{2}{c|}{} & Random & DFS & BFS & Random & DFS & BFS & Random & DFS & BFS \\ \hline \multirow{2}{*}{ML} & RF & \(78.45_{0.88}\) & \(35.55_{22.2}\) & \(37.67_{1.57}\) & \(82.10_{0.20}\) & \(43.26_{3.43}\) & \(38.96_{19.44}\) & \(88.91_{0.08}\) & \(70.80_{0.45}\) & \(55.31_{1.02}\) \\ & LR & \(71.55_{0.93}\) & \(48.51_{1.87}\) & \(43.06_{0.55}\) & \(67.00_{0.07}\) & \(51.09_{0.29}\) & \(47.45_{12.6}\) & \(67.74_{0.16}\) & \(61.28_{0.23}\) & \(50.54_{20.0}\) \\ \hline \multirow{3}{*}{DL} & DPPI & \(73.99_{0.94}\) & \(46.12_{0.32}\) & \(41.43_{0.56}\) & \(77.48_{0.39}\) & \(52.03_{1.5}\) & \(52.12_{0.79}\) & \(94.85_{0.13}\) & \(68.92_{0.68}\) & \(56.68_{10.44}\) \\ & DNN-PPI & \(77.89_{0.94}\) & \(54.34_{1.30}\) & \(48.90_{0.24}\) & \(88.49_{0.48}\) & \(58.42_{0.25}\) & \(57.40_{0.10}\) & \(83.80_{0.11}\) & \(64.94_{0.93}\) & \(53.05_{0.82}\) \\ & PIPR & \(83.31_{0.75}\) & \(57.80_{3.24}\) & \(44.48_{4.4}\) & \(90.05_{0.25}\) & \(63.98_{0.76}\) & \(61.83_{0.23}\) & \(94.43_{0.10}\) & \(67.45_{0.54}\) & \(55.65_{1.60}\) \\ \hline \multirow{2}{*}{Graph} & GNN-PPI & \(87.09_{0.93}\) & \(74.72_{0.26}\) & \(86.31_{0.19}\) & \(92.26_{0.10}\) & \(82.67_{0.58}\) & \(71.73_{0.35}\) & \(95.43_{0.10}\) & \(91.07_{0.05}\) & \(78.73_{0.50}\) \\ & GNN-PPI* & \(88.87_{0.23}\) & \(75.68_{0.35}\) & \(68.83_{0.16}\) & \(62.91_{0.30}\) & \(87.71_{0.34}\) & \(69.02_{0.37}\) & \(94.94_{0.17}\) & \(96.02_{0.23}\) & \(79.76_{0.43}\) \\ \hline M-Graph & SemiGNN-PPI & \(\mathbf{89.51_{0.46}}\) & \(\mathbf{78.32_{3.15}}\) & \(\mathbf{72.15_{2.87}}\) & \(\mathbf{92.40_{0.22}}\) & \(\mathbf{85.45_{1.17}}\) & \(\mathbf{71.78_{0.56}}\) & \(\mathbf{95.57_{0.08}}\) & \(\mathbf{91.23_{0.26}}\) & \(\mathbf{80.84_{0.05}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of SemiGNN-PPI and baseline methods for different datasets and data partition schemes. GNN-PPI: reported results in the original paper. GNN-PPI*: reproduced GNN-PPI results. The scores are presented in the format of \(\text{mean}_{\text{std}}\). **Baseline Methods.** We compare SemiGNN-PPI with several representative methods in PPI prediction. **Machine Learning (ML)** methods include RF [20] and LR [14], which take commonly used handcrafted protein features, including AC [1] and CTD [13], as inputs. **Deep Learning (DL)** approaches include DNN-PPI [11], PIPR [1], and GNN-PPI [15], which take amino acid sequence-based features as inputs (more details are provided in the Appendix). It is noted that GNN-PPI adopts graph learning to leverage protein correlations, achieving state-of-the-art performance on multi-type PPI prediction. In this regard, we extensively compare our method with GNN-PPI in different scenarios and settings. ### Results and Analysis **Benchmark Analysis.** In Table 1, we compare our method with other baseline methods under different partition schemes and various datasets. It is observed that graph-based methods, _i.e._, GNN-PPI and SemiGNN-PPI, outperform other ML and DL methods, even under the more challenging BFS and DFS partitions with more unseen proteins. This can be attributed to graph learning, which can better capture correlations between proteins despite the existence of more unknown proteins.
Furthermore, our method incorporates multiple graphs (M-Graph) for feature learning, achieving state-of-the-art performance in multi-type PPI prediction. In particular, under challenging evaluations with small datasets, _e.g._, SHS27k-DFS, our method achieves much higher F1 scores than GNN-PPI, since self-ensemble graph learning can effectively improve the model robustness against complex scenarios. Moreover, the number of parameters is 1.09M (GNN-PPI) and 1.13M (ours), and the inference time on SHS27k is 0.050s (GNN-PPI) and 0.058s (ours). GNN-PPI and our method have comparable performance in the two metrics, showing the scalability of the proposed method. **Label Efficiency.** To demonstrate the feasibility of our method under the label scarcity scenario, we present experimental results under different label ratios in Table 2. We can see that GNN-PPI suffers severe performance degradation with fewer labels. In comparison, our method achieves better performance under all scenarios with different datasets, label ratios, and partition schemes. Remarkably, our method under some scenarios, _e.g._, SHS148k-BFS-\(20\%\), can achieve comparable performance with GNN-PPI using \(100\%\) labeled data, indicating the annotation efficiency of our method. To further analyze the model performance on inter-novel-protein interaction prediction, we make an in-depth performance comparison between GNN-PPI and SemiGNN-PPI in the different subsets (BS/ES/NS) of the testset. As shown in Table 3, the BS subset comprises most of the whole testset under the random partition, which cannot reflect the prediction performance on the inter-novel-protein interactions. In contrast, the proportions of ES and NS subsets increase under label scarcity and other partition schemes; in these settings, SemiGNN-PPI consistently outperforms GNN-PPI in both ES and NS subsets by a large margin, which demonstrates the effectiveness of SemiGNN-PPI for inter-novel-protein interaction prediction. **Performance on Different PPI Types.** To study the per-class prediction performance, we present the model performance on different PPI types with corresponding type ratios in Table 4.
It is observed that the PPI types are unbalanced with some under-represented types, such as Ptmod, Inhibition, and Expression. \begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{STRING} & \multicolumn{4}{c|}{SHS148k} & \multicolumn{4}{c}{SHS27k} \\ & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{100\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{100\%} & \multirow{2}{*}{5\%} & \multirow{2}{*}{10\%} & \multirow{2}{*}{20\%} & \multirow{2}{*}{100\%} \\ \hline \hline \multicolumn{13}{c}{Partition Scheme = Random} \\ \hline GNN-PPI & \(89.94_{0.2}\) & \(92.38_{0.51}\) & \(93.30_{0.56}\) & \(94.94_{1.17}\) & \(79.19_{0.57}\) & \(82.80_{0.59}\) & \(86.67_{0.22}\) & \(92.13_{0.10}\) & \(52.04_{3.42}\) & \(60.28_{12.26}\) & \(79.44_{1.19}\) & \(88.87_{0.23}\) \\ \hline Ours & \(\mathbf{90.50_{0.10}}\) & \(\mathbf{92.66_{0.29}}\) & \(\mathbf{93.90_{0.41}}\) & \(\mathbf{95.57_{0.76}}\) & \(\mathbf{79.50_{0.51}}\) & \(\mathbf{83.48_{0.36}}\) & \(\mathbf{87.38_{0.24}}\) & \(\mathbf{92.40_{0.22}}\) & \(\mathbf{57.97_{1.1a}}\) & \(\mathbf{62.67_{12.26}}\) & \(\mathbf{81.01_{0.47}}\) & \(\mathbf{89.51_{0.46}}\) \\ \hline \hline \multicolumn{13}{c}{Partition Scheme = DFS} \\ \hline GNN-PPI & \(86.00_{0.37}\) & \(87.91_{0.30}\) & \(89.42_{0.46}\) & \(96.02_{0.23}\) & \(68.77_{11.7a}\) & \(78.36_{0.23}\) & \(80.96_{0.61}\) & \(83.77_{1.34}\) & \(53.41_{1.64}\) & \(58.43_{2.37}\) & \(65.73_{1.48}\) & \(75.08_{0.53}\) \\ Ours & \(\mathbf{87.54_{0.08}}\) & \(\mathbf{88.98_{0.26}}\) & \(\mathbf{90.23_{0.12}}\) & \(\mathbf{91.23_{0.26}}\) & \(\mathbf{69.94_{0.57}}\) & \(\mathbf{81.12_{0.08}}\) & \(\mathbf{83.63_{0.86}}\) & \(\mathbf{85.45_{1.17}}\) & \(\mathbf{58.48_{1.11}}\) & \(\mathbf{61.18_{0.78}}\) & \(\mathbf{70.31_{2.38}}\) & \(\mathbf{78.32_{3.15}}\) \\ \hline \hline \multicolumn{13}{c}{Partition Scheme = BFS} \\ \hline GNN-PPI & \(71.35_{4.87}\) & \(74.94_{2.43}\) & \(79.99_{2.75}\) & \(79.76_{2.43}\) & \(61.42_{3.92}\) & \(62.51_{3.37}\) & \(67.10_{3.48}\) & \(69.02_{0.37}\) & \(57.93_{1.11}\) & \(56.84_{12.19}\) & \(61.18_{0.58}\) & \(68.54_{3.16}\) \\ \hline Ours & \(\mathbf{73.35_{4.00}}\) & \(\mathbf{76.94_{2.53}}\) & \(\mathbf{81.39_{4.44}}\) & \(\mathbf{80.84_{2.05}}\) & \(\mathbf{64.86_{2.67}}\) & \(\mathbf{76.16_{12.62}}\) & \(\mathbf{71.70_{3.35}}\) & \(\mathbf{71.78_{56}}\) & \(\mathbf{60.15_{2.00}}\) & \(\mathbf{66.13_{2.00}}\) & \(\mathbf{67.69_{47}}\) & \(\mathbf{72.15_{2.87}}\) \\ \hline \end{tabular} \end{table} Table 2: Performance comparison of different methods under different label ratios. The scores are presented in the format of \(\mathrm{mean}_{\mathrm{std}}\).
\begin{table} \begin{tabular}{c|c|ccc|cc|cc} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\% Labels} & \multicolumn{3}{c|}{Random Partition} & \multicolumn{2}{c|}{DFS Partition} & \multicolumn{2}{c}{BFS Partition} \\ & & BS (92.66\%) & ES (6.95\%) & NS (0.39\%) & ES (75.93\%) & NS (24.05\%) & ES (85.70\%) & NS (14.30\%) \\ \hline GNN-PPI & 100 & 89.17 & 72.44 & 50.00 & 77.81 & 63.44 & 71.03 & 44.80 \\ Ours & 100 & \textbf{89.68} & \textbf{72.93} & 50.00 & \textbf{81.75} & \textbf{66.32} & \textbf{75.14} & \textbf{57.00} \\ \hline GNN-PPI & 20 & 83.46 & 70.10 & 43.68 & 64.40 & 54.21 & \textbf{59.04} & 66.33 \\ Ours & 20 & \textbf{84.09} & \textbf{71.95} & \textbf{45.78} & \textbf{73.30} & \textbf{55.46} & 58.10 & \textbf{73.82} \\ \hline \end{tabular} \end{table} Table 3: Performance comparison between GNN-PPI and SemiGNN-PPI on the BS/ES/NS subsets of the testset under different partition schemes and label ratios. The subset proportions (in parentheses) are for the full-label setting.

Nevertheless, SemiGNN-PPI outperforms GNN-PPI on most PPI types, especially for relatively imbalanced types (82.99 vs. 77.94 in Ptmod-DFS and 67.71 vs. 60.20 in Inhibition-BFS). It is noted that lower performance is achieved with our method in the type Expression, which could be due to inaccurate label correlations captured for a type with extremely low co-occurrence with other labels; this remains a direction to explore in our future work. **Model Generalization.** To assess the generalization capability of the proposed method, we test models trained on small datasets, _e.g.,_ SHS27k, on large datasets, _e.g.,_ STRING, in three evaluation settings: 1) _Domain Generalization (DG)_: The model is directly tested on the unseen dataset. 2) _Inductive Domain Adaptation (IDA)_: The model has access to unlabeled training data in the trainset-heterologous dataset during training. 3) _Transductive Domain Adaptation (TDA)_: The model has access to the whole unlabeled trainset-heterologous dataset during training. In Fig. 2, we can observe that our method outperforms GNN-PPI in all partition schemes when tested on unseen datasets. Moreover, our model can effectively leverage unlabeled data, achieving better adaptation performance in both the inductive and transductive setups. **Ablation Study.** We investigate the effectiveness of the different components of SemiGNN-PPI in Fig. 3. We can see that all components, _i.e.,_ label graph encoding (LGE), self-ensemble (SE), and the graph consistency constraint (GCC), positively contribute to the performance improvements. It is noted that too few labels, _e.g.,_ \(10\%\), may influence model initialization and limit self-ensemble graph learning, while the performance gains are more evenly distributed among the components when more labeled data is available. In particular, the proposed GCC can further enhance the results from the self-ensemble by providing stronger regularization in the feature space. Moreover, we have performed one experiment for each augmentation strategy (F1-score) on SHS27k (20% labeled) under the random partition, _i.e.,_ random edge dropout (78.92), random node dropout (79.04), centrality-based [15] node and edge manipulation (81.08), and ours (81.11), showing that our strategy is comparable with centrality-based manipulation and outperforms the others.
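To make the self-ensemble (SE) component concrete, the sketch below shows the standard Mean Teacher recipe that SemiGNN-PPI builds on: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency term aligns the two models in the embedding space. The exact multi-graph consistency constraints of SemiGNN-PPI are not reproduced here; the MSE term is a simplified stand-in, and the module names are assumptions.

```python
# Sketch of Mean Teacher self-ensembling with a simplified consistency term.
# `student` and `teacher` are assumed to be identically shaped GNN encoders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def consistency_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    # Align student embeddings with the (detached) teacher embeddings.
    return F.mse_loss(student_emb, teacher_emb.detach())
```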
## 5 Conclusion

In this paper, we propose a novel self-ensembling multi-graph neural network (SemiGNN-PPI) for efficient and generalizable multi-type PPI prediction, which models both protein correlations and label dependencies by constructing and processing graphs at the protein and label levels. To leverage unlabeled PPI data, we integrate the GNN into the Mean Teacher framework for self-ensemble graph learning, in which multiple graph consistency constraints are designed to align the teacher and student graphs in the feature embedding space for optimized consistency regularization. Extensive experiments have demonstrated that SemiGNN-PPI outperforms state-of-the-art methods by large margins in model performance, label efficiency, and generalization ability.

\begin{table} \begin{tabular}{c|c|cc|cc|cc} \hline \hline & & \multicolumn{2}{c|}{Random Partition} & \multicolumn{2}{c|}{DFS Partition} & \multicolumn{2}{c}{BFS Partition} \\ \cline{3-8} PPI Type & Type Ratio & GNN-PPI & SemiGNN-PPI & GNN-PPI & SemiGNN-PPI & GNN-PPI & SemiGNN-PPI \\ \hline Reaction & \(40.61\%\) & \(89.56_{0.15}\) & \(\mathbf{90.16}_{0.45}\) & \(81.90_{0.15}\) & \(\mathbf{85.86}_{0.71}\) & \(61.62_{1.29}\) & \(\mathbf{64.92}_{5.73}\) \\ Binding & \(52.71\%\) & \(88.28_{0.48}\) & \(\mathbf{89.46}_{0.57}\) & \(83.52_{1.41}\) & \(\mathbf{86.39}_{0.67}\) & \(70.00_{1.10}\) & \(\mathbf{72.43}_{6.33}\) \\ Ptmod & \(20.99\%\) & \(87.04_{0.29}\) & \(\mathbf{87.42}_{0.33}\) & \(77.94_{0.47}\) & \(\mathbf{82.99}_{1.44}\) & \(65.92_{5.52}\) & \(\mathbf{71.32}_{5.94}\) \\ Activation & \(42.51\%\) & \(85.15_{0.38}\) & \(\mathbf{85.26}_{0.46}\) & \(73.48_{7.47}\) & \(\mathbf{77.95}_{1.19}\) & \(67.44_{43.43}\) & \(\mathbf{68.04}_{0.06}\) \\ Inhibition & \(20.20\%\) & \(87.21_{0.18}\) & \(\mathbf{88.09}_{0.31}\) & \(72.46_{1.11}\) & \(\mathbf{78.12}_{0.62}\) & \(60.20_{4.62}\) & \(\mathbf{67.71}_{2.21}\) \\ Catalysis & \(44.67\%\) & \(89.36_{0.44}\) & \(\mathbf{90.35}_{0.31}\) & \(82.30_{0.30}\) & \(85.77_{1.29}\) & \(65.70_{4.62}\) & \(73.39_{0.33}\) \\ Expression & \(7.09\%\) & \(\mathbf{47.85}_{0.79}\) & \(66.99_{0.32}\) & \(\mathbf{34.96}_{0.74}\) & \(32.45_{0.50}\) & \(\mathbf{31.81}_{0.87}\) & \(28.99_{0.90}\) \\ \hline Macro-Average & - & \(82.07_{0.39}\) & \(\mathbf{82.53}_{0.38}\) & \(72.37_{1.87}\) & \(\mathbf{74.16}_{0.09}\) & \(60.38_{0.03}\) & \(\mathbf{63.29}_{5.20}\) \\ Micro-Average & - & \(86.67_{0.22}\) & \(\mathbf{87.38}_{0.24}\) & \(80.96_{0.61}\) & \(\mathbf{83.63}_{0.86}\) & \(67.10_{3.48}\) & \(\mathbf{71.06}_{3.35}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Per-class results in the SHS148k dataset with \(20\%\) training labels. The type ratios are calculated over the whole dataset.

Figure 3: Results of ablation studies on different components of SemiGNN-PPI using the SHS27k dataset.

Figure 2: Performance comparison on trainset-heterologous test-sets. DG: domain generalization. IDA: inductive domain adaptation. TDA: transductive domain adaptation.

## Acknowledgements

This research was funded by Competitive Research Programme "NRF-CRP22-2019-0003", National Research Foundation Singapore, and partially supported by A*STAR core funding.
2302.00921
Ring Artifact Correction in Photon-Counting Spectral CT Using a Convolutional Neural Network With Spectral Loss
Photon-counting spectral computed tomography is now clinically available. These new detectors come with the promise of higher contrast-to-noise ratio and spatial resolution and improved low-dose imaging. However, one important design consideration is to build detector elements that are sufficiently homogeneous. In practice, there will always be a degree of inhomogeneity in the detector elements, which will materialize as variations in the energy bin thresholds. Without proper detector calibration, this will lead to streak artifacts in the sinograms and corresponding ring artifacts in the reconstructed images, which limit their clinical usefulness. Since detector calibration is a time-consuming process, having access to a fast ring correction technique may greatly improve workflow. In this paper, we propose a deep learning-based post-processing technique for ring artifact correction in photon-counting spectral CT. We train a UNet with a custom loss to correct for ring artifacts in the material basis images. Our proposed loss is made ``task-aware'' by explicitly incorporating the fact that we are working with spectral CT by combining a L1-loss operating on the material basis images with a perceptual loss, using VGG16 as feature extractor, operating on 70 keV virtual monoenergetic images. Our results indicate that using this novel loss greatly improves performance. We demonstrate that our proposed method can successfully produce ring corrected 40, 70, and 100 keV virtual monoenergetic images.
Dennis Hein, Konstantinos Liappis, Fredrik Grönberg, Alma Eguizabal, Mats Persson
2023-02-02T07:50:18Z
http://arxiv.org/abs/2302.00921v1
# Ring Artifact Correction in Photon-Counting Spectral CT Using a Convolutional Neural Network With Spectral Loss

###### Abstract

Photon-counting spectral computed tomography is now clinically available. These new detectors come with the promise of higher contrast-to-noise ratio and spatial resolution and improved low-dose imaging. However, one important design consideration is to build detector elements that are sufficiently homogeneous. In practice, there will always be a degree of inhomogeneity in the detector elements, which will materialize as variations in the energy bin thresholds. Without proper detector calibration, this will lead to streak artifacts in the sinograms and corresponding ring artifacts in the reconstructed images, which limit their clinical usefulness. Since detector calibration is a time-consuming process, having access to a fast ring correction technique may greatly improve workflow. In this paper, we propose a deep learning-based post-processing technique for ring artifact correction in photon-counting spectral CT. We train a UNet with a custom loss to correct for ring artifacts in the material basis images. Our proposed loss is made "task-aware" by explicitly incorporating the fact that we are working with spectral CT by combining a L1-loss operating on the material basis images with a perceptual loss, using VGG16 as feature extractor, operating on 70 keV virtual monoenergetic images. Our results indicate that using this novel loss greatly improves performance. We demonstrate that our proposed method can successfully produce ring corrected 40, 70, and 100 keV virtual monoenergetic images.

## 1 Introduction

X-ray computed tomography (CT) imaging is a widely used medical imaging modality, providing healthcare with an important tool for diagnosis and treatment planning for a wide range of diseases such as stroke, cancer, and cardiovascular disease. The next generation of X-ray CT scanners, based on photon-counting detectors, is now clinically available [1, 2]. Advantages of photon-counting detectors, compared to standard energy-integrating detectors, include higher contrast-to-noise ratio and spatial resolution, improved low-dose imaging, and better characterization of tissue composition [3, 4, 5]. One important challenge in developing photon-counting detectors is to build detector elements that are sufficiently homogeneous [6]. Inhomogeneity in the detector elements will result in variations in the energy bin thresholds. Due to these variations, photons of a given energy may be registered in different bins in different detector elements. Hence, the detector elements miscount the number of photons. This leads to streak artifacts in the sinogram domain and, after tomographic reconstruction, corresponding ring artifacts in the image domain. These rings are very conspicuous and limit the clinical usefulness of the images. The generation of ring artifacts can be avoided through careful detector calibration. However, thorough calibration is a time-consuming process. Thus, having a quick-to-apply method for ring correction may significantly improve workflow. Ring artifact reduction methods for X-ray CT can broadly be categorized into pre- and post-processing methods. Pre-processing methods operate in the sinogram domain and post-processing methods in the image domain. Conventional ring artifact correction pre-processing methods include flat-field correction [7] and various filtering methods [8, 9, 10, 11].
Examples of post-processing methods include filtering-based methods such as [12, 13] and variation-based methods such as [14]. Shifting from a Cartesian to a polar coordinate system will transform the rings in the reconstructed images into streaks. Since streaks might be easier to detect and correct for, many post-processing methods first shift to polar coordinates. Machine learning, and in particular deep learning, is increasingly playing an important role in medical imaging [15; 16]. Much of this work focuses on the problem of image denoising. In [17], the authors use a generative adversarial network (GAN) [18] to map low-dose CT images to their normal dose counterpart. The GAN setup will encourage the distributions of the low-dose and normal-dose images to be similar. The authors argue that adding the adversarial loss prevents aggressive denoising and helps preserve fine details in the processed image. [19] is very similar to [17] but they substitute the L2-loss with a perceptual loss [20], and the GAN with the Wasserstein GAN (WGAN) [21] with gradient penalty [22]. Using a perceptual loss, which compares features extracted from the output and target image using a pretrained convolutional neural network, can help prevent over-smoothing and produce processed images of higher perceptual quality [23; 20]. [24] investigate the effect of different loss configurations for the low-dose CT image denoising problem. Their results indicate that using a perceptual loss is superior to pixel-wise loss functions and that adding an additional adversarial loss may further improve the noise and signal transfer properties of the network. Deep learning has also successfully been used for artifact correction and joint artifact and noise correction. Examples of deep learning-based ring artifact correction in the sinogram domain include [25] and [26]. However, great care must be taken when processing data in the sinogram domain to ensure that no artifacts are induced in the reconstructed images, as tiny, seemingly unimportant mistakes in the sinogram can result in large reconstruction errors. For post-processing methods, as with conventional techniques, it is common to first transform the image from a Cartesian to a polar coordinate system. This is done in [27] where they, inspired by the super-resolution generative adversarial network (SRGAN) [23], train a deep residual neural network using a perceptual, an adversarial, and a unidirectional relative total variation loss. In [28] the authors work in the image domain directly, without transforming to polar coordinates, and show that it is beneficial to train on random patches extracted from the images rather than the images themselves. Instead of focusing on one single domain, one may include information from both the sinogram and image domain in the ring correction pipeline. This is done in, for instance, [29] and [30]. In [31] the authors combine ring correction with the problem of image denoising in photon-counting spectral CT. In particular, they train a convolutional residual network using a L2-loss to correct for noise and ring artifacts in reconstructed energy bin images. In this paper, we add to this literature by developing a post-processing technique using deep learning for ring artifact correction in photon-counting _spectral_ CT. In particular, we train a 2D UNet using a custom spectral loss to correct for rings in the material basis images.
The spectral loss combines a L1-loss operating on the material basis images and a perceptual loss operating on \(70\ \mathrm{keV}\) virtual monoenergetic images. The rest of the paper is organized as follows. First, we give a brief treatment of how we simulated photon-counting CT data. Second, the key building blocks from the deep learning literature used in this paper are presented. Third, we present our qualitative and quantitative results. Finally, we discuss our results and suggest future avenues of research.

## 2 Methods

### Photon-counting spectral CT

#### 2.1.1 Material decomposition

Consider a system with \(B>2\) energy bins and, for simplicity, a 2-dimensional image space. Generating CT images from the measured counts in a photon-counting detector can be done in several ways. Here, we consider the projection-based two step approach. The first step is to map the measured counts to the basis material line integrals. We solve this by making the ansatz that the X-ray linear attenuation coefficient \(\mu(x,y;E)\) can be approximated by a linear combination of \(K\) basis materials \[\mu(x,y;E)\approx\sum_{k=1}^{K}a_{k}(x,y)\tau_{k}(E), \tag{1}\] where \(a_{k}\) and \(\tau_{k}(E)\) denote the basis coefficients and basis functions, respectively. The projection-based approach resolves the material basis decomposition in the sinogram domain and thus our target variables are the material line integrals \[A_{k}(\ell)=\int_{\ell}a_{k}(x,y)ds=\mathcal{R}(a_{k}), \tag{2}\] where \(\mathcal{R}\) denotes the Radon transform operator. The forward model is the polychromatic Beer-Lambert law, which relates the expected number of photons in bin \(j\) to the material line integrals, \[\lambda_{j}(\mathbf{A})=\int_{0}^{\infty}\omega_{j}(E)\exp\left(-\sum_{k=1}^{K}A_{k}\tau_{k}(E)\right)dE, \tag{3}\] where \(\omega_{j}(E)\) models the joint effect of the detector efficiency, the X-ray source, and the energy response from bin \(j\) [3]. We stack the measured counts in the \(B\) energy bins into the vector \(\mathbf{y}:=[y_{1},\ldots,y_{B}]\). We will assume that the \(y_{j}\) are independent Poisson random variables, that is, for each \(j\) \[y_{j}\sim\mathrm{Poisson}(\lambda_{j}(\mathbf{A})). \tag{4}\] Thus, the material decomposition can be formulated as the non-linear inverse problem of mapping the measured photon counts \(\mathbf{y}\) to the material line integrals \(\mathbf{A}:=[A_{1},\ldots,A_{K}]\). The most common approach to this problem is the maximum likelihood method [32, 33, 34]. More formally, setting up the likelihood and simplifying yields the following program \[\begin{split}\underset{\mathbf{A}}{\min}&\sum_{j=1}^{B}\left(\lambda_{j}(\mathbf{A})-y_{j}\log(\lambda_{j}(\mathbf{A}))\right)\\ \mathrm{s.t.}& A_{k}\geq 0\quad\forall k=1,\ldots,K,\end{split} \tag{5}\] which is usually solved using some iterative optimization algorithm such as the logarithmic barrier method [35]. The second step of this image reconstruction chain is tomographic reconstruction, for which we use Filtered Back Projection (FBP).
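A minimal sketch of the per-ray maximum-likelihood decomposition in Eq. (5), with the energy integral in Eq. (3) discretized on an energy grid, is given below. The array names (`omega`, `tau`, `counts`) are illustrative assumptions, and a generic bound-constrained optimizer stands in for the logarithmic barrier method used in practice.

```python
# Sketch: maximum-likelihood material decomposition for one detector reading.
# omega: (B, E) bin response weights; tau: (K, E) basis attenuation functions;
# counts: (B,) measured photon counts; dE: energy grid spacing.
import numpy as np
from scipy.optimize import minimize

def expected_counts(A, omega, tau, dE):
    # lambda_j(A) = sum_E omega_j(E) * exp(-sum_k A_k tau_k(E)) * dE, cf. Eq. (3)
    return (omega * np.exp(-A @ tau)).sum(axis=1) * dE

def decompose(counts, omega, tau, dE, A0=None):
    K = tau.shape[0]
    def nll(A):  # Poisson negative log-likelihood of Eq. (5)
        lam = expected_counts(A, omega, tau, dE)
        return np.sum(lam - counts * np.log(lam + 1e-12))
    res = minimize(nll, A0 if A0 is not None else np.zeros(K),
                   bounds=[(0.0, None)] * K, method="L-BFGS-B")
    return res.x  # estimated material line integrals [A_1, ..., A_K]
```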
#### 2.1.2 Simulation

The generation of photon-counting data is similar to the approach taken in [36]. First, a dataset of numerical basis phantoms was created by thresholding CT images from the KiTS19 [37] and NSCLC [38] datasets. Let \(\mathbf{z}\in\mathbb{R}^{N\times N}\) denote the upsampled\({}^{1}\) CT image in Hounsfield units (HU) and \(\tau\in\mathbb{R}\) a threshold in HU for bone. To avoid the issue of differentiating bone from contrast agent, contrast-enhanced images were excluded. Bone phantoms are defined as \(\mathbf{z}_{i,j}^{\mathrm{bone}}=\mathbf{z}_{i,j}\) for \(\mathbf{z}_{i,j}>\tau\) and \(\mathbf{z}_{i,j}^{\mathrm{bone}}=0\) for \(\mathbf{z}_{i,j}\leq\tau\), and soft tissue phantoms as \(\mathbf{z}_{i,j}^{\mathrm{soft}}=\mathbf{z}_{i,j}\) for \(\mathbf{z}_{i,j}\leq\tau\) and \(\mathbf{z}_{i,j}^{\mathrm{soft}}=0\) for \(\mathbf{z}_{i,j}>\tau\). We found that \(\tau=200\) gave reasonable results. The next step was to transform the basis phantoms into relative, unitless terms. We did this by adding 1000 HU and then dividing by 1000 HU. Finally, the basis phantoms were weighted by their relative linear attenuation coefficients at 70 \(\mathrm{keV}\), \(\mu^{\mathrm{water}}/\mu^{\mathrm{soft}}\) and \(\mu^{\mathrm{water}}/\mu^{\mathrm{bone}}\), respectively.

Footnote 1: We use bilinear upsampling to transform the original \(512\times 512\) images to \(1024\times 1024\).

Second, after generating the numerical basis phantoms, we simulated photon-counting images using the fanbeam function in Matlab and a spectral response model of a photon-counting silicon detector [39] with \(0.5\times 0.5\ \mathrm{mm}^{2}\) pixels for 120 kVp and 200 mAs, with 2000 detector pixels and 2000 view angles. We subsequently simulated Poisson noise and used the maximum likelihood method to decompose the simulated energy bin sinograms into soft tissue and bone material basis sinograms. To simulate detector inhomogeneity, we model threshold variations by applying a random threshold shift (\(\sigma=0.5\ \mathrm{keV}\)) independently to each of the eight energy bin thresholds of each detector pixel. Subsequently, two material decompositions were performed: one with the thresholds used in the simulation, including the random shifts, and one with the nominal bin thresholds. The latter configuration yields sinograms with streak artifacts and, after reconstruction, images with ring artifacts. Finally, images were reconstructed on a \(1024\times 1024\) pixel grid using FBP. The data simulation pipeline is illustrated in Fig. 1.

### Deep learning

#### 2.2.1 Problem statement

This paper proposes an image processing technique for ring artifact correction in photon-counting spectral CT using deep neural networks. For \(K\) basis materials and a \(H\times W\) pixel grid, we formulate this problem as learning the map \[f:\mathbf{x}\rightarrow\mathbf{y}, \tag{6}\] where \(\mathbf{x}\in\mathbb{R}^{K\times H\times W}\) denotes the ring corrupted material basis images and \(\mathbf{y}\in\mathbb{R}^{K\times H\times W}\) their artifact free counterpart. We let \(f\) be a convolutional neural network (CNN) parameterized by \(\theta\) and learn the map (6) by learning the parameters \(\theta\).

#### 2.2.2 Loss function

Common loss functions in biomedical imaging, and deep learning more generally, include the L2-loss, the L1-loss, and the perceptual loss. The L1-loss \[\ell_{1}(f)=\frac{1}{KHW}||f(\mathbf{x})-\mathbf{y}||_{1} \tag{7}\] and the L2-loss \[\ell_{2}(f)=\frac{1}{KHW}||f(\mathbf{x})-\mathbf{y}||_{2}^{2} \tag{8}\] both compare output and target pixel-by-pixel. This low-level per-pixel comparison is known to lead to over-smoothing and a loss of fine-grained details that are important to the perceptual quality of the image [23, 20]. To prevent these types of issues, one may instead employ a so-called perceptual loss function [20], which considers differences in high-level feature representations rather than pixel-by-pixel.
These feature representations are usually extracted from the output and target images using a pretrained CNN as feature extractor, or loss network. In this paper we use VGG16 [40], pretrained on ImageNet [41], as feature extractor. VGG16 was trained on RGB images and accepts input with three channels (\(C=3\)). For \(C=1\) we simply duplicate the main channel three times before passing it to the VGG network. For the \(j\)-th layer of VGG16, let this map be denoted \(\phi_{j}\). Then the perceptual loss is defined as \[\ell_{VGG}(f)=\frac{1}{C_{j}H_{j}W_{j}}||\phi_{j}(f(\mathbf{x}))-\phi_{j}(\mathbf{y})||_{2}^{2}, \tag{9}\] where \(C_{j}\) is the number of channels in layer \(j\). For \(C=2\) we apply this map to each basis image and subsequently concatenate the output. We found that \(j=9\) yields good results and will use this throughout\({}^{2}\). The perceptual loss correlates well with the, somewhat ambiguously defined, perceptual quality of the resulting image. Humans also extract and compare salient features from images [42], rather than comparing pixel-by-pixel. Hence, the perceptual loss will align better with human perception than a pixel-wise loss. For instance, if two images are exactly the same save for one pixel with a very large absolute error, then the L1- and L2-losses would be large despite the fact that the images would essentially be perceived as the same by the human eye. Thus, we can think of the perceptual loss as akin to a mathematical observer: it is a way to formally quantify the discrepancy between two images in a way that closely resembles human vision.

Footnote 2: Note that this layer is denoted "relu2_2" in [20].

At an early stage of this project we noticed that although most approaches were able to correct for the rings in the material basis images such that they were no longer visible, when forming virtual monoenergetic images these rings reemerged for certain energy levels. Correcting for the rings such that they are no longer directly visible in the 70 \(\mathrm{keV}\) virtual monoenergetic images proved the most difficult. These images have the lowest relative noise level, making the ring artifacts more discernible. To ensure good performance even at this energy level, we consider a "task-aware" reconstruction loss that explicitly incorporates the fact that we are working with spectral CT by combining a L1-loss operating on the material basis images with a perceptual loss operating on 70 \(\mathrm{keV}\) virtual monoenergetic images. More formally, for \(K\) basis materials, our spectral loss is defined as \[\ell_{\mathrm{VGG}_{70}\text{-}\ell_{1}}(f):=\omega_{1}\ell_{\mathrm{VGG}_{70}}(f)+\omega_{2}\ell_{1}(f), \tag{10}\] where \[\ell_{\mathrm{VGG}_{70}}(f):=\ell_{\mathrm{VGG}}\left(\sum_{k=1}^{K}\mu_{k}f_{k}(\mathbf{x}),\sum_{k=1}^{K}\mu_{k}\mathbf{y}_{k}\right), \tag{11}\] is the perceptual loss operating on virtual monoenergetic images, \(\mu_{k}\) is the linear attenuation coefficient for basis image \(k\) at 70 \(\mathrm{keV}\), and \(\omega_{1},\omega_{2}\) are hyperparameters used to trade off the two objectives.

Figure 1: Overview of the data generation pipeline. Here MD* denotes material decomposition using the energy bin thresholds from the simulation, including random threshold shifts, and MD material decomposition with the nominal bin thresholds.

The intuition is that it will work
similarly to introducing a prior: we are, to anthropomorphize, telling the network that we are ultimately interested in virtual monoenergetic images despite working on material basis images. As shown below, this spectral reconstruction loss vastly improves the performance of the network. We denote the perceptual loss operating on the basis images VGG and the perceptual loss operating on virtual monoenergetic images defined in (11) \(\mathrm{VGG}_{70}\). The corresponding two loss configurations combining the perceptual loss with the L1-loss will be denoted VGG-L1 and \(\mathrm{VGG}_{70}\)-L1, respectively.

#### 2.2.3 Network architecture

We use a modified version of the UNet [43], which was originally developed for image segmentation tasks in biomedical imaging. Our implementation was inspired by [44] and [24]. The key differences from the original UNet are the skip connection from input to output and that the up-convolutions have been replaced with bilinear upsampling to avoid checkerboard artifacts [45]. Note that by including this skip connection we repurpose the actual network to learn the residual map. The version of the UNet used in this paper is illustrated in Fig. 2.

Figure 2: Illustration of the version of the UNet used in this paper.

### Data

A total of 2250 slices from 90 patients in the KiTS19 dataset, which contains chest and abdomen scans, were simulated and split 70/20/10 into train, validation, and test sets. This split was conducted such that slices from any given patient ended up in only one of the three sets. To evaluate the generalizability of the network, we simulated an additional 450 slices from the NSCLC dataset, which contains full-body scans. Evaluating on this additional dataset gives us an indication of whether the network has learned something general that can be applied successfully to datasets with different anatomies.

### Training details

Each loss configuration is trained using Adam [46] with \(\beta_{1}=0.5,\beta_{2}=0.9\) and learning rate \(\alpha=10^{-4}\) for 100 epochs on a NVIDIA A100-SXM4-40GB GPU. \(\omega_{1}\) and \(\omega_{2}\) were tuned to get a good trade-off between the different objectives. This was achieved by setting \(\omega_{1}=10\) and \(\omega_{2}=1\) for \(\mathrm{VGG}_{70}\)-L1. Note that these values put the two terms in the spectral loss on approximately the same scale. Since the scales of VGG and \(\mathrm{VGG}_{70}\) are different, we opted to use \(\omega_{1}=1\) and \(\omega_{2}=10\) for VGG-L1 as this, again, puts the two objectives on approximately the same scale. For all loss configurations we use batch size \(2\). The network is trained on randomly extracted \(512\times 512\) patches. Note that by extracting a patch of size \(512\times 512\) from images that are \(1024\times 1024\) we are ensured to always include the center of the image. Hence, each patch will include full rings with high probability. Training on randomly extracted patches serves as a regularizer and may thus help prevent overfitting. As an added bonus, training on patches drastically reduces the graphics memory requirements and speeds up training. The network trained with the L2-loss is included as a baseline. The remaining loss configurations are included such that we can conduct an ablation study of our proposed spectral loss: only L1, only VGG but not \(\mathrm{VGG}_{70}\), only \(\mathrm{VGG}_{70}\), VGG-L1 but not \(\mathrm{VGG}_{70}\), and our full setup \(\mathrm{VGG}_{70}\)-L1.
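A minimal PyTorch sketch of the \(\mathrm{VGG}_{70}\)-L1 loss in Eq. (10) is given below. The attenuation coefficients in `mu70` are placeholder values, the feature slice `features[:9]` is our approximation of the "relu2_2" layer, and the class is a sketch under these assumptions rather than the authors' implementation (which is available in the linked repository).

```python
# Sketch of the spectral loss VGG70-L1: an L1 term on the material basis
# images plus a perceptual term on 70 keV virtual monoenergetic images.
# ImageNet input normalization for VGG is omitted for brevity.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class SpectralLoss(torch.nn.Module):
    def __init__(self, mu70=(0.192, 0.428), w1=10.0, w2=1.0):  # mu70: placeholders
        super().__init__()
        self.vgg = vgg16(weights="IMAGENET1K_V1").features[:9].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.mu70, self.w1, self.w2 = mu70, w1, w2

    def mono70(self, basis):  # basis: (N, K=2, H, W) -> mono image (N, 1, H, W)
        return sum(m * basis[:, k:k + 1] for k, m in enumerate(self.mu70))

    def forward(self, pred_basis, target_basis):
        l1 = F.l1_loss(pred_basis, target_basis)
        mp = self.mono70(pred_basis).repeat(1, 3, 1, 1)   # VGG expects 3 channels
        mt = self.mono70(target_basis).repeat(1, 3, 1, 1)
        perc = F.mse_loss(self.vgg(mp), self.vgg(mt))
        return self.w1 * perc + self.w2 * l1
```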
You can find the code used for training and evaluation in the following repo: [https://github.com/KTH-Physics-of-Medical-Imaging/deep_spectral_ring](https://github.com/KTH-Physics-of-Medical-Imaging/deep_spectral_ring).

## 3 Results

### Qualitative results

Qualitative results are available in Fig. 3-8, where we show 40, 70, and 100 \(\mathrm{keV}\) virtual monoenergetic images from the KiTS19 and NSCLC test sets. Despite the fact that the network operates on a pair of material basis images, we show virtual monoenergetic images. This is mainly due to the fact that the rings are correlated between the two material basis images. Hence, although the rings may no longer be visible in the ring corrected basis images, it is possible that the rings reemerge for certain linear combinations considered when forming virtual monoenergetic images. In addition, and perhaps more importantly, the virtual monoenergetic images are often what is ultimately of interest to the radiologist. In Fig. 3 we have formed 40 \(\mathrm{keV}\) virtual monoenergetic images from an example slice in the KiTS19 test set. This figure contains the ring artifact free (truth), ring corrupted (observed), and processed (predicted) images for each loss configuration considered. For each, we plot a magnification of the center region of interest (ROI) in the upper right corner to emphasize details. We can see that all loss configurations except \(\mathrm{VGG}_{70}\) have done a good job correcting for the rings at this energy level. That \(\mathrm{VGG}_{70}\) fails at 40 \(\mathrm{keV}\) is rather expected, since this loss function is completely agnostic to the spectral nature of the problem and only "sees" 70 \(\mathrm{keV}\) virtual monoenergetic images. In Fig. 4 we show the 70 \(\mathrm{keV}\) virtual monoenergetic image. Compared to observed, we can see that all loss configurations have reduced the degree of ring artifacts. However, only the loss configurations including \(\mathrm{VGG}_{70}\) adequately correct for the ring artifacts such that they are no longer obviously visible. In addition, we note that the processed images for \(\mathrm{VGG}_{70}\) and \(\mathrm{VGG}_{70}\)-L1 seem to accurately reproduce truth and preserve fine details. This is particularly evident when comparing the magnified version of the center ROI. The ability to faithfully reproduce truth is explored further below by considering the statistical properties of the indicated red ROIs in addition to considering profile plots. Despite doing a good job correcting for the rings, some residual rings are still visible. However, they now appear more as thicker, band-like, partial circles. We have annotated the processed images for \(\mathrm{VGG}_{70}\) and \(\mathrm{VGG}_{70}\)-L1 in Fig. 4 with red arrows to draw the reader's attention to these remaining artifacts. We can also note that these artifacts are slightly stronger for \(\mathrm{VGG}_{70}\)-L1 than for \(\mathrm{VGG}_{70}\). Finally, for the KiTS19 example, we have Fig. 5, where we show the results for the 100 \(\mathrm{keV}\) virtual monoenergetic image. The results closely parallel those of the 70 \(\mathrm{keV}\) case. For all loss configurations, save \(\mathrm{VGG}_{70}\), there is a clear reduction in the ring artifacts. However, in contrast to the 70 \(\mathrm{keV}\) case, it is now only \(\mathrm{VGG}_{70}\)-L1 that suppresses the rings sufficiently such that they are no longer visible. \(\mathrm{VGG}_{70}\) fails here for the same reason as in the 40 \(\mathrm{keV}\) case.
In Fig. 6-8 we can see that the network does a good job generalizing from chest and abdomen scans to a head scan from the NSCLC test set. The results in Fig. 6-8 closely parallel what we saw for the KiTS19 example. All loss configurations, save \(\mathrm{VGG}_{70}\), adequately correct for the rings such that they are no longer visible in the 40 \(\mathrm{keV}\) case. For the 70 \(\mathrm{keV}\) case in Fig. 7, only the two loss configurations including \(\mathrm{VGG}_{70}\) sufficiently correct for the rings. In Fig. 8 only our proposed spectral loss \(\mathrm{VGG}_{70}\)-L1 remains successful. The fact that the network seems to perform fairly well on head scans, despite being trained on chest and abdomen scans, indicates that the network has learned some general features that are seemingly robust to the specific anatomy considered. Out of the loss configurations considered, only our proposed spectral loss, \(\mathrm{VGG}_{70}\)-L1, is able to produce ring corrected 40, 70, and 100 \(\mathrm{keV}\) virtual monoenergetic images for both the KiTS19 and NSCLC cases. Removing any component of \(\mathrm{VGG}_{70}\)-L1 (the L1-loss or the \(\mathrm{VGG}_{70}\)-loss), or using VGG instead of \(\mathrm{VGG}_{70}\), produces some failure to correct for the rings at the energy levels considered. To further analyze the processed images, we consider the profile of a vertical line at pixel 512, going through the spine in the KiTS19 case and the head in the NSCLC case, for the 70 \(\mathrm{keV}\) monoenergetic images in Fig. 4 and Fig. 7. The results can be found in Fig. 9 and Fig. 10. In Fig. 9, we can see that \(\mathrm{VGG}_{70}\) and \(\mathrm{VGG}_{70}\)-L1 seem to be doing a very good job reproducing truth. There is no visible introduction of bias or change in standard deviation. For the remaining loss configurations we can see that they are mostly doing a good job, save for large deviations near the center. This is likely due to the remaining ring artifacts. The conclusions from Fig. 10 are largely the same. Again, although slightly less obviously so, \(\mathrm{VGG}_{70}\) and \(\mathrm{VGG}_{70}\)-L1 are the best performing setups. Between the two, only our proposed spectral loss, \(\mathrm{VGG}_{70}\)-L1, is capable of correcting for ring artifacts at a range of energy levels.

Figure 4: Example slice from the KiTS19 test set. 70 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 3: Example slice from the KiTS19 test set. 40 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 5: Example slice from the KiTS19 test set. 100 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 6: Example slice from the NSCLC test set. 40 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 8: Example slice from the NSCLC test set.
100 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 7: Example slice from the NSCLC test set. 70 \(\mathrm{keV}\) virtual monoenergetic images. Display window [-160,240] HU. (a) ring artifact free (truth), (b) ring corrupted (observed), (c) L2, (d) L1, (e) VGG, (f) \(\mathrm{VGG}_{70}\), (g) VGG-L1, (h) \(\mathrm{VGG}_{70}\)-L1.

Figure 10: Profile of a vertical line through the head in the 70 \(\mathrm{keV}\) virtual monoenergetic image for the NSCLC case shown in Fig. 7. (a) L2, (b) L1, (c) VGG, (d) \(\mathrm{VGG}_{70}\), (e) VGG-L1, (f) \(\mathrm{VGG}_{70}\)-L1.

Figure 9: Profile of a vertical line through the spine in the 70 \(\mathrm{keV}\) virtual monoenergetic image for the KiTS19 case shown in Fig. 4. (a) L2, (b) L1, (c) VGG, (d) \(\mathrm{VGG}_{70}\), (e) VGG-L1, (f) \(\mathrm{VGG}_{70}\)-L1.

### Quantitative results

In terms of the metrics reported in Table 1, the top performers are L1 and L2. Note also that L1 outperforms L2 in terms of PSNR. This is a bit odd, since it is equivalent to the network trained with the L1-loss achieving a lower L2-loss than a network trained with the L2-loss. One possible explanation for this is that the L2-loss is more prone to getting stuck in local minima due to its smoothness and convexity properties [48]. Another possibility, since the difference in PSNR between the two loss configurations is very small, is that it might simply be due to stochastic variation in the optimization procedure. Another thing that sticks out in Table 1 is that \(\mathrm{VGG_{70}}\) achieves lower SSIM and PSNR than observed for both test sets. In other words, according to these metrics, the processed images are worse than the ring corrupted ones. That \(\mathrm{VGG_{70}}\) performs worse than observed is likely explained by the fact that \(\mathrm{VGG_{70}}\) only operates on virtual monoenergetic images, whereas these results are based on the material basis images. In particular, we can see a steady improvement for \(\mathrm{VGG_{70}}\)-L1, where we have added the L1-loss, which operates on the material basis images. In essence, \(\mathrm{VGG_{70}}\) only works for the non-spectral 70 \(\mathrm{keV}\) case. Comparing performance on the two test sets, it may seem at first glance as if there is a drop in performance on the NSCLC test set. However, note that the percentage improvement over observed is actually higher on average for the NSCLC test set. To further evaluate the processed images we consider the first two moments of the distribution of the pixels in two roughly flat ROIs, indicated in red, in Fig. 4 and 7. The results are available in Table 2. We treat "truth" as the gold standard. In other words, a perfect model would reproduce the mean and standard deviation of truth. By comparing the results for truth and observed we can see that the rings mainly impact the standard deviation. The mean, on the other hand, remains largely unaffected. L2 achieved the lowest error in reproducing the mean for the KiTS19 and the NSCLC case, and \(\mathrm{VGG_{70}}\) is the top performer in terms of reproducing the standard deviation. Although \(\mathrm{VGG_{70}}\)-L1 is neither the best at reproducing the mean nor the standard deviation, it does a very good job getting both approximately correct simultaneously. For the KiTS19 case, the error for both standard deviation and mean is less than one percent. For the NSCLC case, it is around two percent.
Hence, there is tentatively a drop in performance when applying the network to a head scan. Overall, the results in Table 2, in conjunction with the qualitative results, indicate that the network trained with our spectral loss does a good job faithfully reproducing truth.

## 4 Discussion

As noted in the introduction, the generation of ring artifacts can be avoided through careful detector calibration. However, this calibration causes wear on the X-ray tube and may be very time-consuming. In this paper, we have proposed a deep learning-based ring correction technique for photon-counting spectral CT. We have demonstrated a proof-of-concept of how a CNN can be used to satisfactorily correct for ring artifacts whilst preserving fine details that are of clinical importance. Our proposed method is not meant to replace detector calibration, but rather to complement it. Having available an image processing technique for ring artifact correction may, for instance, allow postponing the calibration process to a more convenient time and enable higher patient throughput, as it allows for longer intervals between calibrations. In addition, increased robustness to detector inhomogeneity can also improve image quality in situations where the recommended calibration procedure has not been followed properly. While our results are very promising, this method is not without its limitations. In particular, we found the network unable to fully correct for ring artifacts in some slices imaging the head. This is most likely because the anatomy of the head is very different from other parts of the body: there is bone completely encapsulating soft tissue. The first obvious step in trying to improve the performance on skull images is to include head scans in the training data. Recall that the network is currently trained on a dataset including only chest and abdomen scans. In future research, we plan to train on a larger and more diverse dataset, containing head scans, and see if we can ensure high performance even on these problematic slices. Moreover, the network is currently trained on a continuous ring pattern, which is intuitive although not the most realistic. In a more realistic scenario, the types of ring artifacts that one may encounter are more likely in the shape of single rings, partial rings, bands, and combinations of these. In future research, we will extend the results presented in this paper to ring patterns that occur more frequently in practice. We also plan to evaluate our proposed technique on clinical data. Another interesting avenue, left for future research, is to further explore the properties of the processed images. Here we do an initial analysis which mainly relies on qualitative results. One could do this more formally using tools from medical imaging such as the contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), modulation transfer function (MTF), and the noise power spectrum (NPS). For the results in this paper we trained the network on randomly extracted \(512\times 512\) patches. As mentioned above, this has numerous benefits, including a regularizing effect and reduced graphics memory requirements. It might be beneficial to train on smaller patches, say \(128\times 128\), as one could extract more patches, augmenting the dataset, within the same computational budget. However, initial results indicated that the network failed to properly generalize from smaller patches to full images. This is in contrast to the results in [28], where they train a 2D network, very similar to ours, on patches that are \(64\times 64\).
One key difference between their work and ours is that we are concerned with spectral CT and pass a pair of material basis images, instead of a single CT image, through the network. It is possible that larger patch sizes are necessary to achieve adequate performance in the spectral case. Further exploring the issue of optimal patch size is left for future research. Moreover, GANs have been used very successfully for ring correction [27] and image denoising [19; 24; 17]. Hence, a natural extension of the work in this paper is to combine our proposed spectral loss with an adversarial loss. We did some experiments, not shown in this paper, using an adversarial loss based on the WGAN-GP [21; 22]. An adjusted version of the discriminator used in PatchGAN [49] was used as critic. Initial results indicated that combining the WGAN-GP with our spectral loss did not vastly improve the performance of our network, and we therefore deemed the significantly higher computational cost unwarranted. Nevertheless, this is an interesting avenue of research that we plan to explore in future work.

## 5 Conclusion

This paper is a proof-of-concept of using a CNN to correct for ring artifacts, while preserving fine details, in the material basis images in photon-counting spectral CT. This was achieved by training a basic UNet with a spectral loss that combines a perceptual loss operating on 70 \(\mathrm{keV}\) virtual monoenergetic images with a L1-loss operating on the material basis images. We have demonstrated that the network is capable of producing ring corrected virtual monoenergetic images at a range of energy levels.

## Acknowledgment

We thank the PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden, for providing access to computing resources used in this research. Model training was enabled by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre.
2310.10102
KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training
This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the convergence properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used directly in image classification and segmentation show that while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% impacting accuracy only by 0.4% compared to the baseline. Code available at https://github.com/TruongThaoNguyen/kakurenbo
Truong Thao Nguyen, Balazs Gerofi, Edgar Josafat Martinez-Noriega, François Trahay, Mohamed Wahib
2023-10-16T06:19:29Z
http://arxiv.org/abs/2310.10102v1
# KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training

###### Abstract

This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the convergence properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used directly in image classification and segmentation show that while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% impacting accuracy only by 0.4% compared to the baseline. Code available at [https://github.com/TruongThaoNguyen/kakurenbo](https://github.com/TruongThaoNguyen/kakurenbo)

## 1 Introduction

Empirical evidence shows the performance benefits of using larger datasets when training deep neural networks (DNN) for computer vision, as well as in other domains such as language models or graphs [1]. More so, attention-based models are increasingly employed as pre-trained models using unprecedented dataset sizes, e.g., the JFT-3B dataset consists of nearly three billion images, annotated with a class-hierarchy of around 30K labels [2], and LAION-5B provides 5.85 billion CLIP-filtered image-text pairs that constitute over 240TB [3]. A similar trend is also observed in scientific computing, e.g., DeepCAM, a climate simulation dataset, is over 8.8TB in size [4]. Furthermore, the trend of larger datasets prompted efforts that create synthetic datasets using GANs [5] or fractals [6]. The downside of using large datasets is, however, the ballooning cost of training. For example, it has been reported that training models such as T5 and AlphaGo cost $1.3M [7] and $35M [8], respectively. Additionally, large datasets can also stress non-compute parts of supercomputers and clusters used for DNN training (e.g., stressing the storage system due to excessive I/O requirements [9; 10]). In this paper, we focus on accelerating DNN training over large datasets and models. We build our hypothesis on the following observations on the effect of sample quality on training: a) _biased with-replacement sampling_ postulates that not all samples are of the same importance and a biased, with-replacement sampling method can lead to faster convergence [11; 12]; b) _data pruning_ methods show that when selected samples are pruned away from a dataset, the prediction accuracy that can be achieved by training from scratch using the pruned dataset is similar to that of the original dataset [13; 14; 15; 16]. Our hypothesis is that if samples have a varying impact on the learning process and their impact decreases as the training progresses, then we can, in real time, adaptively exclude samples with the least impact from the dataset during neural network training. In this paper, we dynamically hide samples in a dataset to reduce the total amount of computing and the training time, while maintaining the accuracy level. Our proposal, named KAKURENBO, is built upon two pillars.
First, using combined information about the loss and an online estimate of the historical prediction confidence (see Section 3.1) of input samples, we adaptively exclude samples that contribute the least to the overall learning process on a per-epoch basis. Second, in compensation for the decrease in the number of SGD steps, we derive a method to dynamically adjust the learning rate and the upper limit on the number of samples to hide in order to recover the convergence rate and accuracy. We evaluate performance both in terms of reduction in wall-clock time and degradation in accuracy. Our main results are twofold: first, we show that decaying datasets by eliminating the samples with the least contribution to learning has no notable negative impact on accuracy and convergence, and that the overhead of identifying and eliminating the least important samples is negligible. Second, we show that decaying the dataset can significantly reduce the total amount of computation needed for DNN training. We also find that state-of-the-art methods such as the importance sampling algorithm [11], pruning [13], and sample hiding techniques [17; 18] perform poorly on large-scale datasets. To the contrary, our method can reduce training time by \(10.4\%\) and \(22.4\%\) on ImageNet-1K [19] and DeepCAM [4], respectively, impacting Top-1 accuracy only by \(0.4\%\).

## 2 Background and Related Work

As the size of training datasets and the complexity of deep-learning models increase, the cost of training neural networks becomes prohibitive. Several approaches have been proposed to reduce this training cost without degrading accuracy significantly. Table 1 summarizes related work against this proposal. This section presents the main state-of-the-art techniques. Related works are detailed in Appendix E.

Table 1: Summary of the related approaches (biased with-replacement sampling, data pruning, and online sample hiding), their merits, whether they operate online or offline, and their practical overheads (e.g., \(O(N\log N)\) sample sorting).

**Biased with-Replacement Sampling** has been proposed as a method to improve the convergence rate in SGD training [11; 12]. Importance sampling is based on the observation that not all samples are of equal _importance_ for training, and accordingly replaces the regular uniform sampling used to draw samples from datasets with a biased sampling function that assigns a likelihood to a sample being drawn proportional to its importance; the more important the sample is, the higher the likelihood it would be selected. The with-replacement strategy of importance sampling maintains the total number of samples the network trains on. Several improvements over importance sampling have been proposed for distributed training [22], or for estimating the importance of samples [12; 23; 24; 25; 26]. Overall, biased with-replacement sampling aims at increasing the convergence speed of SGD by focusing on samples that induce a measurable change in the model parameters, which would allow a reduction in the number of epochs.
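As a point of reference for the comparison above, a minimal sketch of loss-proportional with-replacement sampling is given below; the gradient re-weighting by \(1/(Np_{i})\) needed for an unbiased estimator is omitted for brevity, and the names are illustrative.

```python
# Sketch: biased with-replacement sampling, drawing indices with probability
# proportional to each sample's last observed loss (its importance score).
import numpy as np

def sample_batch_with_replacement(losses: np.ndarray, batch_size: int,
                                  rng: np.random.Generator) -> np.ndarray:
    p = losses / losses.sum()  # biased sampling distribution over the dataset
    return rng.choice(len(losses), size=batch_size, replace=True, p=p)
```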
While these techniques promise to converge in fewer epochs on the whole dataset, each epoch requires computing the importance of the samples, which is time-consuming.

**Data Pruning techniques** are used to reduce the size of the dataset by removing less important samples. Pruning the dataset requires training on the full dataset and adds significant overheads for quantifying individual differences between data points [27]. However, the assumption is that the advantage would be a reduced dataset that replaces the original dataset when used by others to train. Several studies investigate the selection of the samples to discard from a dataset [13; 15; 14; 16; 28]. Pruning the dataset does reduce the training time without significantly degrading the accuracy [13; 14]. However, these techniques require fully training the model on the whole dataset to identify the samples to be removed, which is compute intensive.

**Selective-Backprop** [17] combines importance sampling and online data pruning. It reduces the number of samples to train on by using the output of each sample's forward pass to estimate the sample's importance, and cuts a fixed fraction of the dataset at each epoch. While this method shows notable speedups, it has been evaluated only on tiny datasets without providing any measurements on how accuracy is impacted. In addition, the authors allow the test error to degrade by up to 10% in their experiments.

**Grad-Match** [18] is an online method that selects a subset of the samples that minimizes the gradient matching error. The authors approximate the gradients by only using the gradients of the last layer, use a per-class approximation, and run data selection every \(R\) epochs, in which case the same subsets and weights are used between epochs. Due to the infrequent selection of samples, Grad-Match often needs a larger number of epochs to converge to the same validation accuracy that can be achieved by the baseline [29]. Moreover, Grad-Match is impractical in distributed training, which is a de facto requirement for large datasets and models. A distributed Grad-Match would require very costly collective communication to collect the class approximations and to perform the matching optimization; this communication cost per epoch could even exceed the average time per epoch.

Figure 1: **Overview of KAKURENBO. At each epoch, samples are filtered into two different subsets, the training list and the hidden list, based on their loss, prediction accuracy (PA), and prediction confidence (PC), with a maximum hidden fraction of \(F\). PA and PC are used to drive sample move-back decisions. Samples in the training list are processed using uniform sampling without replacement. The loss and the prediction accuracy, calculated from the training process, are reused to filter samples in the next epoch. For samples on the hidden list, KAKURENBO only calculates the loss and PA by performing the forward pass at the end of each epoch.**

## 3 KAKURENBO: Adaptively Hiding Samples

In this work, we reduce the amount of work in training by adaptively choosing samples to hide in each epoch. We consider a model with a loss function \(\ell(\mathbf{w},\mathbf{x}_{n},\mathbf{y}_{n})\), where \(\left\{\mathbf{x}_{n},\mathbf{y}_{n}\right\}_{n=1}^{N}\) is a dataset of \(N\) sample-label pairs (\(\mathbf{x}_{n}\in X\)), and \(G:X\to X\) is a function that is applied to hide certain samples during training, e.g., by ranking samples and cutting some of them off.
Using SGD with a learning rate \(\eta\) and a batch size of \(B\), the update rule for each batch when training with the original full dataset is \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}\left(k\left(t\right)\right)}\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_{n},\mathbf{y}_{n}\right) \tag{1}\] where \(k\left(t\right)\) is sampled from \(\left[N/B\right]\triangleq\left\{1,\ldots,N/B\right\}\) and \(\mathcal{B}\left(k\right)\) is the set of samples in batch \(k\) (to simplify, we assume \(N\) is divisible by \(B\)). We propose to hide \(M\) examples by applying the hiding function \(G\). We modify the learning rule to be \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}\left(k\left(t\right)\right)}\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},G(\mathbf{x}_{n}),\mathbf{y}_{n}\right) \tag{2}\] using batches of size \(B\) at each step. Since we exclude \(M\) samples, the aggregate number of steps per epoch is reduced from \(N/B\) to \(\left(N-M\right)/B\); i.e., fixing the batch size and reducing the number of samples reduces the number of SGD iterations that are performed in each epoch. Sample hiding happens before presenting the input to each epoch. The training set that excludes the hidden samples (\(N-M\) samples) is then shuffled and trained on with the typical without-replacement uniform sampling method. Based on the above training strategy, we propose KAKURENBO, a mechanism to dynamically reduce the dataset during model training by selecting important samples. The workflow of our scheme is summarized in Figure 1. First, (B.1) we sort the samples of the dataset according to their loss. We then (B.2) select a subset of the dataset by _hiding_ a fixed fraction \(F\) of the data: the samples with the lowest loss are removed from the training set. Next, (B.3) hidden samples that maintain a correct prediction with high confidence (see Section 3.1) are moved back to the epoch training set. The training process (C) uses uniform sampling without replacement to pick samples from the training list. KAKURENBO adapts the learning rate (C.2) to maintain the pace of the SGD. At the end of the epoch, we perform the forward pass on samples to compute their loss and the prediction information on the up-to-date model (D). However, because calculating the loss for all samples in the dataset is prohibitively compute-intensive [11], we propose to reuse the loss computed during the training process, which we call the _lagging_ loss (D.2). We only recompute the loss of samples from the hidden list (D.1). In the following, we detail the steps of KAKURENBO.

### 3.1 Hidden Samples Selection

We first present our proposed algorithm for selecting the samples to hide in each epoch. We follow the observation in [11] that not all samples are equal, so that not-too-important samples can be hidden during training. An important sample is defined as one that contributes highly to the model update, e.g., has a large gradient norm \(\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_{n},\mathbf{y}_{n}\right)\) in Equation 1. Removing the fraction \(F\) of samples with the least impact on the training model from the training list could reduce the training time, i.e., the required computing resources, without affecting the convergence of the training process. Selecting the fraction \(F\) is arbitrary and driven by the dataset/model. If the fraction \(F\) is too high, the accuracy could drop.
In contrast, the performance gained from hiding samples will be limited if \(F\) is small, or potentially less than the overhead to compute the importance of samples. In this work, we aim to design an adaptive method to select the fraction \(F^{*}\) in each epoch. We start from a tentative maximum fraction \(F\) at the beginning of the training process. We then carefully select the hidden samples from \(F\) based on their importance and then move the remaining samples back to the training set. That is, at each epoch a dynamic hiding fraction \(F^{*}\) is applied. It is worth noting that the maximum fraction \(F\) does not need to be strictly accurate in our design; it is a maximum ceiling and not the exact number of samples that will be hidden. However, if the negative impact of hiding samples becomes too high, it could significantly affect the accuracy, for example, when a high maximum fraction \(F\) is set and/or when most of the samples have nearly the same absolute contribution to the update, e.g., at the latter epochs of the training process. We investigate how to choose the maximum hiding fraction in each epoch in Section 3.3.

**Moving Samples Back:** Since the loss is computed in the forward pass, it is frequently used as the metric for the importance of a sample, i.e., samples with high loss contribute more to the update and are thus important [11; 22]. However, the samples with the smallest loss do not necessarily have the least impact (i.e., gradient norm) on the model, which is particularly true at the beginning of the training, and removing such high-impact samples may hurt accuracy. To mitigate the misselection of important samples as unimportant ones, we propose an additional rule to filter the low-loss samples based on the observation of historical prediction confidence [13]. The authors in [13] observed that some samples have a low frequency of toggling back from being classified correctly to incorrectly over the training process. Such samples can be pruned from the training set permanently. Because estimating the per-sample prediction confidence before training (i.e., offline) is compute-intensive, in this work, we perform an online estimation to decide whether an individual sample has a history of correct prediction with high confidence or not in a given epoch. Only samples that have low loss and sustain correct prediction with high confidence in the current epoch are hidden in the following epoch. A sample is correctly predicted with high confidence at an epoch \(e\) if it is predicted correctly (**PA**) and the prediction confidence (**PC**) is no less than a threshold \(\tau\), which we call the _prediction confidence threshold_, at the previous epoch. The prediction confidence of a given sample \((x,y)\) is the largest class probability that the model outputs for this sample: \[\begin{split} out&=model(\mathbf{w}_{e},x,y)\\ PC&=\max_{k}(\sigma(out_{k}))\end{split} \tag{3}\] where \(\sigma\) is the softmax activation function. In this work, unless otherwise mentioned, we set the prediction confidence threshold to \(\tau=0.7\) as investigated in Section 4.3.

### Reducing the Number of Iterations in Batch Training: Learning Rate Adjustment

After hiding samples, KAKURENBO uses uniform without-replacement sampling to train on the remaining samples from the training set.
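Before analyzing convergence, the following is a minimal sketch of the per-epoch hide-and-move-back rule of Section 3.1 (Eq. 3). It is an illustrative assumption of how the bookkeeping could look, not the authors' implementation; only the rule itself and the \(\tau=0.7\) default come from the text.

```python
import torch

def select_samples(losses, correct, confidence, max_fraction, tau=0.7):
    """Split sample indices into a training list and a hidden list.

    losses:     lagging per-sample loss from the previous epoch
    correct:    bool tensor, True if the sample was predicted correctly (PA)
    confidence: max softmax probability of the prediction (PC, Eq. 3)
    """
    n = losses.numel()
    m = int(max_fraction * n)                # ceiling on the number of hidden samples
    candidates = torch.argsort(losses)[:m]   # tentatively hide the lowest-loss samples
    # Keep hidden only samples that were predicted correctly with
    # confidence >= tau; the other candidates are moved back to training.
    stays_hidden = correct[candidates] & (confidence[candidates] >= tau)
    hidden = candidates[stays_hidden]
    train_mask = torch.ones(n, dtype=torch.bool)
    train_mask[hidden] = False
    return train_mask.nonzero(as_tuple=True)[0], hidden
```

The returned training list is then shuffled and consumed with uniform sampling without replacement, while a forward pass over the hidden list at the end of the epoch refreshes its loss, PA, and PC (Step D in Figure 1).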
In this section, we examine issues related to convergence when reducing the number of samples, and we provide insight into the desirable convergence properties of adaptively hiding examples. Implicit bias in the SGD training process may lead to convergence problems [30]: when reducing the total number of iterations at fixed batch sizes, SGD selects minima with worse generalization. We examine the selection mechanism in SGD when reducing the number of iterations at a fixed batch size. For optimizations on the original datasets, i.e., without example hiding, we use loss functions of the form \[f\left(\mathbf{w}\right)=\frac{1}{N}\sum_{n=1}^{N}\ell\left(\mathbf{w},\mathbf{x}_{n},\mathbf{y}_{n}\right)\,, \tag{4}\] where \(\left\{\mathbf{x}_{n},\mathbf{y}_{n}\right\}_{n=1}^{N}\) is a dataset of \(N\) data example-label pairs and \(\ell\) is the loss function. We use SGD with batches of size \(B\) and learning rate \(\eta\) with the update rule \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}(k(t))}\nabla_{\mathbf{w}}\ell\left(\mathbf{w}_{t},\mathbf{x}_{n},\mathbf{y}_{n}\right)\,. \tag{5}\] for without-replacement sampling, \(N\) divisible by \(B\) (to simplify), and \(k\left(t\right)\) sampled uniformly from \(\left\{1,\dots,N/B\right\}\). When using an over-parameterized model, as is the case with deep neural networks, we typically converge to a minimum \(\mathbf{w}^{*}\) that is a global minimum on all \(N\) data points in the training set [31; 14]. Following Hoffer et al. [32], linearizing the dynamics of Eq. 5 near \(\mathbf{w}^{*}\) (\(\forall n:\nabla_{\mathbf{w}}\ell\left(\mathbf{w}^{*},\mathbf{x}_{n},\mathbf{y}_{n}\right)=0\)) gives \[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{B}\sum_{n\in\mathcal{B}(k(t))}\mathbf{H}_{n}\mathbf{w}_{t}\, \tag{6}\] where we assume \(\mathbf{w}^{*}=0\) without loss of generality, since the over-parameterized models we target (i.e., deep networks) converge to such a minimum \(\mathbf{w}^{*}\), and where \(\mathbf{H}_{n}\triangleq\nabla_{\mathbf{w}}^{2}\ell\left(\mathbf{w},\mathbf{x}_{n},\mathbf{y}_{n}\right)\) denotes the per-example loss Hessian. SGD can select only certain minima from the many potentially different global minima of the loss function of the full training set of \(N\) samples (and, without loss of generality, of the training dataset of \(N-M\) samples after hiding). The selection of minima by SGD depends on the batch size and learning rate through the averaged Hessian over batch \(k\) \[\left\langle\mathbf{H}\right\rangle_{k}\triangleq\frac{1}{B}\sum_{n\in\mathcal{B}(k)}\mathbf{H}_{n}\] and the maximum over the maximal eigenvalues of \(\left\{\left\langle\mathbf{H}\right\rangle_{k}\right\}_{k=1}^{N/B}\) \[\lambda_{\max}=\max_{k\in[N/B]}\max_{\forall\mathbf{v}:\|\mathbf{v}\|=1}\mathbf{v}^{\top}\left\langle\mathbf{H}\right\rangle_{k}\mathbf{v}. \tag{7}\] This \(\lambda_{\max}\) affects SGD through the theorem proved by Hoffer et al. [32]: the iterates of SGD (Eq. 6) will converge if \[\lambda_{\max}<\frac{2}{\eta}\,.\] The theorem implies that a high learning rate restricts convergence to global minima with low \(\lambda_{\max}\), i.e., low variability of \(\mathbf{H}_{n}\). Since in this work we fix the batch size but reduce the number of batches per epoch, the maximum in Eq. 7 is taken over fewer averaged Hessians \(\left\langle\mathbf{H}\right\rangle_{k}\), which can only lower \(\lambda_{\max}\) at a given minimum. Therefore, certain minima with high variability in \(\mathbf{H}_{n}\) will remain accessible to SGD.
Now SGD may converge to these high-variability minima, which were suggested to exhibit worse generalization performance than the original minima [33]. We mitigate this problem by scaling the learning rate up relative to the baseline schedule (after the warm-up phase [34]). That way, we make these new minima inaccessible again while keeping the original minima accessible. Specifically, KAKURENBO adjusts the learning rate at each epoch (or each iteration) \(e\) by the following rule: \[\eta_{e}=\eta_{base,e}\times\frac{1}{1-F_{e}} \tag{8}\] where \(\eta_{base,e}\) is the learning rate at epoch \(e\) in the non-hiding scenario and \(F_{e}\) is the hiding fraction at epoch \(e\). By multiplying the base learning rate by the factor \(\frac{1}{1-F_{e}}\), KAKURENBO remains independent of the learning rate scheduler of the baseline scenario and of any other techniques related to the learning rate.

### Adjusting the Maximum Hidden Fraction \(F\)

Merely changing the learning rate may not be sufficient: some minima with high variability and some with low variability will eventually have similar \(\lambda_{\max}\), so SGD will not be able to discriminate between these minima. To account for this, we introduce a schedule to reduce the maximum hidden fraction. Consider the optimum \(\mathbf{w}_{\mathbf{M}}\) obtained on the set of hidden samples \(G(\mathbf{x}_{n})\) and an overall loss function \(F(\cdot)\) that acts as a surrogate loss for problems which are sums of non-convex losses \(f_{i}(\mathbf{w})\), where each is individually non-convex in \(\mathbf{w}\). With Lipschitz continuous gradients with constant \(L_{i}\), we can assume \[\|\nabla f_{i}(\mathbf{w}_{\mathbf{1}})-\nabla f_{i}(\mathbf{w}_{\mathbf{2}})\|\leq L_{i}\|\mathbf{w}_{\mathbf{1}}-\mathbf{w}_{\mathbf{2}}\|\,.\] Since we are hiding samples when computing the overall loss function \(F(\cdot)\), we assume each of the functions \(f_{i}(\cdot)\) shares the same minimum value \(\min_{\mathbf{w}}f_{i}(\mathbf{w})=\min_{\mathbf{w}}f_{j}(\mathbf{w})\ \forall\ i,j\). We extend the proof of the theorem on the guarantees for a linear rate of convergence for smooth functions with strong convexity [35] to the non-convex landscape obtained when training with hidden samples (proof in Appendix A).

**Lemma 1**.: _Let \(F(\mathbf{w})=\mathbb{E}[f_{i}(\mathbf{w})]\) be non-convex. Set \(\sigma^{2}=\mathbb{E}[\|\nabla f_{i}(\mathbf{w}_{\mathbf{M}})\|^{2}]\) with \(\mathbf{w}^{*}:=\operatorname*{arg\,min}F(\mathbf{w})\). Suppose \(\eta\leq\frac{1}{\sup_{i}L_{i}}\). Let \(\Delta_{t}=\mathbf{w}_{t}-\mathbf{w}^{*}\). After \(T\) iterations, SGD satisfies:_ \[\mathbb{E}\left[\|\Delta_{T}\|^{2}\right]\leq(1-2\eta\hat{C})^{T}\|\Delta_{0}\|^{2}+\eta R_{\sigma} \tag{9}\] _where \(\hat{C}=\lambda(1-\eta\sup_{i}L_{i})\) and \(R_{\sigma}=\frac{\sigma^{2}}{\hat{C}}\)._

Since the per-sample losses \(f_{i}(\mathbf{w})\) effectively decrease over training, driven by the weight updates, we correspondingly decrease the maximum fraction that can be hidden so as to satisfy Eq. 9. Specifically, we suggest selecting a reasonable number that is not too high at the first epoch, e.g., \(F=0.3\). We then adjust the maximum fraction per epoch (denoted as \(F_{e}\)). We suggest using step scheduling, i.e., reducing the maximum hiding fraction gradually by a factor of \(\alpha\) as the number of epochs increases. For example, we set \(\alpha\) to [1, 0.8, 0.6, 0.4] at epochs [0, 30, 60, 80] for ImageNet-1K and [0, 60, 120, 180] for CIFAR-100, respectively.
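As a concrete illustration of Eq. 8 and the step schedule, a minimal sketch follows; the function names are ours, and the milestone values are the ImageNet-1K setting quoted above.

```python
def max_hiding_fraction(epoch, f=0.3,
                        milestones=(0, 30, 60, 80),
                        factors=(1.0, 0.8, 0.6, 0.4)):
    """Step schedule for the maximum hiding fraction (ImageNet-1K setting)."""
    alpha = factors[0]
    for start, a in zip(milestones, factors):
        if epoch >= start:
            alpha = a
    return f * alpha

def adjusted_learning_rate(eta_base_e, f_e):
    """Eq. 8: scale the baseline learning rate by 1 / (1 - F_e)."""
    return eta_base_e / (1.0 - f_e)
```

Here \(F_e\) is the fraction actually hidden at epoch \(e\) (after move-backs), so the adjustment composes with any baseline learning rate scheduler.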
### Update Loss and Prediction

Our technique is inspired by the observation that the importance of each sample does not change abruptly across multiple SGD iterations [22]. We propose to reuse the loss and historical prediction confidence computed during the training process, and to only recompute those metrics for samples from the hidden list. Specifically, the loss and historical prediction confidence of samples are computed only once per epoch, i.e., when the samples are fed to the forward pass. They are not re-calculated at the end of each epoch based on the latest model. Therefore, only samples of the last training iteration of a given epoch have an up-to-date loss. Furthermore, because we re-calculate the loss of hidden samples, i.e., we only skip the backward and weight update passes for these samples, the loss of hidden samples is also up-to-date. For instance, if we cut off 20% of samples, we have nearly 20% up-to-date losses and 80% not-up-to-date losses at the end of each epoch. As a result, in comparison to the baseline scenario, KAKURENBO reduces the total backward and weight update time by a fraction of \(F_{e}\) while not requiring any extra forward time.

## 4 Evaluation

We evaluate KAKURENBO using several models on various datasets. We measure the effectiveness of our proposed method on two large datasets. We use ResNet-50 [36] and EfficientNet [37] on ImageNet-1K [19], and DeepCAM [4], a scientific image segmentation model with its accompanying dataset. To confirm the correctness of the baseline algorithms, we also use WideResNet-28-10 on the CIFAR-100 dataset. Details of the experiment settings and additional experiments, such as ablation studies and robustness evaluation, are reported in Appendix B and Appendix C. We compare the following training strategies:

* **Baseline**: We follow the original training regime and hyper-parameters suggested by their authors, using uniform sampling without replacement.
* **Importance Sampling With Replacement** [11] **(ISWR)**: In each iteration, each sample is chosen with a probability proportional to its loss. The with-replacement strategy means that a sample may be selected several times during an epoch, and the total number of samples fed to the model is the same as in the baseline implementation.
* **FORGET** is an online version of a pruning technique [13]: instead of fully training the model using the whole dataset before pruning, we train it for 20 epochs, and a fraction \(F\) of unforgettable samples (i.e., samples that are always correctly classified) is pruned from the dataset\({}^{1}\). The training then restarts from epoch \(0\). We report the total training time that includes the 20 epochs of training with the whole dataset and the full training with the pruned dataset. Footnote 1: We choose the samples to remove by increasing number of forgetting events as in [13].
* **Selective Backprop (SB)** [17] prioritizes samples with high loss at each iteration. It performs the forward pass on the whole dataset, but only performs backpropagation on a subset of the dataset.
* **Grad-Match** [18] trains using a subset of the dataset. Every \(R\) epochs, a new subset is selected so that it minimizes the gradient matching error.
* **KAKURENBO**: our proposed method, where samples are hidden dynamically during training.

It is worth noting that we follow the hyper-parameters reported in [38] for training ResNet-50, [39] for training WideResNet-28-10, [37] for training EfficientNet-b3, and [4] for DeepCAM.
We show the details of our hyper-parameters in Appendix B. We configure ISWR and FORGET to remove the same fraction \(F\) as KAKURENBO. For SB, we use the \(\beta=1\) parameter, which results in removing \(50\%\) of samples. Unless otherwise mentioned, our default setting for the maximum hidden fraction \(F\) for KAKURENBO is \(30\%\), except for the small CIFAR-100 dataset, for which we use \(10\%\) (see below). To maintain fairness in comparisons between KAKURENBO and other state-of-the-art methods, we use the same model and dataset with the same hyper-parameters. This means we cannot use state-of-the-art hyper-parameter tuning methods to improve the accuracy of ResNet-50/ImageNet (e.g., as in [40]), since such tuning methods are not applicable to some of the methods we compare with. In particular, we cannot apply Grad-Match for training with a large batch size on multiple GPUs. Thus, we compare KAKURENBO with Grad-Match using the setting reported in [18], i.e., the CIFAR-100 dataset and the ResNet-18 model.

### Accuracy

The progress of the top-1 test accuracy with a maximum hiding fraction of \(0.3\) is shown in Figure 2. Table 2 summarizes the final accuracy for each experiment. We present data on the small dataset of CIFAR-100 to confirm the correctness of our implementation of ISWR, FORGET, and SB. Table 3 reports the single-GPU accuracy obtained with Grad-Match because it cannot work on distributed systems. For CIFAR-100, we report similar behavior as reported in the original work on ISWR [11], SB [17], FORGET [13], and Grad-Match [18]: ISWR, FORGET, and Grad-Match degrade accuracy by approximately 1%, while SB and KAKURENBO perform roughly as well as the baseline. KAKURENBO on CIFAR-100 only maintains the baseline accuracy for small fractions (e.g., \(F=0.1\)). When hiding a larger part of the dataset, the remaining training set becomes too small, and the model does not generalize well. On the contrary, on large datasets such as ImageNet-1K, ISWR and KAKURENBO slightly improve accuracy in comparison to the baseline, while FORGET and SB degrade accuracy by \(1.2\%\) and \(3.5\%\), respectively. On DeepCAM, KAKURENBO does not affect the accuracy, while ISWR degrades it by \(2.4\%\) in comparison to the baseline\({}^{2}\). Table 4 reports the accuracy obtained for transfer learning. We do not report Grad-Match results because we could not apply it to this application. Using SB significantly degrades accuracy compared to the baseline, while ISWR, FORGET, and KAKURENBO maintain the same accuracy as the baseline. In particular, as reported in Figure 3, the testing accuracy obtained by KAKURENBO varies when changing the maximum hiding fraction.
We observe that for small hiding fractions, KAKURENBO achieves the same accuracy as the baseline. When increasing hiding fractions, as expected, the degradation of the testing accuracy becomes more significant.

\begin{table} \begin{tabular}{l|r r|r r|r r|r r} \hline \hline \multirow{2}{*}{**Setting**} & \multicolumn{2}{c|}{**CIFAR-100**} & \multicolumn{4}{c|}{**ImageNet-1K**} & \multicolumn{2}{c}{} \\ & \multicolumn{2}{c|}{**WRN-28-10**} & \multicolumn{2}{c|}{**ResNet-50**} & \multicolumn{2}{c|}{**EfficientNet-b3**} & \multicolumn{2}{c}{**DeepCAM**} \\ \hline & **Acc.** & **Diff.** & **Acc.** & **Diff.** & **Acc.** & **Diff.** & **Acc.** & **Diff.** \\ \hline Baseline & 77.49 & & 74.89 & & 76.63 & & 78.14 & \\ \hline ISWR & 76.51 & (-0.98) & 74.91 & (+0.02) & N/A & & 75.75 & (-2.39) \\ \hline FORGET & 76.14 & (-1.35) & 73.70 & (-1.20) & N/A & & N/A & \\ \hline SB & 77.03 & (-0.46) & 71.37 & (-3.52) & N/A & & N/A & \\ \hline KAKURENBO & 77.21 & (-0.28) & 75.15 & (+0.26) & 76.23 & (-0.5) & 77.42 & (-0.9) \\ \hline \hline \end{tabular} \end{table} Table 2: Max testing accuracy (Top-1) in percentage of KAKURENBO in comparison with the Baseline and other SOTA methods. **Diff.** represents the gap to the Baseline.

\begin{table} \begin{tabular}{l|r r} \hline \hline \multirow{2}{*}{**Setting**} & \multicolumn{2}{c}{**CIFAR-100**} \\ & \multicolumn{2}{c}{**ResNet-18**} \\ \hline & **Acc.** & **Time (sec)** \\ \hline Baseline & 77.98 & 856 \\ \hline Grad-Match-0.3 & \begin{tabular}{r} 76.87 \\ (-1.11) \\ \end{tabular} & \begin{tabular}{r} 8104 \\ (-5.3\%) \\ \end{tabular} \\ \hline KAKURENBO-0.3 & \begin{tabular}{r} 77.05 \\ (-0.93) \\ \end{tabular} & \begin{tabular}{r} 8784 \\ (+2.7\%) \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with Grad-Match on a single GPU (cutting fraction is set to \(0.3\)).

Figure 2: Convergence and speedup of KAKURENBO and importance sampling (ISWR).

### Convergence Speedup and Training Time

Here we discuss KAKURENBO's impact on training time. Figure 2 reports test accuracy as a function of elapsed time (note the X-axis), and reports the training time to a target accuracy. Table 4 reports the upstream training time of DeiT-Tiny-224. The key observation of these experiments is that KAKURENBO reduces the training time of WideResNet by \(21.7\%\), of ResNet-50 by \(23\%\), of EfficientNet by \(13.7\%\), of DeepCAM by \(22.4\%\), and of DeiT-Tiny by \(15.1\%\) in comparison to the baseline training regime. Surprisingly, Importance Sampling With Replacement (ISWR) [11] introduces an overhead of \(34.8\%\) on WideResNet and of \(41\%\) on ImageNet-1K, and offers only a slight improvement of \(2.5\%\) on DeepCAM. At each epoch, ISWR processes the same number of samples as the baseline. Yet, it imposes an additional overhead of keeping track of the importance (i.e., the loss) of all input samples. While on DeepCAM it achieves a modest speedup due to its faster convergence, these experiments reveal that ISWR's behavior differs widely on large datasets from the smaller ones previously reported [11; 17]. FORGET increases the training time of WideResNet by \(46.1\%\) because of the additional 20 epochs of training on the whole dataset needed for pruning the samples. When the number of epochs is large, such as for ResNet-50, which runs for 600 epochs, FORGET decreases the training time by \(17.9\%\) (and by \(14.4\%\) for DeiT). However, this reduction of training time comes at the cost of a degradation of the test accuracy.
On WideResNet and ResNet, SB performs similarly to KAKURENBO by reducing the training time without altering the accuracy. However, SB significantly degrades accuracy compared to the baseline for ImageNet and DeiT. It is worth noting that KAKURENBO has computation overheads for updating the loss and prediction (Step D in Figure 1) and for sorting the samples based on the loss (Step A in Figure 1). For example, Figure 4 reports the measured speedup per epoch as compared to the baseline epoch duration. The speedup follows the same trend as the hiding rate. This is because reducing the number of samples in the training set impacts the speed of the training. The measured speedup does not reach the maximum hiding rate because of the computation overhead. The performance gain from hiding samples will be limited if the maximum hiding fraction \(F\) is small, or potentially less than the overhead to compute the importance score of samples. In experiments using multiple GPUs, those operations are performed in parallel to reduce the running time overhead. When using a single GPU on CIFAR-100 with ResNet-18 (Table 3), the computational overhead is bigger than the speedup gained from hiding samples. Thus, KAKURENBO takes more training time in this case. In short, KAKURENBO is optimized for large-scale training and provides more benefits when running on multiple GPUs.

\begin{table} \begin{tabular}{l|l|l|r r r r r} \hline \hline & Dataset & Metrics & Baseline & ISWR & FORGET & SB & KAKUR. \\ \hline Up & \multirow{2}{*}{Fractal-3K} & Loss & 3.26 & 3.671 & 3.27 & 4.18 & 3.59 \\ stream & & Time (min) & 623 & 719 & 533 & 414 & 529 \\ & & Impr. & - & (+15.4\%) & (-14.4\%) & (-33.5\%) & (-15.1\%) \\ \hline Down & \multirow{2}{*}{CIFAR-10} & Acc. (\%) & 95.03 & 95.79 & 95.85 & 93.59 & 95.28 \\ stream & & Diff. & - & (+0.76) & (+0.82) & (-1.44) & (+0.25) \\ \cline{2-7} & \multirow{2}{*}{CIFAR-100} & Acc. (\%) & 79.69 & 79.62 & 79.95 & 76.98 & 79.35 \\ & & Diff. & - & (-0.07) & (+0.26) & (-2.71) & (-0.34) \\ \hline \hline \end{tabular} \end{table} Table 4: Impact of KAKURENBO in transfer learning with the DeiT-Tiny-224 model.

Figure 3: Test accuracy vs. epoch of KAKURENBO with different maximum hiding fractions \(F\).

### Ablation Studies

**Impact of prediction confidence threshold \(\tau\).** A higher prediction confidence threshold \(\tau\) leads to a higher number of samples being moved back to the training set, i.e., fewer hidden samples at the beginning of the training process. At the end of the training process, when the model is well-trained, more samples are predicted correctly with high confidence. Thus, the impact of the prediction confidence threshold on the number of moved-back samples becomes smaller (as shown in Figure 4). The result in Table 5 shows that when we increase the threshold \(\tau\), we obtain better accuracy (fewer hidden samples), but at the cost of a smaller performance gain. We suggest setting \(\tau=0.7\) in all the experiments as a good trade-off between training time and accuracy.

**Impact of different components of KAKURENBO.** We evaluate how KAKURENBO's individual internal strategies, and their combination, affect the testing accuracy of a neural network. Table 6 reports the results we obtained when training ResNet-50 on ImageNet-1K\({}^{3}\) with a maximum hiding fraction of \(40\%\). The results show that when only HE (Hiding Examples) of the \(40\%\) lowest-loss samples is performed, accuracy slightly degrades.
Combining HE with other strategies, namely MB (Move-Back), RF (Reducing Fraction), and LR (Learning Rate adjustment), gradually improves testing accuracy. In particular, all combinations with RF achieve higher accuracy than the ones without it. For example, the accuracy of v1110 is higher than that of v1100 by about \(0.59\%\). We also observe that using LR helps to improve the testing accuracy by a significant amount, i.e., by \(0.46\)% to \(0.83\)%. The MB strategy also improves accuracy. For example, the accuracy of v1010 is \(72.81\%\), compared to v1110, which is \(72.96\%\). This small impact of MB on the accuracy is due to samples being moved back mostly at the beginning of the training, as seen in Appendix C.3. By using all the strategies, KAKURENBO achieves the best accuracy of \(73.6\%\), which is very close to the baseline of \(73.68\%\). Footnote 3: We use the ResNet-50 (A) configuration in this evaluation as shown in Appendix B.

## 5 Conclusion

We have proposed KAKURENBO, a mechanism that adaptively hides samples during the training of deep neural networks. It assesses the importance of samples and temporarily removes the ones that would have little effect on the SGD convergence. This reduces the number of samples to process at each epoch without degrading the prediction accuracy. KAKURENBO combines the knowledge of historical prediction confidence with loss and moves samples back to the training set when necessary. It also dynamically adapts the learning rate in order to maintain the convergence pace. We have demonstrated that this approach reduces the training time without significantly degrading the accuracy on large datasets.

\begin{table} \begin{tabular}{l|c c c c|c} \hline \hline & \multicolumn{4}{c|}{**Component**} & \multicolumn{1}{c}{**Accuracy**} \\ & HE & MB & RF & LR & \\ \hline Baseline & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 73.68 \\ \hline v1000 & ✓ & \(\times\) & \(\times\) & \(\times\) & 72.25 (-1.8\%) \\ v1001 & ✓ & \(\times\) & \(\times\) & ✓ & 73.08 (-0.7\%) \\ v1010 & ✓ & \(\times\) & ✓ & \(\times\) & 72.81 (-1.1\%) \\ v1011 & ✓ & \(\times\) & ✓ & ✓ & 73.27 (-0.4\%) \\ v1100 & ✓ & ✓ & \(\times\) & \(\times\) & 72.37 (-1.7\%) \\ v1101 & ✓ & ✓ & \(\times\) & ✓ & 73.09 (-0.7\%) \\ v1110 & ✓ & ✓ & ✓ & \(\times\) & 72.96 (-0.9\%) \\ KAKUR. (v1111) & ✓ & ✓ & ✓ & ✓ & 73.6 \\ \hline \hline \end{tabular} \end{table} Table 6: The impact of different components of KAKURENBO on testing accuracy, including **HE**: Hiding \(F\)% lowest-loss examples, **MB**: Moving Back, **RF**: Reducing the Fraction by epoch, **LR**: Adjusting Learning Rate. Numbers in parentheses indicate the gap in percentage compared to the full version of KAKURENBO.

Figure 4: Reduction of hiding fraction, per epoch, and the resulting speedup.

## Acknowledgments

This work was supported by JSPS KAKENHI under Grant Numbers JP21K17751 and JP22H03600. This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was supported by MEXT as "Feasibility studies for the next-generation computing infrastructure" and JST PRESTO Grant Number JPMJPR20MA. We thank Rio Yokota and Hirokatsu Kataoka for their support on the Fractal-3K dataset.
2301.09312
Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration
Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest which addresses the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting the idea of differentiable neural architecture search. However, despite the superior search efficiency of the differentiable co-exploration, it faces a critical challenge of not being able to systematically satisfy hard constraints such as frame rate. To handle the hard constraint problem of differentiable co-exploration, we propose HDX, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, high-quality solutions satisfying the constraint can be obtained.
Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee
2023-01-23T08:15:09Z
http://arxiv.org/abs/2301.09312v1
# Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration

###### Abstract.

Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest which addresses the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting the idea of differentiable neural architecture search. However, despite the superior search efficiency of the differentiable co-exploration, it faces a critical challenge of not being able to systematically satisfy hard constraints such as frame rate. To handle the hard constraint problem of differentiable co-exploration, we propose HDX, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, high-quality solutions satisfying the constraint can be obtained.
### DNN-Accelerator Co-exploration

The early work on co-exploration utilizes variants of reinforcement learning or evolutionary algorithms to leverage their simplicity (Bahdan et al., 2015; Chen et al., 2015; Chen et al., 2016; Li et al., 2017; Li et al., 2018; Li
et al., 2019). Each candidate network is trained for evaluation, while the accelerator design is analyzed for hardware efficiency. These values create rewards used by the agent to create the next candidate solution. However, they all inherit the same problem from RL-based NAS methods in that they require expensive training to evaluate each candidate solution. To make matters worse, co-exploration requires an even larger network/hardware search space than searching only for networks. In this regard, differentiable approaches were adopted for co-exploration (Li et al., 2017). Auto-NBA (Li et al., 2018) used a differentiable accelerator search engine to build a joint-search pipeline, and DANCE (Li et al., 2019) trained auxiliary neural networks for hardware search and cost evaluation. However, none of the above properly addresses the hard constraint problem. In this work, we propose a holistic method of handling hard constraints in differentiable co-exploration.

## 3. Motivational Experiment

The most straightforward and naive way to handle hard constraints within differentiable co-exploration would be to tune the relative weight of the hardware cost. For example, below is the loss function used in differentiable co-exploration (Li et al., 2018; Li et al., 2018): \[\mathcal{Loss}=\mathcal{Loss}_{CE}+\lambda_{Cost}Cost_{HW}, \tag{1}\] which is designed to co-optimize accuracy and hardware cost simultaneously, where \(\lambda_{Cost}\) balances the two terms\({}^{1}\). By increasing \(\lambda_{Cost}\), one can indirectly instruct the search process to consider hardware metrics more. However, giving a larger penalty does not directly lead to a reduction in the value of the constrained metric. Figure 1 plots how changing \(\lambda_{Cost}\) in Eq. (1) from 0.001 to 0.010 affects the latency/energy and the classification error for the CIFAR-10 dataset. Searches were done three times for each setting and plotted with the same colors, with large dots for their averages. Even though some trend depending on \(\lambda_{Cost}\) is observed, inconsistency in both the direction and the variance of the trajectory is more dominant.

Footnote 1: It differs from the hyperparameters of a typical machine learning formulation, where the two terms serve toward a single objective.

Consider a scenario where a designer wants to design a neural network-accelerator architecture pair with latency under some constraint (e.g., 33.3 ms), using the conventional co-exploration methods. The designer would try searching with some initial \(\lambda_{Cost}\) and adjust the value over the course of multiple searches. However, such inconsistency between \(\lambda_{Cost}\) and the latency makes it extremely difficult to obtain an adequate solution, not to mention the huge time cost of performing the search numerous times. Despite the difficulties that lie in tackling a hard-constrained co-exploration problem, designing an effective strategy is necessary.

## 4. Hard-Constrained Co-Exploration

### Problem Definition

The mathematical formulation of hard-constrained differentiable co-exploration is as below: \[\operatorname*{arg\,min}_{\alpha,\beta}(\mathcal{Loss}_{NAS}(w^{*},net(\alpha))+\lambda_{Cost}Cost_{HW}(eval(\alpha,\beta))),\] \[\text{s.t.}\;\;w^{*}=\operatorname*{arg\,min}_{w}(\mathcal{Loss}_{NAS}(w,net(\alpha))),\;\;\;t\leq T, \tag{2}\] where \(t\) denotes the current value of the constrained metric, such as latency or energy, and \(T\) is the target value (e.g., 33.3 ms for latency).
\(\alpha\) and \(\beta\) denote the network architecture parameters and the hardware accelerator configuration, respectively. \(w\) denotes the weights of the NAS supernet, and \(net(\alpha)\) is the current dominant network architecture selected. \(eval(\alpha,\beta)\) indicates the hardware metrics evaluated for \(\alpha\) and \(\beta\). The objective of co-exploration is expressed using two distinct evaluation metrics: the neural architecture loss (\(\mathcal{Loss}_{NAS}\)) and the hardware cost (\(Cost_{HW}\)) defined by the user.

### Differentiable Co-exploration Framework

Although our main contribution is that we enable hard constraints, we explain our framework for the co-exploration since they are closely related. Figure 2 illustrates the overall architecture of the proposed method, which is similar to existing methods (Li et al., 2018; Li et al., 2018). Figure 2 (a) is the network search module. This module searches for a network architecture by choosing a path from the supernet. The network structure is then fed to the evaluator module. The evaluator network \(eval()\) is the key to the differentiable co-exploration: it enables the gradient to flow into the supernet while considering the relation with the hardware accelerator. It is a composition of two subnetworks: a hardware generator \(gen()\) and an estimator \(est()\). The hardware generator takes the neural architecture parameters as inputs and uses them to output the optimal hardware implementation (\(\beta\) from Eq. 2). It is jointly trained during the co-exploration so that the generator does not depend on a certain cost function and can adapt to the constraint. The estimator network outputs the hardware-related metrics by taking the outputs of the generator and the network. It is pre-trained according to the network and the accelerator search space. For pre-training the estimator, traditional (non-differentiable) cost estimation frameworks such as MAESTRO (Li et al., 2018), Timeloop (Tlemelo et al., 2019), and Accelergy (Li et al., 2019) are used as ground truth. After pre-training, the estimator is frozen during the exploration and is only used to infer the hardware cost given a network architecture.

Figure 1. A motivational experiment. In each plot, we swept the value of \(\lambda_{Cost}\) from 0.001 to 0.010. It is clear that the trajectory is not strictly linear in \(\lambda_{Cost}\) and shows high variations.

With these, we convert Eq. 2 as below: \[\operatorname*{arg\,min}_{\alpha}(\mathcal{Loss}_{NAS}(w^{*},net(\alpha))+\lambda_{Cost}Cost_{HW}(est(\alpha,gen(v^{*},\alpha)))),\] \[\text{s.t.}\;\;w^{*}=\operatorname*{arg\,min}_{w}(\mathcal{Loss}_{NAS}(w,net(\alpha))),\] \[v^{*}=\operatorname*{arg\,min}_{v}(Cost_{HW}(est(\alpha,gen(v,\alpha)))), \tag{3}\] where \(v\) denotes the weights of the hardware generator.

### Enabling Hard-Constraints with Gradient Manipulation

In addition to the differentiable co-exploration methodology, we suggest the novel idea of gradient manipulation as an effective solution to the hard constraint problem. Direct manipulation of gradients is a strategy often used in achieving multiple goals, such as in continual learning (Srivastava et al., 2015) or differential equations (Kang et al., 2016). In this paper, we present a solution to apply gradient manipulation to the co-exploration problem in the interest of satisfying hard constraints. The diagrams in Figure 2 (b) and (c) show a high-level abstraction of our gradient manipulation method. The main idea is to artificially generate a force that can push the gradient in a direction that _agrees_ with the constraint.
The conditions under which the method is applied to compute the new gradient \(g\) are defined as below: \[g=\begin{cases}g_{\mathcal{Loss}}&,\text{if }t\leq T\\ &\text{or }t>T\wedge g_{\mathcal{Loss}}\cdot g_{Const}\geq 0,\\ m_{\alpha}+g_{\mathcal{Loss}}&,\text{otherwise}\end{cases} \tag{4}\] \[g_{Const}=\frac{\partial\max(t-T,0)}{\partial\alpha}. \tag{5}\] In the above equation, \(g_{\mathcal{Loss}}\) is the original gradient from the global loss function defined as \[\mathcal{Loss}=\mathcal{Loss}_{NAS}+\lambda_{Cost}\cdot Cost_{HW}, \tag{6}\] as in Eq. 3, and \(g_{Const}\) is the gradient of the constraint loss that we define as \(Const=\max(t-T,0)\). Note that \(t\) is a function of \(\alpha\), and thus can be backpropagated to find the gradient with respect to \(\alpha\). In the ideal case where \(t\leq T\), the constraint is already met, so we do nothing. In the unfortunate case when the constraint is still not met, we calculate the dot product of the two gradients to determine the agreement in their directions. If \(g_{\mathcal{Loss}}\cdot g_{Const}\geq 0\) (i.e., the angle between the two gradients is less than 90°), the gradient descent update will contribute towards satisfying the constraint. Thus, it is interpreted as an agreement in direction, and the same \(g_{\mathcal{Loss}}\) is used unmodified. Figure 2 (b) depicts this scenario. However, if they disagree as illustrated in Figure 2 (c) (i.e., \(g_{\mathcal{Loss}}\cdot g_{Const}<0\)), we force the gradient to shift its direction by \(m_{\alpha}\), which is obtained from \((m_{\alpha}+g_{\mathcal{Loss}})\cdot g_{Const}\geq 0\) to guarantee a decrease in the target cost after gradient descent. It can be reformulated as \(m_{\alpha}\cdot g_{Const}+g_{\mathcal{Loss}}\cdot g_{Const}=\delta\), where \(\delta\geq 0\) is a small value for ensuring gradual movement towards satisfying the constraint. For updating \(\alpha\) and \(w\), we solve for the optimal \(m_{\alpha}\) with respect to \(\alpha\), the parameters of the network architecture. To minimize the effect of \(m_{\alpha}\) on \(g_{\mathcal{Loss}}\), we use a pseudoinverse-based solution that is known to minimize \(||m_{\alpha}||_{2}^{2}\), as below: \[m_{\alpha}^{*}=\frac{-(g_{\mathcal{Loss}}\cdot g_{Const})+\delta}{||g_{Const}||_{2}^{2}}g_{Const}. \tag{7}\] In order to control the magnitude of the pull, we use a small multiplying factor \(p>0\) on \(\delta\). The policy for updating \(\delta\) using \(p\) is as follows: \(\delta\) starts from some initial value \(\delta_{0}\). If the target metric fails to meet the constraint, \(\delta\) is multiplied by \(1+p\) to strengthen the pull (\(\delta^{\prime}=(1+p)\delta\)). In the other case, when the constraint is satisfied, \(\delta\) is reset to its initial value (\(\delta^{\prime}=\delta_{0}\)). Note that we also train \(v\), the weights of the hardware generator, using gradient descent. Thus, we compute \(m_{v}^{*}\) in the same manner, but use \(g_{Cost_{HW}}\) in place of \(g_{\mathcal{Loss}}\) for updating the generator.
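The single-constraint update of Eqs. 4-7 and the \(\delta\) policy can be sketched as follows; the function names are ours, and the caller is assumed to have flattened the gradients with respect to \(\alpha\) into vectors.

```python
import torch

def manipulated_gradient(g_loss, g_const, delta):
    """Eqs. 4-7: shift g_loss by m_alpha when it disagrees with the constraint.

    g_loss:  gradient of the global loss (Eq. 6) w.r.t. alpha, flattened
    g_const: gradient of Const = max(t - T, 0) w.r.t. alpha, flattened
             (the zero vector whenever t <= T)
    """
    dot = torch.dot(g_loss, g_const)
    if dot >= 0:  # constraint already met, or the directions agree
        return g_loss
    # Pseudoinverse-based solution of m_alpha . g_const + dot = delta,
    # which minimizes ||m_alpha||_2^2 (Eq. 7).
    m_alpha = ((-dot + delta) / g_const.pow(2).sum()) * g_const
    return g_loss + m_alpha

def next_delta(delta, constraint_met, delta0, p):
    """Reset delta when the constraint holds; otherwise strengthen the pull."""
    return delta0 if constraint_met else (1.0 + p) * delta
```

The same routine, with \(g_{Cost_{HW}}\) in place of \(g_{\mathcal{Loss}}\), yields \(m_{v}^{*}\) for the generator weights.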
Although a single constraint is already a challenging target, our method can be further generalized to accommodate multiple constraints, in which case the gradient is modified only in the direction of the individual constraints that do not comply. We provide a more generalized formulation: \[g=\begin{cases}g_{\mathcal{Loss}}&,\text{if }\bigwedge_{i=1}^{n}(t_{i}\leq T_{i})\\ &\text{or }\bigvee_{i=1}^{n}(t_{i}>T_{i})\wedge g_{\mathcal{Loss}}\cdot g_{Const}\geq 0,\\ m_{\alpha}+g_{\mathcal{Loss}}&,\text{otherwise}\end{cases} \tag{8}\] \[g_{Const}=\frac{\partial\sum_{i=1}^{n}\max(t_{i}-T_{i},0)}{\partial\alpha}. \tag{9}\]

### Implementation Details

**Hardware cost function**. In this work, we choose the inference latency, the energy, and the chip area as widely used hardware metrics. Considering all of them, a commonly used cost function multiplies them together (i.e., EDP, EDAP), as in (Gang et al., 2016; Wang et al., 2016). However, we found that the energy is usually easier to optimize for, and using such a cost function unfairly favors energy-oriented designs. Therefore, we use a balanced weighted sum for the cost function as below: \[Cost_{HW}=C_{E}Energy+C_{L}Latency+C_{A}Area. \tag{10}\]

**Estimator and Generator Network**. Following (Gang et al., 2016), we model both the estimator and the generator with five-layer Multi-Layer Perceptrons (MLPs) with residual connections. To train the estimator, we first build a dataset by randomly sampling 10.8M network-accelerator pairs (2.95e\(-9\) % of the total search space) from our search space, which are evaluated on hardware metrics using Timeloop (Timeteloop, 2016) and Accelergy (Miller et al., 2017). Using this dataset, the estimator is trained for 200 epochs with a batch size of 256. The weight update is done using the Adam optimizer with a learning rate of 1e-4. The accuracy of the estimator was over 99% for all metrics, making it powerful enough as an engine for co-exploration. The generator is randomly initialized and jointly trained with the NAS supernet. As the manipulated gradient from the hard constraint is back-propagated, the generator learns to create accelerators that comply with the constraint for the given neural network architecture.

**Search Space.** We use ProxylessNAS (He et al., 2017) as a NAS backbone with path sampling to train \(\alpha\). It consists of multiple settings of the MBConv operation with kernel sizes {3, 5, 7} and expand ratios {3, 6}. The total number of layers is 18 and 21 for the CIFAR-10 (He et al., 2018) and ImageNet (He et al., 2018) datasets, respectively. However, our method is orthogonal to the NAS implementation and has the flexibility to choose from any differentiable NAS algorithm, such as DARTS (Krizhevsky et al., 2014) or OFA (He et al., 2017). We use Eyeriss (Eyeriss, 2015) as the accelerator's backbone architecture. It is composed of a two-dimensional Processing Element (PE) array where each PE has a Multiply-Accumulate (MAC) unit attached to a register file. Therefore, the hardware accelerator design space comprises PE array sizes from 12\(\times\)8 to 20\(\times\)24 and register file sizes per PE from 16B to 256B. In addition, the search space includes the dataflows Weight-Stationary (WS), similar to (He et al., 2017), Output-Stationary (OS), similar to (He et al., 2017), and Row-Stationary (RS), similar to (Eyeriss, 2015).

## 5. Experiments

### Experimental Environment

We have conducted experiments on HDX using the CIFAR-10 (He et al., 2018) and ImageNet (He et al., 2018) datasets.
For all the hardware metrics (latency, energy, and chip area) reported, we have used the direct evaluation of the designed hardware from Timeloop.

Figure 3 (left) and (mid) show the relation between error and latency. The colored horizontal bars represent the two latency targets we applied. It can be easily seen that all solutions found by HDX satisfy the given hard constraints regardless of the value of \(\lambda_{Cost}\). Furthermore, all solutions have a latency right below the constraint, showing that the solutions did not over-optimize for the constrained metric (latency). DANCE (DANCE, 2017) and Auto-NBA (Beng et al., 2017) were able to exploit the trade-off between hardware metrics and accuracy, but have no control over meeting the constraint. Even with soft-constraint terms, they mostly failed to obtain in-constraint solutions. Auto-NBA at a glance seems to be slightly better at meeting the constraints, but this is because its baseline method favors hardware-efficient solutions over high-accuracy ones, not because of its ability to meet the constraint, as exemplified by the fact that there is no solution with high accuracy or latency under \(16.6\,\mathrm{ms}\).

### Solution Quality Found by HDX

In this subsection, we demonstrate that HDX can 1) handle constraints from all three metrics (latency, energy, and chip area), 2) handle multiple constraints, and 3) obtain solutions of good overall quality. Figure 3 (right) plots \(Cost_{HW}\) and error together, which allows evaluating the quality of the solutions in terms of Pareto-optimality. Because Figure 3 (left) and (mid) overlook the other metrics, comparing \(Cost_{HW}\) as well is required to be fair. From the plot, it is clear that the quality of solutions from HDX is better than that of the NAS\(\rightarrow\)HW method, and shows no degradation from the existing co-exploration methods. In fact, the tightly constrained (\(16.6\,\mathrm{ms}\)) search even finds better solutions than those of the existing methods in terms of Pareto-optimality. To further study the quality of the solutions found by HDX, we have conducted another set of experiments. We selected a few solutions found by the DANCE method as 'Anchor' solutions and listed them in Table 2. From those, we chose either one or all three of the hardware metrics to be fixed as the hard constraint, and performed co-explorations using HDX. Because it is guaranteed that such a solution exists, a good method should be able to find a solution meeting the constraint, of at least a similar quality. As in Section 5.3, all of the 8 cases we have examined succeeded in finding a valid solution. Furthermore, all the solutions show global loss values similar to those of the anchor solutions, as shown in the rightmost column.

### Results from ImageNet Dataset

Table 3 shows the co-exploration results on the ImageNet dataset (Dosov et al., 2017), under a \(125\) ms constraint. As displayed in the table, HDX always succeeded in finding a solution within the constraint, which the others often failed to satisfy. Furthermore, the Top-1 error and the global loss show that the quality of the solution found by HDX is not compromised at all, compared to DANCE or its variant.

### Sensitivity Study on Pulling Magnitude

In HDX, the only hyperparameter is \(p\), which controls the pulling magnitude. Figure 4 illustrates how the global loss and latency change over latency-constrained (\(33.3\) ms) explorations, with varying \(p\) of \(1\)e-\(2\), \(7\)e-\(3\), and \(4\)e-\(3\).
Regardless of the value of \(p\), the curve for the constrained value shows a similar trend. At the beginning, the global loss becomes mainly optimized, while the latency stays steady. During this phase the pulling magnitude \(\delta\) (See Eq. 7) is still growing, and is not strong enough to make meaningful changes. At certain point, \(\delta\) becomes strong enough to pull the solution towards lowering latency. When the latency satisfies the constraint, global loss starts to decrease while maintaining the latency. There is no significant discrepancy between the final solution in the global loss and the latency, which shows that HDX is insensitive to the hyperparameter \(p\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Index & Constrained & Lat. (ms) & E (mJ) & Area (mm\({}^{2}\)) & Error (\%) & \(Cost_{HW}\) & Loss \\ \hline \multirow{4}{*}{A} & Anchor & 69.23 & 37.00 & 2.53 & \(4.10\pm 0.16\) & 21.84 & 0.632 \\ & Latency & **43.99** & 21.79 & 2.10 & \(4.20\pm 0.07\) & 13.87 & 0.624 \\ & Energy & 51.98 & **29.18** & 2.53 & \(4.38\pm 0.17\) & 17.44 & 0.630 \\ & Chip Area & 64.00 & 34.82 & 2.53 & \(4.05\pm 0.06\) & 20.56 & 0.629 \\ & All & **63.72** & **12.09** & 1.86 & \(4.12\pm 0.18\) & 13.29 & 0.623 \\ \hline \multirow{4}{*}{B} & Anchor & 49.65 & 27.53 & 2.53 & \(4.22\pm 0.06\) & 16.67 & 0.638 \\ & Latency & **48.02** & 27.33 & 2.53 & \(4.27\pm 0.09\) & 16.41 & 0.644 \\ \cline{1-1} & Energy & 95.02 & **24.45** & 1.89 & \(4.05\pm 0.10\) & 20.76 & 0.648 \\ \cline{1-1} & Chip Area & 54.74 & 29.81 & 2.53 & \(4.11\pm 0.13\) & 17.96 & 0.645 \\ \cline{1-1} & All & **41.32** & **8.59** & 1.86 & \(4.35\pm 0.05\) & 9.50 & 0.629 \\ \hline \hline \end{tabular} * "Bold colored numbers indicate that they are under constraint of the same colored non-bold numbers. \end{table} Table 2. Results Showing the Quality of Solutions Figure 4. Sensitivity to \(p\) on HDX. The red lines represents latency constraint at 33.3 ms. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & in-const\({}^{2}\) & Lat. (ms) & Error (\%) & CostHW & Loss \\ \hline \multirow{2}{*}{NAS\(\rightarrow\)HW} & ✗ & 242.92 & 24.84 & 46.29 & 1.99 \\ & ✗ & 135.39 & 28.83 & 24.26 & 2.17 \\ \hline \multirow{2}{*}{DANCE} & ✗ & 165.98 & 25.46 & 28.37 & 2.04 \\ & ✓ & 125.18 & 25.28 & 25.32 & 2.09 \\ \hline \multirow{2}{*}{DANCE;Soft Coast.} & ✗ & 188.69 & 25.69 & 33.14 & 1.99 \\ & ✓ & 105.65 & 26.37 & 25.58 & 2.08 \\ \hline \multirow{2}{*}{**HDX (Proposed)**} & ✓ & 92.06 & 25.01 & 24.48 & 1.98 \\ \cline{1-1} & ✓ & 112.11 & 25.20 & 22.63 & 2.00 \\ \hline \hline \end{tabular} \end{table} Table 3. Experimental Results for ImageNet Figure 3. Co-exploration results. (left) and (mid) represent the latency and (right) represent the hardware cost. Colored marks are methods with constraints of the same color. ### Analysis on the Searched Solutions Fig. 5 visualizes the network and accelerator searched for 60 fps (a) and 30 fps (b) constraints. For the found design pair (a), the design contains relatively smaller kernels, more layers, and a powerful accelerator. To meet a tight constraint while maintaining accuracy, the network has small kernels, mainly of 3\(\times\)3. Using smaller kernels quadratically reduces the number of multiplications. Therefore, decreasing the kernel size and increasing number of layers is a good choice for reducing inference latency. Looking at the accelerator design, it has relatively large PE array (16\(\times\)16) to achieve low latency. 
It adopts a weight stationary (WS) dataflow, which is known to have low latency. In addition, there are some kernels with a high channel expansion ratio in the network. WS exploits channel parallelism for fast execution, and thus works to the advantage of the found network. On the other hand, in the design for 30 fps (b), the design settles at a solution that optimizes the energy consumption while satisfying the constraint. The design uses larger kernels in the network and a row stationary (RS) dataflow in the accelerator. RS is known to have good energy efficiency (Cheng et al., 2019), and exploits parallelism from the spatial dimensions of the kernel and the activation. Thus, having larger kernels is advantageous under the RS dataflow. To reduce the energy consumption, the design has fewer PEs (12\(\times\)8), larger RFs to save off-chip access energy, and fewer layers in the network. ## 6. Conclusion In this paper, we proposed HDX, a hard-constrained differentiable co-exploration method. By conditionally applying gradient manipulation that moves the solution towards meeting the constraints, hard constraints can be reliably satisfied with high-quality solutions. We believe this proposal will ease the development of DNN-based systems by a significant amount. ###### Acknowledgements. This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C1008131, 2022R1C1C1011307), and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)).
2306.03698
Fine-grained Expressivity of Graph Neural Networks
Numerous recent works have analyzed the expressive power of message-passing graph neural networks (MPNNs), primarily utilizing combinatorial techniques such as the $1$-dimensional Weisfeiler-Leman test ($1$-WL) for the graph isomorphism problem. However, the graph isomorphism objective is inherently binary, not giving insights into the degree of similarity between two given graphs. This work resolves this issue by considering continuous extensions of both $1$-WL and MPNNs to graphons. Concretely, we show that the continuous variant of $1$-WL delivers an accurate topological characterization of the expressive power of MPNNs on graphons, revealing which graphs these networks can distinguish and the level of difficulty in separating them. We identify the finest topology where MPNNs separate points and prove a universal approximation theorem. Consequently, we provide a theoretical framework for graph and graphon similarity combining various topological variants of classical characterizations of the $1$-WL. In particular, we characterize the expressive power of MPNNs in terms of the tree distance, which is a graph distance based on the concept of fractional isomorphisms, and substructure counts via tree homomorphisms, showing that these concepts have the same expressive power as the $1$-WL and MPNNs on graphons. Empirically, we validate our theoretical findings by showing that randomly initialized MPNNs, without training, exhibit competitive performance compared to their trained counterparts. Moreover, we evaluate different MPNN architectures based on their ability to preserve graph distances, highlighting the significance of our continuous $1$-WL test in understanding MPNNs' expressivity.
Jan Böker, Ron Levie, Ningyuan Huang, Soledad Villar, Christopher Morris
2023-06-06T14:12:23Z
http://arxiv.org/abs/2306.03698v2
# Fine-grained Expressivity of Graph Neural Networks ###### Abstract Numerous recent works have analyzed the expressive power of message-passing graph neural networks (MPNNs), primarily utilizing combinatorial techniques such as the 1-dimensional Weisfeiler-Leman test (1-WL) for the graph isomorphism problem. However, the graph isomorphism objective is inherently binary, not giving insights into the degree of similarity between two given graphs. This work resolves this issue by considering continuous extensions of both 1-WL and MPNNs to graphons. Concretely, we show that the continuous variant of 1-WL delivers an accurate topological characterization of the expressive power of MPNNs on graphons, revealing which graphs these networks can distinguish and the level of difficulty in separating them. We identify the finest topology where MPNNs separate points and prove a universal approximation theorem. Consequently, we provide a theoretical framework for graph and graphon similarity combining various topological variants of classical characterizations of the 1-WL. In particular, we characterize the expressive power of MPNNs in terms of the tree distance, which is a graph distance based on the concept of fractional isomorphisms, and substructure counts via tree homomorphisms, showing that these concepts have the same expressive power as the 1-WL and MPNNs on graphons. Empirically, we validate our theoretical findings by showing that randomly initialized MPNNs, without training, exhibit competitive performance compared to their trained counterparts. Moreover, we evaluate different MPNN architectures based on their ability to preserve graph distances, highlighting the significance of our continuous 1-WL test in understanding MPNNs' expressivity. ## 1 Introduction Graph-structured data is widespread across several application domains, including chemo- and bioinformatics [11, 101], image analysis [107], and social-network analysis [38], explaining the recent growth in developing and analyzing machine learning methods tailored to graphs. In recent years, _message-passing graph neural networks_ (MPNNs) [25, 50, 92] emerged as the dominant paradigm, and alongside the growing prominence of MPNNs, numerous works [89, 115] analyzed MPNNs' expressivity. The analysis, typically based on combinatorial techniques such as the _\(1\)-dimensional Weisfeiler-Leman test_ (\(1\)-WL) for the graph isomorphism problem [113, 114], provides explanations of MPNNs' limitations (see [92] for a thorough survey). However, since the graph isomorphism problem only concerns whether the graphs are exactly the same, it only gives insights into MPNNs' ability to distinguish graphs. Hence, such approaches cannot quantify the graphs' degree of similarity. Nonetheless, understanding the similarities induced by MPNNs is crucial for precisely quantifying their generalization abilities [82, 94], stability [44], or robustness properties [60]. Present work. To address these shortcomings, we show how to integrate MPNNs into the theory of _iterated degree measures_ first developed by Grebik and Rocha [52], which generalizes the \(1\)-WL and its characterizations to graphons [78]. Integrating MPNNs into this theory allows us to identify the finest topology in which MPNNs separate points, enabling us to prove a universal approximation theorem for graphons. Inspired by the Weisfeiler-Leman distance [26], we show that metrics on measures also integrate beautifully into the theory of iterated degree measures.
Concretely, we define metrics \(\delta_{\mathsf{P}}\) via the _Prokhorov metric_[99] and \(\delta_{\mathsf{W}}\) via an _unbalanced Wasserstein metric_[97] that metrize the compact topology of iterated degree measures. _By leveraging this theory, we show that two graphons are close in these metrics if, and only if, the output of all possible MPNNs, up to a specific Lipschitz constant and number of layers, is close as well._ This refines the result of Chen et al. [26], which shows only one direction of this equivalence, i.e., graphs similar in their metric produce similar MPNN outputs. We consider graphons without node feature information to focus purely on MPNNs' ability to distinguish their structure. Our main result offers a topological generalization of classical characterizations of the \(1\)-WL, showing that the above metrics represent the optimal approach for defining a metric variant of the \(1\)-WL. Informally, the main result states the equivalence of our metrics \(\delta_{\mathsf{P}}\) and \(\delta_{\mathsf{W}}\) on iterated degree measures, the tree distance \(\delta_{\Box}^{\mathcal{T}}\) of Boker [18], the Euclidean distance of MPNNs' output, and tree homomorphism densities. These metrics arise from the topological variants of the \(1\)-WL test, fractional isomorphisms, MPNNs, and tree homomorphism counts. **Theorem 1** (informal).: The following are equivalent for all graphons \(U\) and \(W\): 1. \(U\) and \(W\) are close in \(\delta_{\mathsf{P}}\) (or alternatively \(\delta_{\mathsf{W}}\)). 2. \(U\) and \(W\) are close in \(\delta_{\Box}^{\mathcal{T}}\). 3. MPNN outputs on \(U\) and \(W\) are close for all MPNNs with Lipschitz constant \(C\) and \(L\) layers. 4. Homomorphism densities in \(U\) and \(W\) are close for all trees up to order \(k\). Up to now, except for the connection between the tree distance and tree homomorphism densities by Boker [18], these equivalences were only known to hold on a discrete level where graphs are either exactly isomorphic or not. The "closeness" statements in the above theorem are epsilon-delta statements, i.e., for every \(\varepsilon>0\), there is a \(\delta>0\) such that, if graphons are \(\delta\)-close in one distance measure, they are \(\varepsilon\)-close in the other distance measures, where the constants are independent of the actual graphons. In particular, for graphs, these constants are independent of their number of vertices. Theorem 1 is formally stated and proved in Appendix C.4. A key point in the proof is to consider compact operators (graphons) as limits of graphs. Empirically, we verify our findings by demonstrating that untrained MPNNs yield competitive predictive performance on established graph-level prediction tasks. Further, we evaluate the usefulness of our derived metrics for studying different MPNN architectures. Our theoretical and empirical results also provide an efficient lower bound of the graph distances in Boker [18], Chen et al. [26] by using the Euclidean distance of MPNN outputs. In summary, we quantify which distance MPNNs induce, leading to a more fine-grained understanding of their expressivity and separation capabilities. Our results provide a deeper understanding of MPNNs' capacity to capture graph structure, precisely determining when they can and when they cannot assign similar and dissimilar vectorial representations to graphs.
_Our work establishes the first rigorous connection between the similarity of graphs and their learned vectorial representations, paving the way for a more detailed understanding of MPNNs' expressivity and their connection to graph structure._ ### Related work and motivation In the following, we discuss relevant related work and provide additional background and motivation. MPNNs. Following Gilmer et al. [50], Scarselli et al. [105], MPNNs learn a vectorial representation, i.e., a \(d\)-dimensional real-valued vector, representing each vertex in a graph by iteratively aggregating information from neighboring vertices. Subsequently, MPNNs compute a single vectorial representation of a given graph by aggregating these vectorial vertex representations. Notable instances of this architecture include, e.g., Duvenaud et al. [36], Hamilton et al. [61], and Velickovic et al. [111], which can be subsumed under the message-passing framework introduced in Gilmer et al. [50]. In parallel, approaches based on spectral information were introduced in, e.g., Bruna et al. [23], Defferrard et al. [33], Gama et al. [43], Kipf and Welling [73], Levie et al. [75], and Monti et al. [87]--all of which descend from early work in Baskin et al. [14], Goller and Kuchler [51], Kireev [74], Merkwirth and Lengauer [84], Micheli [85], Micheli and Sestito [86], Scarselli et al. [105], and Sperduti and Starita [108]. Expressivity and limitations of MPNNs. The _expressivity_ of an MPNN is the architecture's ability to express or approximate different functions over a domain, e.g., graphs. High expressivity means the neural network can represent many functions over this domain. In the literature, the expressivity of MPNNs is modeled mathematically based on two main approaches: algorithmic alignment with graph isomorphism tests [88] and universal approximation theorems [7, 48]. Works following the first approach study if an MPNN, by choosing appropriate weights, can distinguish the same pairs of non-isomorphic graphs as the \(1\)-\(\mathsf{WL}\) or its more powerful generalization the \(k\)-\(\mathsf{WL}\). Here, an MPNN distinguishes two non-isomorphic graphs if it can compute different vectorial representations for the two graphs. Specifically, Morris et al. [89] and Xu et al. [115] showed that the \(1\)-\(\mathsf{WL}\) limits the expressive power of any possible MPNN architecture in distinguishing non-isomorphic graphs. In turn, these results have been generalized to the \(k\)-\(\mathsf{WL}\), see, e.g., Azizian and Lelarge [7], Geerts [47], Maron et al. [81], Morris et al. [89, 91, 93]. Works following the second approach study which functions over the domain of graphs can be approximated arbitrarily closely by an MPNN [7, 28, 48, 79]; see also the next paragraph. Further, see Appendix B for an extended discussion of related works about MPNNs' expressivity. Limitations of current universal approximation theorems for MPNNs. _Universal approximation theorems_ assume that the domain of the network is a compact metric space and show that a neural network can approximate any continuous function over this space. Current approaches studying MPNNs' universal approximation capabilities employ the (graph) edit distance to define the metric on the space of graphs, e.g., see [7]. However, the edit distance is not a natural notion of similarity for practical machine learning on graphs.
That is, any two graphs on a different number of vertices are far apart, not fully reflecting the similarity of real-world graphs. More generally, the same holds for any pair of non-isomorphic graphs. Hence, any rewiring of a graph leads to a far-away graph. However, we would like to interpret the rewiring of a small number of edges in a large graph as a small perturbation close to the original graph. Additionally, since the edit metric is not compact, the Stone-Weierstrass theorem, the primary tool in universal approximation analysis, cannot be applied directly to the whole space of graphs. For example, [26] uses other non-compact metrics to circumvent this, artificially choosing a compact subset of graphs from the whole space by uniformly limiting the size of the graphs and the edge weights. Alternatively, Chen et al. [28] resorted to the graph signal viewpoint, allowing real-valued edge features, and showed that the algorithmic alignment of GNNs with graph isomorphism algorithms can be utilized to prove universal approximation theorems for MPNNs. In contrast, here we suggest using graph similarity measures from graphon analysis, for which simple graphs of arbitrarily different sizes can be close to each other and by which the space of all graphs is dense in the compact metric space of graphons, allowing us to use the Stone-Weierstrass theorem directly; see also Appendix B.2 for an extended discussion on graph metrics beyond the edit distance. Figure 1: Illustration of the procedure to compute the distance \(\delta_{\mathsf{P}}\) between graphs \(G\) and \(H\). Columns A and C show the colors obtained by \(1\)-\(\mathsf{WL}\) iterations on graphs \(G\) and \(H\), respectively. Columns B and D show the iterated degree measures (IDMs) \(\mathsf{i}_{G,h}\) and \(\mathsf{i}_{H,h}\) for iterations \(h=1,2\) (see Equation (1)), and the output distributions of iterated degree measures (DIDMs) \(\nu_{G}\) and \(\nu_{H}\) (see Eq. (2)). Column E depicts the recursive construction to compute the distance \(\delta_{\mathsf{P}}\) between the IDMs from columns B and D (outlined in Section 3, detailed in Appendix C.2.2). Graphon theory. The book of Lovasz [78] provides a thorough treatment of _graphons_, which emerged as limit objects for sequences of graphs in the theory of dense graph limits developed by Borgs et al. [20, 21], Lovasz and Szegedy [77]. These limit objects allow the completion of the space of all graphs to a compact metric space. When endowed with the _cut distance_, first introduced by Frieze and Kannan [41], the space of graphons is compact [78, Theorem 9.23]. We can interpret graphons in two ways. First, as a weighted graph on the continuous vertex set \([0,1]\). Secondly, we can think of every point from \([0,1]\) as an infinite set of vertices and two points of \([0,1]\) as being connected by a random bipartite graph with edge density given by the graphon. This second point of view naturally leads to using graphons as generative models of graphs and the theory of graph limits. Grebik and Rocha [52] generalized the \(1\)-WL test and various of its characterizations to graphons, while Boker [19] did this for the \(k\)-WL test. Graphon theory in graph machine learning. Keriven et al. [64], Maskey et al. [83], Ruiz et al. [103] use graphons to analyze graph signal processing and MPNNs.
These papers assume a single fixed graphon generating the data, i.e., any graph from the dataset is randomly sampled from this graphon, and show that spectral MPNNs on graphs converge to spectral MPNNs on graphons as the size of the sampled graphs increases. Maskey et al. [82] developed a generalization analysis of MPNNs, assuming a pre-defined finite set of graphons. Further, Keriven et al. [65] compared the expressivity of two types of spectral MPNNs on spaces of graphons, assuming graphons are Lipschitz continuous kernels. To that end, the metric on the graphon space is taken as the \(L_{\infty}\) distance between graphons as functions. However, the paper does not directly characterize the separation power of the studied classes of MPNNs, and it requires the choice of an arbitrary compact subset to perform the analysis. In contrast, in the current paper, we use graphon analysis to endow the domain of definition of MPNNs, the set of all graphs, with a "well-behaved" structure describing a notion of natural graph similarity and allowing us to analyze properties of MPNNs regardless of any model of the data distribution. ## 2 Background Here, we provide the necessary background and define notation. Analysis. We denote the Lebesgue measure on \([0,1]\) by \(\lambda\) and consider measurability w.r.t. the Borel \(\sigma\)-algebra on \([0,1]\). Let \((X,\mathcal{B})\) be a standard Borel space, where we sometimes just write \(X\) when \(\mathcal{B}\) is understood and then use \(\mathcal{B}(X)\) to explicitly denote \(\mathcal{B}\). For a measure \(\mu\) on \(X\), we let \(\|\mu\|\coloneqq\mu(X)\) denote its _total mass_, and for a standard Borel space \((Y,\mathcal{C})\) and a measurable map \(f\colon X\to Y\), let the _push-forward \(f_{*}\mu\) of \(\mu\) via \(f\)_ be defined by \(f_{*}\mu(A)\coloneqq\mu(f^{-1}(A))\) for every \(A\in\mathcal{C}\). Let \(\mathscr{P}(X)\) and \(\mathscr{M}_{\leq 1}(X)\) denote the spaces of all probability measures on \(X\) and all measures of total mass at most one on \(X\), respectively. Let \(C_{b}(X)\) denote the set of all bounded continuous real-valued functions on \(X\). We endow \(\mathscr{P}(X)\) and \(\mathscr{M}_{\leq 1}(X)\) with the topology generated by the maps \(\mu\mapsto\int_{X}fd\mu\) for \(f\in C_{b}(X)\), the _weak topology_ (_weak* topology_ in functional analysis); see [63, Section 17.E] or [17, Chapter 8]. Then, \(\mathscr{P}(X)\) and \(\mathscr{M}_{\leq 1}(X)\) are again standard Borel spaces, and if \(K\) is a compact metric space, then \(\mathscr{P}(K)\) and \(\mathscr{M}_{\leq 1}(K)\) are compact metrizable; see [63, Theorem 17.22]. For a sequence \((\mu_{i})_{i}\) of measures and a measure \(\mu\), we have \(\mu_{i}\to\mu\) if and only if \(\int_{X}fd\mu_{i}\to\int_{X}fd\mu\) for every \(f\in C_{b}(X)\), and for measures \(\mu,\nu\), we have \(\mu=\nu\) if and only if \(\int_{X}fd\mu=\int_{X}fd\nu\) for every \(f\in C_{b}(X)\). In both statements, we may replace \(C_{b}(X)\) by a dense (w.r.t. the sup norm) subset. See Appendix A.3 for basics on topology. Graphs and graphons. A _graph_ \(G\) is a pair \((V(G),E(G))\) with _finite_ sets of _vertices_ or _nodes_ \(V(G)\) and _edges_ \(E(G)\subseteq\{\{u,v\}\subseteq V(G)\mid u\neq v\}\). If not otherwise stated, we set \(n\coloneqq|V(G)|\), and the graph is of _order_ \(n\). We also call the graph \(G\) an \(n\)-order graph. For ease of notation, we denote the edge \(\{u,v\}\) in \(E(G)\) by \(uv\) or \(vu\).
The _neighborhood_ of \(v\) in \(V(G)\) is denoted by \(N(v)\coloneqq\{u\in V(G)\mid vu\in E(G)\}\) and the _degree_ of a vertex \(v\) is \(|N(v)|\). Two graphs \(G\) and \(H\) are _isomorphic_ and we write \(G\simeq H\) if there exists a bijection \(\varphi\colon V(G)\to V(H)\) preserving the adjacency relation, i.e., \(uv\) is in \(E(G)\) if and only if \(\varphi(u)\varphi(v)\) is in \(E(H)\). Then \(\varphi\) is an _isomorphism_ between \(G\) and \(H\). A _kernel_ is a measurable function \(U\colon[0,1]^{2}\to\mathbb{R}\), and a symmetric measurable function \(W\colon[0,1]^{2}\to[0,1]\) is called a _graphon_. The set of all graphons is denoted by \(\mathcal{W}\). Graphons generalize graphs in the following way. Every graph \(G\) can be viewed as a graphon \(W_{G}\) by partitioning \([0,1]\) into \(n\) intervals \((I_{v})_{v\in V(G)}\), each of mass \(1/n\), and letting \(W_{G}(x,y)\) for \(x\in I_{u},y\in I_{v}\) be one or zero depending on whether \(uv\) is an edge in \(G\) or not. The _homomorphism density_ \(t(F,W)\) of a graph \(F\) in a graphon \(W\) is \(t(F,W)\coloneqq\int_{[0,1]^{V(F)}}\prod_{ij\in E(F)}W(x_{i},x_{j})\,d(\bar{x})\), where \(\bar{x}\) is the tuple of variables \(x_{v}\) for \(v\in V(F)\). Iterated measures. Here, we define _iterated degree measures (IDMs)_, which are essentially sequences of measures, by adapting the definition of Grebik and Rocha [52]. Let \(\mathbb{M}_{0}\coloneqq\{1\}\) and inductively define \(\mathbb{M}_{h+1}\coloneqq\mathscr{M}_{\leq 1}(\mathbb{M}_{h})\) for every \(h\geq 0\). Then, the spaces \(\mathbb{M}_{0},\mathbb{M}_{1},\dots\) are all compact metrizable. For \(0\leq h<\infty\), inductively define the _projection_ \(p_{h+1,h}\colon\mathbb{M}_{h+1}\to\mathbb{M}_{h}\) by letting \(p_{1,0}\) be the trivial map and, for \(h>0\), letting \(p_{h+1,h}(\alpha)\coloneqq(p_{h,h-1})_{*}\alpha\) for every \(\alpha\in\mathbb{M}_{h+1}=\mathscr{M}_{\leq 1}(\mathbb{M}_{h})\). This extends to \(p_{h,\ell}\colon\mathbb{M}_{h}\to\mathbb{M}_{\ell}\) for \(0\leq\ell\leq h<\infty\) by composition in the intuitive way. Let \[\mathbb{M}\coloneqq\mathbb{M}_{\infty}\coloneqq\{(\alpha_{h})_{h}\in\prod_{h\in\mathbb{N}}\mathbb{M}_{h}\mid p_{h+1,h}(\alpha_{h+1})=\alpha_{h}\text{ for every }h\in\mathbb{N}\}\] be the _inverse limit_ of \(\mathbb{M}_{0},\mathbb{M}_{1},\dots\); see the Kolmogorov Consistency Theorem [63, Theorem 17.20]. Then, \(\mathbb{M}\) is compact metrizable [52, Claim 6.2]. For \(0\leq h<\infty\), let \(p_{\infty,h}\colon\mathbb{M}\to\mathbb{M}_{h}\) denote the projection to the \(h\)-th component. We remark that we slightly simplified the definition of Grebik and Rocha [52] by not including previous IDMs from \(\mathbb{M}_{h^{\prime}}\) for \(h^{\prime}\leq h\) in \(\mathbb{M}_{h+1}\) and directly defining \(\mathbb{M}\) as the inverse limit, corresponding to the definition of the space \(\mathbb{P}\) in [52]. These changes yield equivalent definitions that simplify the exposition. The \(1\)-WL for graphons. See Appendix A.1 for the standard definition of \(1\)-WL on graphs. Grebik and Rocha [52] generalized \(1\)-WL to graphons by defining a map \(\mathrm{i}_{W}\colon[0,1]\to\mathbb{M}\), mapping every point of the graphon \(W\in\mathcal{W}\) to an iterated degree measure as follows.
First, inductively define the map \(\mathrm{i}_{W,h}\colon[0,1]\to\mathbb{M}_{h}\) by setting \(\mathrm{i}_{W,0}(x)\coloneqq 1\) for every \(x\in[0,1]\) and \[\mathrm{i}_{W,h+1}(x)\coloneqq A\mapsto\int_{\mathrm{i}_{W,h}^{-1}(A)}W(x,y)d\lambda(y), \tag{1}\] for all \(x\in[0,1]\), \(A\in\mathcal{B}(\mathbb{M}_{h})\), and \(h\geq 0\). Intuitively, \(\mathrm{i}_{W,h}(x)\) is the color assigned to point \(x\) after \(h\) iterations of \(1\)-WL. Observe that \(\mathrm{i}_{W,1}(x)\) encodes the degree of point \(x\), and \(\mathrm{i}_{W,h}(x)\) for \(h>1\) represents the iterated degree sequence information. Then, we define \(\mathrm{i}_{W}\coloneqq\mathrm{i}_{W,\infty}\colon[0,1]\to\mathbb{M}\) by \(\mathrm{i}_{W}(x)\coloneqq\mathrm{i}_{W,\infty}(x)\coloneqq\prod_{h\in\mathbb{N}}\mathrm{i}_{W,h}(x)\) and let \[\nu_{W}\coloneqq\nu_{W,\infty}\coloneqq(\mathrm{i}_{W})_{*}\lambda\in\mathscr{P}(\mathbb{M}) \tag{2}\] be the _distribution of iterated degree measures (DIDM) of \(W\)_. In other words, \(\nu_{W}(A)\) is the volume that the colors in \(A\) occupy in the graphon domain \([0,1]\). Then, the \(1\)-_WL test on graphons_ is the mapping that takes a graphon \(W\in\mathcal{W}\) and returns \(\nu_{W}\). In addition to \(\nu_{W}\), we also define \(\nu_{W,h}\coloneqq(\mathrm{i}_{W,h})_{*}\lambda\in\mathscr{P}(\mathbb{M}_{h})\) for \(0\leq h<\infty\), corresponding to running \(1\)-WL for \(h\) rounds. While every DIDM of a graphon is a measure from the compact space \(\mathscr{P}(\mathbb{M})\), not every measure in \(\mathscr{P}(\mathbb{M})\) is the DIDM of a graphon. Grebik and Rocha [52] address this by giving a definition of a DIDM that is independent of a specific graphon. For us, it suffices to simply remark that also the set \(\mathbb{D}_{h}\coloneqq\{\nu_{W,h}\mid W\text{ graphon}\}\subseteq\mathscr{P}(\mathbb{M}_{h})\) is compact as it is the image of the compact space of graphons [78, Theorem 9.23] under a continuous function [52]. For us, this means that \(\mathbb{D}_{h}\) and \(\mathscr{P}(\mathbb{M}_{h})\) can be used interchangeably in our arguments and we do not have to be overly careful with distinguishing them. For simplicity, we stick to \(\mathscr{P}(\mathbb{M}_{h})\) and refer to all elements of \(\mathscr{P}(\mathbb{M}_{h})\) as _DIDMs_. Message-passing graph neural networks. MPNNs learn a \(d\)-dimensional real-valued vector for each vertex in a graph by aggregating information from neighboring vertices; see Appendix A.2 for more details. Here, we consider MPNNs where both the update functions and the readout functions are Lipschitz continuous and use sum aggregation normalized by the order of the graph. Formally, we first let \(\mathbf{\varphi}=(\varphi_{i})_{i=0}^{L}\) denote a tuple of continuous functions \(\varphi_{0}\colon\mathbb{R}^{0}\to\mathbb{R}^{d_{0}}\) and \(\varphi_{t}\colon\mathbb{R}^{d_{t-1}}\to\mathbb{R}^{d_{t}}\) for \(t\in[L]\), where we simply view \(\varphi_{0}\) as an element of \(\mathbb{R}^{d_{0}}\). Furthermore, let \(\psi\) denote a continuous function \(\psi\colon\mathbb{R}^{d_{L}}\to\mathbb{R}^{d}\). For a graph \(G\), an MPNN initializes a feature \(\mathbf{h}_{v}^{(0)}\coloneqq\varphi_{0}\in\mathbb{R}^{d_{0}}\).
Then, for \(t\in[L]\), we compute \(\mathbf{h}_{-}^{(t)}\colon V(G)\to\mathbb{R}^{d_{t}}\) and the single graph-level feature \(\mathbf{h}_{G}\in\mathbb{R}^{d}\) after \(L\) layers by \[\mathbf{h}_{v}^{(t)}\coloneqq\varphi_{t}\bigg{(}\frac{1}{|V(G)|}\sum_{u\in N(v)}\mathbf{h}_{u}^{(t-1)}\bigg{)}\quad\quad\text{and}\quad\quad\mathbf{h}_{G}\coloneqq\psi\bigg{(}\frac{1}{|V(G)|}\sum_{v\in V(G)}\mathbf{h}_{v}^{(L)}\bigg{)}.\] For a graphon \(W\in\mathcal{W}\), an MPNN initializes a feature \(\mathbf{h}_{x}^{(0)}\coloneqq\varphi_{0}\in\mathbb{R}^{d_{0}}\) for \(x\in[0,1]\). Then, for \(t\in[L]\), we compute \(\mathbf{h}_{-}^{(t)}\colon[0,1]\to\mathbb{R}^{d_{t}}\) and the single graphon-level feature \(\mathbf{h}_{W}\in\mathbb{R}^{d}\) after \(L\) layers by \[\mathbf{h}_{x}^{(t)}\coloneqq\varphi_{t}\bigg{(}\int_{[0,1]}W(x,y)\mathbf{h}_{y}^{(t-1)}\,d\lambda(y)\bigg{)}\quad\quad\text{and}\quad\quad\mathbf{h}_{W}\coloneqq\psi\bigg{(}\int_{[0,1]}\mathbf{h}_{x}^{(L)}\,d\lambda(x)\bigg{)}.\] This generalizes the previous definition, i.e., for a graph \(G\) and its (induced) graphon \(W_{G}\), we have \(\mathbf{h}_{G}=\mathbf{h}_{W_{G}}\) and \(\mathbf{h}_{v}^{(t)}=\mathbf{h}_{x}^{(t)}\) for all \(t\in[L]\), \(v\in V(G)\), and \(x\in I_{v}\); see Appendix C.1. We now extend the definition of MPNNs to IDMs. While the above definition of \(\mathbf{h}_{-}^{(t)}\) depends on a specific graphon \(W\), an IDM already carries the aggregated information of its neighborhood. Hence, the initial feature \(\mathbf{h}_{\alpha}^{(0)}\coloneqq\varphi_{0}\in\mathbb{R}^{d_{0}}\) for \(\alpha\in\mathbb{M}_{0}\) and \(\mathbf{h}_{-}^{(t)}\colon\mathbb{M}_{t}\to\mathbb{R}^{d_{t}}\) _are defined for all IDMs at once_. Then, for a DIDM \(\nu\in\mathscr{P}(\mathbb{M}_{L})\), we define the single DIDM-level feature \(\mathbf{h}_{\nu}\in\mathbb{R}^{d}\). Formally, we let \[\mathbf{h}_{\alpha}^{(t)}\coloneqq\varphi_{t}\bigg{(}\int_{\mathbb{M}_{t-1}}\mathbf{h}_{-}^{(t-1)}\,d\alpha\bigg{)}\quad\quad\quad\text{and}\quad\quad\quad\mathbf{h}_{\nu}\coloneqq\psi\bigg{(}\int_{\mathbb{M}_{L}}\mathbf{h}_{-}^{(L)}\,d\nu\bigg{)}.\] That is, messages are aggregated via the IDM itself. In addition to \(\mathbf{h}_{-}^{(t)}\colon\mathbb{M}_{t}\to\mathbb{R}^{d_{t}}\) and \(\mathbf{h}_{-}\colon\mathscr{P}(\mathbb{M}_{L})\to\mathbb{R}^{d}\), we define \(\mathbf{h}_{-}^{(t)}\colon\mathbb{M}\to\mathbb{R}^{d_{t}}\) and \(\mathbf{h}_{-}\colon\mathscr{P}(\mathbb{M})\to\mathbb{R}^{d}\) by setting \(\mathbf{h}_{\alpha}^{(t)}\coloneqq\mathbf{h}_{p_{\infty,t}(\alpha)}^{(t)}\) for every \(\alpha\in\mathbb{M}\) and \(\mathbf{h}_{\nu}\coloneqq\mathbf{h}_{(p_{\infty,L})_{*}\nu}\) for every \(\nu\in\mathscr{P}(\mathbb{M})\); it will always be clear from the context which of these functions we mean. These definitions for IDMs further generalize the previous definitions for graphons. For a graphon \(W\in\mathcal{W}\), we have \(\mathbf{h}_{W}=\mathbf{h}_{\nu_{W,L}}=\mathbf{h}_{\nu_{W}}\) and \(\mathbf{h}_{x}^{(t)}=\mathbf{h}_{\mathrm{i}_{W,t}(x)}^{(t)}=\mathbf{h}_{\mathrm{i}_{W}(x)}^{(t)}\) for almost every \(x\in[0,1]\); see Appendix C.1. We call a tuple \(\boldsymbol{\varphi}\) as defined above an _(L-layer) MPNN model_ if \(\varphi_{t}\) is Lipschitz continuous on \(\{\int_{\mathbb{M}_{t-1}}\mathbf{h}_{-}^{(t-1)}\,d\alpha\mid\alpha\in\mathbb{M}_{t}\}\) for every \(t\in[L]\), and we call \(\psi\) as defined above _Lipschitz_ if it is Lipschitz continuous on \(\{\int_{\mathbb{M}_{L}}\mathbf{h}_{-}^{(L)}\,d\nu\mid\nu\in\mathscr{P}(\mathbb{M}_{L})\}\).
We use \(\|-\|_{L}\) to denote the Lipschitz constants on these sets. In this paper, \(\boldsymbol{\varphi}\) and \(\psi\) always denote an MPNN model and a Lipschitz function, respectively. We use the term \(\infty\)-layer MPNN model to refer to an \(L\)-layer MPNN model for an arbitrary \(L\). ## 3 Metrics on iterated degree measures Chen et al. [26] recently introduced the _Weisfeiler-Leman distance_, a polynomial-time computable pseudometric on graphs combining the \(1\)-WL test with the well-known _Wasserstein metric_ from optimal transport [112], where their approach resembles that of iterated degree measures as introduced by Grebik and Rocha [52]. To use metrics from optimal transport, Chen et al. [26] resorted to mean aggregation instead of sum aggregation to obtain probability measures instead of finite measures with total mass at most one. Using mean aggregation, however, is different from the \(1\)-WL test, which relies on sum aggregation. That is, sum aggregation allows the algorithm to start with a constant coloring, which is impossible with mean aggregation, as mean aggregation would simply keep such a coloring constant. Chen et al. [26] circumvented this problem by encoding vertex degrees and the total number of vertices in the initial coloring. Here, we show that the _Prokhorov metric_ [99] and an unbalanced variant of the Wasserstein metric can be beautifully integrated into the theory of iterated degree measures, eliminating the need to work around the limits of mean aggregation. Both metrics metrize the weak topology, which is precisely the topology the space \(\mathbb{M}_{h}\) of IDMs is endowed with; see Section 2. In modern-day literature, the Prokhorov metric is usually only defined for probability measures [35, Section 11.3], yet the original definition by Prokhorov [99] was already for finite measures. That is, let \((S,d)\) be a complete separable metric space with Borel \(\sigma\)-algebra \(\mathcal{B}\). For a subset \(A\subseteq S\) and \(\varepsilon\geq 0\), let \(A^{\varepsilon}\coloneqq\{y\in S\mid d(x,y)<\varepsilon\text{ for some }x\in A\}\), and define the _Prokhorov metric_ \(\mathsf{P}\) on \(\mathscr{M}_{\leq 1}(S)\) by \[\mathsf{P}(\mu,\nu)\coloneqq\inf\{\varepsilon>0\mid\mu(A)\leq\nu(A^{\varepsilon})+\varepsilon\text{ and }\nu(A)\leq\mu(A^{\varepsilon})+\varepsilon\text{ for every }A\in\mathcal{B}\}.\] As the name suggests, \(\mathsf{P}\) is a metric on \(\mathscr{M}_{\leq 1}(S)\) [99, Section 1.4], and moreover, convergence in \(\mathsf{P}\) is equivalent to convergence in the weak topology [99, Theorem 1.11]. For the _unbalanced Wasserstein metric_, let \(\mu,\nu\in\mathscr{M}_{\leq 1}(S)\), where we assume \(\|\mu\|\geq\|\nu\|\) without loss of generality, and define \[\mathsf{W}(\mu,\nu)\coloneqq\|\mu\|-\|\nu\|+\inf_{\gamma\in\mathcal{M}(\mu,\nu)}\int_{S\times S}d(x,y)\,d\gamma(x,y),\] where \(\mathcal{M}(\mu,\nu)\) is the set of all measures \(\gamma\in\mathscr{M}_{\leq 1}(S\times S)\) such that \((p_{1})_{*}\gamma\leq\mu\) and \((p_{2})_{*}\gamma=\nu\). We prove that \(\mathsf{W}\) is a well-defined metric on \(\mathscr{M}_{\leq 1}(S)\) that coincides with the Wasserstein distance [35, Section 11.8] on probability measures. Furthermore, it satisfies \(\mathsf{W}(\mu,\nu)\leq 2\mathsf{P}(\mu,\nu)\leq 4\sqrt{\mathsf{W}(\mu,\nu)}\) for all \(\mu,\nu\in\mathscr{M}_{\leq 1}(S)\), which implies that it metrizes the weak topology; see Appendix C.2.
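For intuition, here is a minimal sketch of how \(\mathsf{W}\) can be evaluated between two finitely supported measures by solving the partial-transport linear program above with `scipy.optimize.linprog`; it is an illustration under the stated definition, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def unbalanced_wasserstein(mu, nu, dist):
    """W(mu, nu) = |mu| - |nu| + min-cost transport of all of nu's mass from mu.

    mu, nu: nonnegative weight vectors with total mass at most one;
    dist: (m, n) matrix of ground distances d(x_i, y_j)."""
    if mu.sum() < nu.sum():                # the definition assumes |mu| >= |nu|
        mu, nu, dist = nu, mu, dist.T
    m, n = len(mu), len(nu)
    A_ub = np.kron(np.eye(m), np.ones(n))  # row marginals: sum_j gamma_ij <= mu_i
    A_eq = np.kron(np.ones(m), np.eye(n))  # column marginals: sum_i gamma_ij = nu_j
    res = linprog(dist.reshape(-1), A_ub=A_ub, b_ub=mu,
                  A_eq=A_eq, b_eq=nu, bounds=(0, None))
    return (mu.sum() - nu.sum()) + res.fun

# Example: two point masses at distance 1 with unequal total mass.
print(unbalanced_wasserstein(np.array([0.9]), np.array([0.5]), np.array([[1.0]])))
# 0.4 mass-mismatch penalty + 0.5 transport cost = 0.9
```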
The metric \(\mathsf{P}\) is used to define the metric \(d_{\mathsf{P},h}\) on \(\mathbb{M}_{h}\) for \(0\leq h\leq\infty\) as follows. Let \(d_{\mathsf{P},0}\) be the trivial metric on the one-point space \(\mathbb{M}_{0}\) and, for \(h\geq 0\), we inductively let \(d_{\mathsf{P},h+1}\) be the Prokhorov metric on \((\mathbb{M}_{h+1},d_{\mathsf{P},h})\). Then, we define \(d_{\mathsf{P},\infty}\) on \(\mathbb{M}_{\infty}\) by setting \(d_{\mathsf{P}}(\alpha,\beta)\coloneqq d_{\mathsf{P},\infty}(\alpha,\beta)\coloneqq\sup_{h\in\mathbb{N}}\frac{1}{h}\cdot d_{\mathsf{P},h}(\alpha_{h},\beta_{h})\) for \(\alpha,\beta\in\mathbb{M}_{\infty}\). The factor of \(1/h\) is included in this definition on purpose to ensure that \(d_{\mathsf{P},\infty}\) metrizes the product topology and not the uniform topology. The metric \(d_{\mathsf{W},h}\) on \(\mathbb{M}_{h}\) is defined completely analogously via \(\mathsf{W}\) instead of \(\mathsf{P}\). The metrics \(d_{\mathsf{P},h}\) and \(d_{\mathsf{W},h}\) on \(\mathbb{M}_{h}\) allow us, for example, to compare the IDM of a point in a graphon to the IDM of a point in another graphon. To compare two graphons' distributions on iterated degree measures, we let \(\delta_{\mathsf{P},h}\) be the Prokhorov metric on \((\mathscr{P}(\mathbb{M}_{h}),d_{\mathsf{P},h})\) for \(0\leq h\leq\infty\) and again define \(\delta_{\mathsf{W},h}\) analogously via the distance \(\mathsf{W}\). We note that these metrics directly apply to graphons \(U,W\in\mathcal{W}\) by simply comparing their DIDMs \(\nu_{U,h}\) and \(\nu_{W,h}\). **Theorem 2**.: Let \(0\leq h\leq\infty\). The metrics \(d_{\mathsf{P},h}\) and \(d_{\mathsf{W},h}\) are well-defined and metrize the topology of \(\mathbb{M}_{h}\). The metrics \(\delta_{\mathsf{P},h}\) and \(\delta_{\mathsf{W},h}\) are well-defined and metrize the topology of \(\mathscr{P}(\mathbb{M}_{h})\). Moreover, these metrics are computable on graphs in time polynomial in the size of the input graphs and \(h\), up to an additive error of \(\varepsilon\) in the case of \(d_{\mathsf{W},\infty}\) and \(\delta_{\mathsf{W},\infty}\). MPNNs are Lipschitz in the metrics we defined, where the Lipschitz constant only depends on basic properties of the MPNN model. That is, if two graphons are close in our metrics, then MPNN outputs for _all_ MPNN models up to a specific Lipschitz constant are close. Formally, let \(\boldsymbol{\varphi}=(\varphi_{i})_{i=0}^{L}\) be an MPNN model with \(L\) layers, and for \(t\in\{0,\ldots,L\}\), let \(\boldsymbol{\varphi}_{t}\coloneqq(\varphi_{i})_{i=0}^{t}\). Then, we inductively define the _Lipschitz constant_ \(C_{\boldsymbol{\varphi}}\geq 0\) of \(\boldsymbol{\varphi}\) by \(C_{\boldsymbol{\varphi}_{0}}\coloneqq 0\) for \(t=0\) and \(C_{\boldsymbol{\varphi}_{t}}\coloneqq\|\varphi_{t}\|_{L}\cdot(\|\mathbf{h}_{-}^{(t-1)}\|_{\infty}+C_{\boldsymbol{\varphi}_{t-1}})\) for \(t>0\). This essentially depends on the product of the Lipschitz constants of the functions in \(\boldsymbol{\varphi}\), and the bounds for the MPNN output values, which are finite since a continuous function on a compact set attains its maximum. Including these bounds in the constant is necessary since we consider sum aggregation: a constant function mapping all inputs to some \(c\in\mathbb{R}\) has Lipschitz constant zero, but when integrated with measures of total mass zero and one, for example, the difference of the outputs is \(c\).
We define \(C_{(\boldsymbol{\varphi},\psi)}\) for Lipschitz \(\psi\) analogously by essentially viewing \((\boldsymbol{\varphi},\psi)\) as an MPNN model. **Lemma 3**.: Let \(\boldsymbol{\varphi}\) be an \(L\)-layer MPNN model for \(L\in\mathbb{N}\) and \(\psi\) be Lipschitz. Then, \[\|\mathbf{h}_{\alpha}^{(L)}-\mathbf{h}_{\beta}^{(L)}\|_{2}\leq C_{\boldsymbol{\varphi}}\cdot d_{\mathsf{W},L}(\alpha,\beta)\quad\quad\text{and}\quad\quad\|\mathbf{h}_{\mu}-\mathbf{h}_{\nu}\|_{2}\leq C_{(\boldsymbol{\varphi},\psi)}\cdot\delta_{\mathsf{W},L}(\mu,\nu)\] for all \(\alpha,\beta\in\mathbb{M}_{L}\) and all \(\mu,\nu\in\mathscr{P}(\mathbb{M}_{L})\), respectively. These inequalities also hold for \(d_{\mathsf{W},\infty}\) and \(\delta_{\mathsf{W},\infty}\) with an additional factor of \(L\) in the Lipschitz constant. ## 4 Universality of message-passing graph neural networks In this section, we prove a universal approximation theorem for MPNNs on IDMs and DIDMs, deriving our main result from it. For \(0\leq L\leq\infty\), let \(\mathcal{N}_{L}^{n}\subseteq C(\mathbb{M}_{L},\mathbb{R}^{n})\) denote the set of all functions \(\mathbf{h}_{-}^{(L)}\colon\mathbb{M}_{L}\to\mathbb{R}^{n}\) for an \(L\)-layer MPNN model \(\boldsymbol{\varphi}\) with \(d_{L}=n\). Similarly, let \[\mathcal{N}\!\mathcal{N}_{L}^{n}\coloneqq\{\mathbf{h}_{-}\mid\boldsymbol{\varphi}\ L\text{-layer MPNN model, }\psi\colon\mathbb{R}^{d_{L}}\to\mathbb{R}^{n}\text{ Lipschitz}\}\subseteq C(\mathscr{P}(\mathbb{M}_{L}),\mathbb{R}^{n})\] be the set of all functions computed by an MPNN after a global readout. Our universal approximation theorem, Theorem 4, shows that all continuous functions on IDMs and DIDMs, i.e., functions on graphons that are invariant w.r.t. (the colors of) the \(1\)-WL test, can be approximated by MPNNs. Hence, our result extends the universal approximation result of Chen et al. [26] for _measure Markov chains_ in two ways. First, measure Markov chains are restricted to finite spaces by definition, which is not the case for graphons and our universal approximation theorem. Secondly, the spaces \(\mathbb{M}_{L}\) and \(\mathscr{P}(\mathbb{M}_{L})\) are compact, which means we obtain a universal approximation theorem for the whole space of graphons, including all graphs, not restricted to an artificially chosen compact subset. **Theorem 4**.: Let \(0\leq L\leq\infty\). Then, \(\mathcal{N}_{L}^{1}\) is dense in \(C(\mathbb{M}_{L},\mathbb{R})\) and \(\mathcal{N}\!\mathcal{N}_{L}^{1}\) is dense in \(C(\mathscr{P}(\mathbb{M}_{L}),\mathbb{R})\). The proof of Theorem 4 is elegant and does not rely on encoding the \(1\)-WL test in an MPNN. That is, it follows by inductive applications of the Stone-Weierstrass theorem [35, Theorem 2.4.11] combined with the definition of IDMs. It is strikingly similar to the proof of Grebik and Rocha [52] for a similar result concerning tree homomorphism densities; see Appendix D. While the second statement of Theorem 4, i.e., the graphon-level approximation, is interesting in its own right, the crux of Theorem 4 lies in its first statement, namely, that \(\mathcal{N}_{L}^{1}\) is dense in \(C(\mathbb{M}_{L},\mathbb{R})\), immediately implying that the topology induced by MPNNs on \(\mathscr{P}(\mathbb{M}_{L})\) is the weak topology, i.e., the topology we endowed this space with in Section 2. **Corollary 5**.: Let \(0\leq L\leq\infty\) and \(n>0\). Let \(\nu\in\mathscr{P}(\mathbb{M}_{L})\) and \((\nu_{i})_{i}\) be a sequence with \(\nu_{i}\in\mathscr{P}(\mathbb{M}_{L})\).
Then, \(\nu_{i}\to\nu\) if and only if \(\mathbf{h}_{\nu_{i}}\to\mathbf{h}_{\nu}\) for all \(L\)-layer MPNN models \(\boldsymbol{\varphi}\) and Lipschitz \(\psi\colon\mathbb{R}^{d_{L}}\to\mathbb{R}^{n}\). By combining standard compactness arguments with Theorem 2 and Corollary 5, we can now prove that two graphons are close in our metrics if and only if the output of all possible MPNNs, up to a specific constant and number of layers, is close. Formally, the forward direction of this equivalence is just Lemma 3, while the backward direction reads as follows. **Theorem 6**.: Let \(n>0\) be fixed. For every \(\varepsilon>0\), there are \(L\in\mathbb{N},C>0\), and \(\delta>0\) such that, for all graphons \(U\) and \(W\), if \(\|\mathbf{h}_{U}-\mathbf{h}_{W}\|_{2}\leq\delta\) for every \(L^{\prime}\)-layer MPNN model \(\boldsymbol{\varphi}\) and Lipschitz \(\psi\colon\mathbb{R}^{d_{L^{\prime}}}\to\mathbb{R}^{n}\) with \(L^{\prime}\leq L\) and \(C_{(\boldsymbol{\varphi},\psi)}\leq C\), then \(\delta_{\mathsf{P}}(U,W)\leq\varepsilon\). We stress that the constants \(L\), \(C\), and \(\delta\) in the theorem statement are independent of the graphons \(U\) and \(W\). The proof of Theorem 6 is simple and elegant. That is, one assumes that the statement does not hold to obtain two sequences of counterexamples, which have to have convergent subsequences by compactness, and the limit graphons allow us to derive a contradiction. The other equivalences of Theorem 1 can be formalized as similar but simpler epsilon-delta statements. They follow by the same reasoning together with the universality of tree homomorphism densities [52] and the result of Boker [18]. See Appendix C.4 for the formal statements and their complete proofs. ## 5 Experimental evaluation In the following, we investigate the applicability of our theory on real-world prediction tasks. Specifically, we answer the following questions. **Q1**: To what extent do our graph metrics \(\delta_{\mathsf{P}}\) and \(\delta_{\mathsf{W}}\) act as a proxy for distances between MPNNs' vectorial representations? **Q2**: How do untrained MPNNs compare to their trained counterparts in terms of predictive performance? The source code of all methods and evaluation protocols are available at [https://github.com/nhuang37/finegrain_expressivity_GNN](https://github.com/nhuang37/finegrain_expressivity_GNN). We conducted all experiments on a server with 256 GB RAM and four NVIDIA RTX A5000 GPU cards. Fine-grained expressivity comparisons of MPNNs. To answer Q1, we construct a graph sequence that converges in our graph metrics. Given an MPNN, we compute the sequence of its embeddings on such graphs and the corresponding embedding distance using the \(\ell_{2}\)-norm. Hence, comparing different MPNNs amounts to comparing the convergence rate of their Euclidean embedding distances concerning the graph distances. Concretely, we simulate a sequence of 50 random graphs \(\{G_{i}\}\) for \(i\in[50]\) with 30 vertices using the stochastic block model, where \(G_{i}\sim\operatorname{SBM}(p,q_{i})\), with \(p=0.5\) and \(q_{i}\) increasing equidistantly in \([0.1,0.5]\). Let \(G\) denote the last graph in the sequence, and observe that \(G\) is sampled from an Erdos-Renyi model, i.e., \(G\sim\operatorname{ER}(p)\).
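A minimal sketch of this graph-sequence construction, assuming `networkx`; the text specifies only 30 vertices in total, so the two equal blocks of 15 below are an assumption.

```python
import numpy as np
import networkx as nx

p = 0.5
qs = np.linspace(0.1, 0.5, 50)   # q_i increases equidistantly in [0.1, 0.5]
graphs = [
    nx.stochastic_block_model([15, 15], [[p, q], [q, p]], seed=i)
    for i, q in enumerate(qs)
]
G = graphs[-1]                   # q_50 = 0.5 = p, i.e., an Erdos-Renyi graph ER(p)
```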
For \(i\in[50]\), we compute the Wasserstein distance \(\delta_{\mathsf{W},h}(G_{i},G)\) and the Euclidean distance \(\|\mathbf{h}_{G_{i}}-\mathbf{h}_{G}\|_{2}\).1 For demonstration purposes, we compare two common MPNN layers, GIN [115] and GraphConv [89], using sum aggregation normalized by the graph's order, varying the hidden dimensions and the number of layers. Figure 2 visualizes their normalized embedding distance and normalized graph distance, with an increasing number of hidden dimensions, from left to right. GIN (top row) and GraphConv (bottom row) produce more discriminative embeddings as the number of hidden dimensions increases, supporting Corollary 4. We observe similar behavior when increasing the number of layers; see Figure 3 in the appendix. Untrained GraphConv embeddings are more robust than untrained GIN embeddings to the choice of hidden dimensions and number of layers. Figure 4 in the appendix shows the same experiments on a real-world dataset, Mutag, part of the TUDataset [90]. We observe that increasing the number of hidden dimensions improves performance. Nonetheless, increasing the number of layers seems to first improve and then degrade performance. This observation coincides with the downstream graph classification performance, as discussed in the next section. Footnote 1: We compute \(\delta_{\mathsf{W},h}\) via a min-cost-flow algorithm and terminate at most 3 iterations after the WL colors stabilize. The surprising effectiveness of untrained MPNNs. To answer Q2, we compare popular MPNN architectures, i.e., GIN and GraphConv, with their untrained counterparts. For untrained MPNNs, we freeze the randomly initialized weights of their input and hidden layers and only optimize the output layer(s) used for the final prediction. We benchmark on a subset of the established TUDataset [90]. For each dataset, we run _paired_ experiments of trained and untrained MPNNs on the same ten random splits (train/test) and 10-fold cross-validation splits, using the evaluation protocol outlined in Morris et al. [90]. We report the test accuracy with 10-run standard deviation in Table 1 and the mean training time per epoch with standard deviation in Table 2 in the appendix. Table 1 and Table 2 show that untrained MPNNs perform as competitively as trained MPNNs while being significantly faster, with 20%-46% time savings. We also investigate the effect of the hidden dimension, i.e., the number of MPNN models in Theorem 6, and the number of layers, i.e., the number of \(1\)-WL iterations in the IDMs. Figure 6 in the appendix shows that increasing the hidden dimension improves the performance of untrained MPNNs, while the effect of the number of layers is more nuanced. As shown in Figure 7 in the appendix, increasing the number of layers first improves and then degrades untrained MPNNs' performance, which is likely due to the changes in MPNNs' ability to preserve graph distance observed in Figure 5 in the appendix. ## 6 Conclusion This work devises a deeper understanding of MPNNs' capacity to capture graph structure, precisely determining when they learn similar vectorial representations. To that end, we developed a comprehensive theory of graph metrics on graphons, demonstrating that two graphons are close in our metrics if, and only if, the outputs of all possible MPNNs are close, offering a more nuanced understanding of their ability to capture graph structure similarity.
In addition, we established a connection between the continuous extensions of 1-WL and MPNNs to graphons, tree distance, and tree homomorphism densities. Our experimental study confirmed the applicability of our theory in real-world prediction tasks. _In summary, our work establishes the first rigorous connection between the similarity of graphs and their learned vectorial representations, paving the way for a more nuanced understanding of MPNNs' expressivity and robustness abilities and their connection to graph structure._ \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Accuracy**\(\uparrow\) & Mutag & Imdb-Binary & Imdb-Multi & NCI1 & Proteins & Reddit-Binary \\ \hline GIN-m (trained) & 79.01 \(\pm\) 2.24 & 69.96 \(\pm\) 1.43 & 46.29 \(\pm\) 0.76 & **78.61 \(\pm\) 0.34** & **73.51 \(\pm\) 0.47** & **89.73 \(\pm\) 0.37** \\ GIN-m (untrained) & **82.56 \(\pm\) 3.12** & **70.70 \(\pm\) 0.60** & **47.59 \(\pm\) 0.96** & 77.82 \(\pm\) 0.55 & 73.45 \(\pm\) 0.30 & 82.32 \(\pm\) 0.45 \\ \hline GraphConv-m (trained) & **81.62 \(\pm\) 2.08** & 59.14 \(\pm\) 1.93 & 38.75 \(\pm\) 1.62 & **63.28 \(\pm\) 0.6** & 71.49 \(\pm\) 0.67 & **82.4 \(\pm\) 0.19** \\ GraphConv-m (untrained) & 78.03 \(\pm\) 1.57 & **65.77 \(\pm\) 1.32** & **43.29 \(\pm\) 0.96** & 62.36 \(\pm\) 0.45 & **71.83 \(\pm\) 0.42** & 77.15 \(\pm\) 0.29 \\ \hline \hline \end{tabular} \end{table} Table 1: Untrained MPNNs show competitive performance compared to trained MPNNs given sufficiently large hidden dimensionality (3-layer, 512-hidden-dimension). To be consistent with our theory, we use standard architectures with sum aggregation, layer-wise \(1/|V(G)|\) normalization, and mean pooling, denoted by “MPNN-m.” We report the mean accuracy \(\pm\) std over ten data splits. Figure 2: MPNNs preserve graph distance better when increasing the number of hidden dimensions. Comparatively, untrained GIN embeddings are more sensitive than untrained GraphConv to changes in the number of hidden dimensions. Looking forward, future research could focus on extending our theory to labeled graphs and graphons with labels from some compact metric space, namely, graphs and graphons with signals. This presents a challenge, as tree distance and tree homomorphism densities do not readily generalize to labeled graphons. Additionally, further quantitative versions of equivalences in Theorem 1, not resorting to epsilon-delta-style statements, and generalizing our results to the \(k\)-\(\mathsf{WL}\) are interesting avenues for future exploration. ## Acknowledgements This research project was started at the BIRS 2022 Workshop "Deep Exploration of non-Euclidean Data with Geometric and Topological Representation Learning" held at the UBC Okanagan campus in Kelowna, B.C. Jan Boker is funded by the European Union (ERC, SymSim, 101054974). Views and opinions expressed are, however, those of the author only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Ningyuan Huang is partially supported by the MINDS Data Science Fellowship from Johns Hopkins University. Soledad Villar is partially funded by the NSF-Simons Research Collaboration on the Mathematical and Scientific Foundations of Deep Learning (MoDL) (NSF DMS 2031985), NSF CISE 2212457, ONR N00014-22-1-2126 and an Amazon AI2AI Faculty Research Award.
Christopher Morris is partially funded by a DFG Emmy Noether grant (468502433) and RWTH Junior Principal Investigator Fellowship under Germany's Excellence Strategy.
2310.14416
ConViViT -- A Deep Neural Network Combining Convolutions and Factorized Self-Attention for Human Activity Recognition
The Transformer architecture has gained significant popularity in computer vision tasks due to its capacity to generalize and capture long-range dependencies. This characteristic makes it well-suited for generating spatiotemporal tokens from videos. On the other hand, convolutions serve as the fundamental backbone for processing images and videos, as they efficiently aggregate information within small local neighborhoods to create spatial tokens that describe the spatial dimension of a video. While both CNN-based architectures and pure transformer architectures are extensively studied and utilized by researchers, the effective combination of these two backbones has not received comparable attention in the field of activity recognition. In this research, we propose a novel approach that leverages the strengths of both CNNs and Transformers in a hybrid architecture for performing activity recognition using RGB videos. Specifically, we suggest employing a CNN network to enhance the video representation by generating a 128-channel video that effectively separates the human performing the activity from the background. Subsequently, the output of the CNN module is fed into a transformer to extract spatiotemporal tokens, which are then used for classification purposes. Our architecture has achieved new SOTA results with 90.05\%, 99.6\%, and 95.09\% on HMDB51, UCF101, and ETRI-Activity3D respectively.
Rachid Reda Dokkar, Faten Chaieb, Hassen Drira, Arezki Aberkane
2023-10-22T21:13:43Z
http://arxiv.org/abs/2310.14416v1
# ConViViT - A Deep Neural Network Combining Convolutions and Factorized Self-Attention for Human Activity Recognition ###### Abstract The Transformer architecture has gained significant popularity in computer vision tasks due to its capacity to generalize and capture long-range dependencies. This characteristic makes it well-suited for generating spatiotemporal tokens from videos. On the other hand, convolutions serve as the fundamental backbone for processing images and videos, as they efficiently aggregate information within small local neighborhoods to create spatial tokens that describe the spatial dimension of a video. While both CNN-based architectures and pure transformer architectures are extensively studied and utilized by researchers, the effective combination of these two backbones has not received comparable attention in the field of activity recognition. In this research, we propose a novel approach that leverages the strengths of both CNNs and Transformers in a hybrid architecture for performing activity recognition using RGB videos. Specifically, we suggest employing a CNN network to enhance the video representation by generating a 128-channel video that effectively separates the human performing the activity from the background. Subsequently, the output of the CNN module is fed into a transformer to extract spatiotemporal tokens, which are then used for classification purposes. Our architecture has achieved new SOTA results with 90.05%, 99.6%, and 95.09% on HMDB51, UCF101, and ETRI-Activity3D respectively. Activity Recognition, Transformer, CNN ## I Introduction Activity recognition can be defined as allowing the machine to recognize/detect the activity based on information received from different sensors. These sensors can be cameras, wearable sensors, and sensors attached to objects of daily use or deployed in the environment. In this work, we are interested in activity recognition using RGB videos. RGB videos are a complex type of data that contains complex spatiotemporal dependencies. To extract spatial features, we utilized a CNN module inspired by [1]. The primary objective of this CNN module is to enhance the representation of the video by extracting spatial tokens. The advantages of applying CNNs to video data have been extensively discussed in [1]. It has been affirmed that by applying small filters to localized neighborhoods of pixels, CNNs are capable of extracting fine-grained spatial tokens. Furthermore, it has been demonstrated that CNNs outperform self-attention mechanisms in terms of spatial token extraction. Moreover, by employing CNNs as a first step, the subsequent transformer module is able to perform self-attention on a reduced spatial set of tokens and extract temporal dependencies. To extract temporal features, we have used a video transformer architecture inspired by [2]. Within this framework, we have investigated two distinct architectures, each utilizing a different type of self-attention: Factorised Dot-Product and Factorised Self-attention. Both of these attention mechanisms apply self-attention to both spatial and temporal axes. Factorized Self-attention first applies spatial attention, followed by temporal attention on the output of the spatial attention. In contrast, the Dot-Product attention applies spatial and temporal attention to the input and subsequently fuses the results. Our experiments show that factorized self-attention yields better results than the other approaches.
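To make the distinction concrete, the following is a minimal PyTorch sketch of the factorized space-time self-attention described above (spatial attention within each frame, then temporal attention across frames); the dimensions, head count, and the omission of residual connections and normalization are illustrative simplifications, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FactorizedSelfAttention(nn.Module):
    """Factorized space-time self-attention (sketch): spatial attention
    within each frame, followed by temporal attention across frames."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (batch, time, space, dim)
        b, t, s, d = x.shape
        xs = x.reshape(b * t, s, d)       # attend over spatial tokens per frame
        xs, _ = self.spatial(xs, xs, xs)
        xt = xs.reshape(b, t, s, d).permute(0, 2, 1, 3).reshape(b * s, t, d)
        xt, _ = self.temporal(xt, xt, xt) # attend over time per spatial location
        return xt.reshape(b, s, t, d).permute(0, 2, 1, 3)

# Usage: out = FactorizedSelfAttention()(torch.randn(2, 8, 49, 128))
```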
The proposed architecture has achieved state-of-the-art performance on three benchmark datasets. We conducted tests on HMDB51 [3], UCF101 [4], and ETRI-Activity3D [5], obtaining state-of-the-art results of 90.05%, 99.6%, and 95.09%, respectively. ## II Related Works The existing literature on activity recognition and computer vision can be broadly categorized into three main groups: (1) CNN-based approaches, (2) Transformer-based approaches, and (3) Hybrid architectures combining CNNs and Transformers. ### _CNN-based approaches_ 3D Convolutional Neural Networks (CNNs) have traditionally been the primary choice for visual data processing, encompassing various types of visual data such as images and videos. Consequently, they have held a dominant position in the field of computer vision for a considerable time. However, with the adaptation of attention mechanisms and the transformer architecture from Natural Language Processing (NLP) to Computer Vision (CV), the landscape has witnessed significant changes [6]. Previous studies have attempted to address the challenges of activity recognition using purely 3D and 2D Convolutional Neural Networks [7, 8]. However, optimizing the results and achieving satisfactory performance with a pure CNN architecture for activity recognition has proven to be challenging, primarily due to the high computational demands associated with these architectures. To overcome these challenges, the I3D approach [9] introduced the concept of inflating pre-trained 2D convolution kernels, which allowed for better optimization of the network. In addition, other prior works focused on factorizing 3D convolution kernels in various dimensions to reduce computational complexity [10, 11, 12]. More recent studies have proposed techniques to enhance the temporal modeling ability of 2D CNNs [13, 14]. However, due to the inherent nature of CNNs, which aggregate information within a small window of the neighborhood, these approaches did not achieve significant improvements in performance. Taken together, these prior works have explored different strategies to address the challenges of activity recognition using CNN architectures. While attempts have been made to optimize and enhance the performance of pure CNNs, limitations related to computational requirements and the inherent nature of CNNs' spatial aggregation persist. ### _Transformer-based approaches_ Since the introduction of the vision transformer [6], numerous studies have embraced the transformer architecture for computer vision tasks. These works have consistently surpassed the results achieved by CNNs. This is due to the transformers' ability to capture long-range dependencies and effectively attend to important regions of the input through self-attention mechanisms. Several notable works have contributed to the adoption and advancement of the transformer architecture in computer vision. These include works such as [15, 16, 17, 18], which propose various variants for spatiotemporal learning in video analysis. These variants aim to harness the power of transformers in capturing both spatial and temporal information for more comprehensive video understanding. Video Vision Transformers (Timesformer [19] and ViViT [2]) are among the early Transformer approaches for action recognition. They introduce innovative embedding schemes and adaptations to ViT [17] and other related Transformers for modeling video clips.
In [19], the authors propose a tokenization scheme called uniform frame sampling, based on randomly selected frames from a video. Along similar lines, ViViT [2] introduced Tubelet Embedding to effectively preserve contextual temporal information within videos and to handle 3D volumes instead of frames. Four different variants were proposed based on the attention technique: spatiotemporal attention, factorized encoder, factorized self-attention, and factorized dot-product attention. Simultaneously, other research efforts have focused on mitigating the computational cost associated with transformer architectures while still achieving impressive results. An example of such work is the Swin Transformer [20], which presents an innovative architecture designed to strike a balance between computational efficiency and powerful performance. Collectively, these works have significantly propelled the adoption of transformer architectures in computer vision. By capitalizing on their ability to capture long-range dependencies and leverage self-attention mechanisms, these architectures have demonstrated remarkable capabilities in various visual tasks. The self-attention mechanism is, however, considered inefficient when it comes to encoding low-level features. To address this limitation, the Swin Transformer approach applies attention within a local 3D window. This localized attention mechanism allows for more efficient encoding of low-level features. ### _Hybrid approaches_ In recent research, efforts have been made to incorporate convolutional neural networks into the transformer architecture for image recognition tasks. However, these approaches have not adequately addressed the spatiotemporal aspect of videos. Recognizing this limitation, Uniformer [1] presented an architecture specifically tailored for video understanding, utilizing a concise transformer format based on 3D convolutions that unifies convolutions and transformers. By integrating convolutional operations into the transformer architecture, Uniformer aims to combine the strengths of both convolutional neural networks and transformers, resulting in an improved framework for feature encoding. In this work, we adopt 3D convolutions to address the inefficiency of self-attention in encoding low-level features. In fact, 3D convolutions enhance the capability of our model to capture both spatial and temporal information effectively. ## III Proposed Method ### _Architecture overview_ The proposed architecture consists of two main modules, as depicted in Figure 1: a CNN module to extract spatial features, followed by a transformer module. The CNN module plays a crucial role in capturing spatial information from the input data. It applies convolutional operations to extract relevant features that describe the spatial characteristics of the input images. The primary objective of this CNN module is to enhance the representation of the video by extracting spatial tokens. Its output is later fed to the transformer module. The transformer module is the factorized self-attention transformer proposed in [2]. It takes advantage of its self-attention mechanism to capture long-range dependencies and model the interactions between spatial features. This module leverages the encoded spatial information from the CNN module to extract spatiotemporal features that are vital for accurate action classification. Fig. 1: Overall proposed architecture for Human Activity Recognition.
Inspired by the contributions of these two works, we propose a hybrid approach that combines the strengths of CNNs for extracting spatial cues and the transformer architecture for extracting spatiotemporal tokens. Our CNN module aims to enhance the video representation by transforming it from a three-channel video to a 128-channel video. Subsequently, the transformer takes the output of the CNN module and applies a Patch Embedding similar to [1]. Then a factorized self-attention is applied, generating a rich spatiotemporal representation that is used by the classification head. ### _Patch Embedding_ The main purpose of Patch Embedding is to provide order information to the transformer by slicing the input into \(16\times 16\) patches. Our proposed patch embedding block is inspired by the design and implementation of Uniformer's Dynamic Position Embedding (DPE) architecture [1], since the use of DPE improves the state-of-the-art results by \(0.5\%\) and \(1.7\%\) on ImageNet and Kinetics-400, respectively. This shows that by encoding the position information, DPE can maintain the spatiotemporal order, thus contributing to better learning of the spatiotemporal representation [1]. The main contribution here is the use of a CNN layer with \(0\) padding to create a more adequate representation. Because the transformer does not take raw video as input, applying a CNN layer here is advantageous: this layer allows us to manipulate and adjust how the order information is introduced, which yields better results. ### _CNN Block_ The proposed CNN block aims to extract spatial information and offers a compact spatial representation to be fed to the transformer block. It is based on 3D-CNN and depth-wise CNN architectures (see Figure 1; a code sketch of the block follows this list): * **DW 3D-CNN \(3\times 3\times 3\)**: the depth-wise 3D-CNN aims to extract the spatial features of each 3D neighborhood (\(3\times 3\times 3\)). It consists of applying a single convolutional filter for each input channel, which allows better extraction of spatial features. * **3D-CNN \(1\times 1\times 1\)**: the 3D-CNN (\(1\times 1\times 1\)) aims to reduce the dimension of the input before applying the (\(5\times 5\times 5\)) filter, to save computation time. * **3D-CNN \(5\times 5\times 5\)**: the application of a 3D filter of size 5 (larger than 3) provides a more global representation of the spatial neighborhood. * **3D-CNN \(1\times 1\times 1\)**: the 3D-CNN (\(1\times 1\times 1\)) is intended to increase the size of the input to give more information to the following steps.
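A minimal sketch of the block listed above, assuming PyTorch; the intermediate channel width is a hypothetical placeholder, since the paper fixes only the kernel sizes and the 128-channel output of the CNN module.

```python
import torch
import torch.nn as nn

class CNNBlock(nn.Module):
    """Depth-wise 3x3x3 conv, then a 1x1x1 / 5x5x5 / 1x1x1 bottleneck."""
    def __init__(self, in_ch: int = 3, mid_ch: int = 32, out_ch: int = 128):
        super().__init__()
        # Depth-wise 3D conv: one filter per input channel (groups=in_ch).
        self.dw = nn.Conv3d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.reduce = nn.Conv3d(in_ch, mid_ch, kernel_size=1)   # cheapen the 5x5x5
        self.spatial = nn.Conv3d(mid_ch, mid_ch, kernel_size=5, padding=2)
        self.expand = nn.Conv3d(mid_ch, out_ch, kernel_size=1)  # widen the output
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        x = self.act(self.dw(x))
        x = self.act(self.reduce(x))
        x = self.act(self.spatial(x))
        return self.expand(x)

clip = torch.randn(2, 3, 16, 112, 112)   # a 16-frame RGB clip
print(CNNBlock()(clip).shape)            # torch.Size([2, 128, 16, 112, 112])
```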
### _Spatiotemporal Transformer Module_ The transformer block is the most important part of the proposed architecture. It takes the output of the CNN module (a spatial representation) in order to create a spatiotemporal representation. Attention allows a transformer to focus on the most important parts of the input and to understand long sequences, which makes it possible to extract temporal dependencies from a spatial sequence. To extract temporal features, two types of self-attention could be chosen: the factorized dot-product attention and the factorized self-attention proposed in [2]. Both apply attention to the spatial and temporal axes. Factorized self-attention: it consists of applying attention on the spatial axis of the input, followed by temporal attention. Applying attention on the spatial axis first allows us to take into consideration the dependencies between the spatial tokens and to deduce the most important parts that characterize this axis, thus preparing the input for the next self-attention operation. The result of the self-attention on the spatial axis is then the input to the self-attention on the temporal axis, whose goal is the extraction of the spatiotemporal features. Factorized dot-product attention: here, spatial and temporal attention are instead applied to the input in parallel, and their results are subsequently fused. Although both variants are interesting, the factorized self-attention appears more suitable for our architecture (non-video inputs). ## IV Experiments ### _Datasets_ * **ETRI-3D** [5]: ETRI-3D is the first large-scale RGB-D dataset of the daily activity of the elderly (\(112\,620\) samples). ETRI-3D is collected by Kinect v2 sensors and consists of three synchronized data modalities: RGB video, depth maps, and skeleton sequences. To film the visual data, 50 elderly subjects were recruited. The elderly subjects are in a wide age range from 64 to 88 years old, which leads to a realistic intra-class variation of actions. In addition, the authors acquired a dataset of 50 young people in their twenties in the same manner as for the elderly. * **UCF101** [4]: UCF101 is an action recognition dataset of realistic action videos collected from YouTube with 101 action categories. With 13,320 videos of 101 action categories, UCF101 offers a wide variety of actions with the presence of large variations in camera movement, object appearance and pose, object scale, viewpoint, cluttered background, lighting conditions, etc. * **HMDB51** [3]: The HMDB51 dataset is a large collection of realistic videos from a variety of sources, including movies and web videos. The dataset consists of 6,849 video clips from 51 action categories (such as "jump" and "laugh"), with each category containing at least 101 clips. The original evaluation scheme uses three different training/test splits. Within each split, each action class has 70 clips for training and 30 clips for testing. ### _Visualization of ConViViT shallow and deep layer outputs_ Figure 2 visualizes the effect of our spatial and spatiotemporal blocks in the shallow and deep layers. We observe that the output of the first CNN block localizes the person (see Figure 2(a)). The output of the next 3D-CNN block, which is the output of the spatial module, gives more importance to the person performing the action (see Figure 2(b)). Figure 2(c) shows five attention maps of ConViViT computed for a sequence of five images showing an action from the HMDB51 dataset. Each attention map visualizes the attention weights computed between each pair of patches in the image. The last attention map, which refers to the last frame of the action sequence, shows that our model has succeeded in capturing the entire trajectory of the action (red zones).
### _Ablation study_ In this section, we investigate the influence of the different modules of the proposed architecture on the overall performance. All experiments in this study are conducted on the HMDB51 dataset. We focus mainly on the usefulness of using a CNN block to extract spatial features, as well as the usefulness of factorized attention. As a reminder, the factorized attention block is inspired by the ViViT [2] architecture and the CNN block by Uniformer [1]. In order to study the impact of the CNN and of factorized attention, we compare the variants of the proposed architecture with Uniformer and ViViT. The ViViT architecture is based only on the transformer, which applies factorized attention directly to the input. Uniformer is a hybrid architecture based on the general attention formula introduced in the first vision transformer [6]. #### IV-C1 Factorized Attention vs Factorized Dot-Product Attention We compare the two variants of self-attention: the factorized self-attention used in our architecture and the factorized dot-product. As illustrated in Table I (first two rows), the factorized attention outperforms the factorized dot-product architecture. #### IV-C2 Impact of the spatial block To illustrate the importance of the proposed spatial block, we compare our results with those obtained by ViViT [2]. Table I (first and third rows) shows that the proposed architecture exceeds ViViT by \(9.87\%\). We opted for a hybrid CNN-transformer architecture in order to prepare a spatial representation before applying any type of attention. The result of our test with dot-product attention supports our hypothesis regarding the use of CNNs before transformers in computer vision. In order to study the number of CNN blocks required in the CNN module, we compare the results obtained by our architecture with a single CNN block against two CNN blocks. We notice that using two CNN blocks yields \(90.05\%\) accuracy versus \(64.47\%\) with one CNN block (see Table I, first and fourth rows). #### IV-C3 Spatiotemporal block To prove the value of our spatiotemporal block, we compare our results with those of Uniformer [1], which also opted for a hybrid architecture but used the general attention formula introduced in the first vision transformer [6]. This experiment reveals an accuracy improvement of \(4.69\%\) when comparing our architecture (\(90.05\%\)) with the variant using that original attention [6] (\(85.36\%\)) (see Table I, first and fifth rows). Applying standard attention to the output of the CNN gives good results, but our choice of factorized attention is better, because applying attention along one axis allows the model to see the dependencies between the elements on that axis. Thus, by applying attention to the spatial and then the temporal axis, we obtained results that exceed those of Uniformer. ### _Comparison with state-of-the-art_ Table II compares our architecture with state-of-the-art architectures on HMDB51; we added the results of Uniformer and ViViT since they had not previously been tested on HMDB51. Our architecture improved the previous state of the art by \(2.49\%\) and achieved a new SOTA result of \(90.05\%\). Table III compares our architecture with seven other state-of-the-art architectures tested on UCF101. The results of Uniformer and ViViT are not included because they were not tested on UCF101.
Our architecture improves the previous state of the art by 0.22% and achieves a new SOTA result of \(99.6\%\). Table IV compares our architecture with the FSA-CNN [5] architecture. FSA-CNN is a CNN architecture that takes as input videos (RGB) and/or skeleton data. It is based on a deep CNN network and an innovative approach to replacing the activation functions. Our model outperforms the FSA-CNN [5] RGB model by more than \(4.9\%\) and FSA-CNN RGB+S by \(1.39\%\). Fig. 2: Visualization of spatial and spatiotemporal Transformer module outputs on a clip from the HMDB51 dataset. Although the authors in [5] claim to include spatiotemporal variation in action data, a CNN network that takes RGB video and skeleton data as input is not able to extract a complete spatiotemporal representation, due to the nature of CNNs and the complexity of the data (two modalities of very different types). Our architecture outperforms FSA-CNN thanks to the proposed transformer, which supports the extraction of spatiotemporal dependencies. ## Conclusion and perspectives In this paper, we propose a novel sequential combination of 3D convolutions and a spatiotemporal transformer. Experiments show that the proposed architecture achieves new SOTA results with \(90.05\%\), \(99.6\%\), and \(95.09\%\) on the HMDB51, UCF101, and ETRI-Activity3D datasets respectively. In future work, we will investigate further schemes for combining CNN architectures with transformers.
2306.09389
ST-PINN: A Self-Training Physics-Informed Neural Network for Partial Differential Equations
Partial differential equations (PDEs) are an essential computational kernel in physics and engineering. With the advance of deep learning, physics-informed neural networks (PINNs), as a mesh-free method, have shown great potential for fast PDE solving in various applications. To address the low accuracy and convergence problems of existing PINNs, we propose a self-training physics-informed neural network, ST-PINN. Specifically, ST-PINN introduces a pseudo-label-based self-learning algorithm during training. It employs the governing equation as the evaluation index for pseudo labels and selects the highest-confidence examples from the sample points to attach pseudo labels to. To the best of our knowledge, we are the first to incorporate a self-training mechanism into physics-informed learning. We conduct experiments on five PDE problems in different fields and scenarios. The results demonstrate that the proposed method allows the network to learn more physical information and benefits convergence. ST-PINN outperforms existing physics-informed neural network methods and improves the accuracy by a factor of 1.33x-2.54x. The code of ST-PINN is available at GitHub: https://github.com/junjun-yan/ST-PINN.
Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhoui, Jie Liu
2023-06-15T15:49:13Z
http://arxiv.org/abs/2306.09389v1
# ST-PINN: A Self-Training Physics-Informed Neural Network for Partial Differential Equations ###### Abstract Partial differential equations (PDEs) are an essential computational kernel in physics and engineering. With the advance of deep learning, physics-informed neural networks (PINNs), as a mesh-free method, have shown great potential for fast PDE solving in various applications. To address the low accuracy and convergence problems of existing PINNs, we propose a self-training physics-informed neural network, ST-PINN. Specifically, ST-PINN introduces a pseudo-label-based self-learning algorithm during training. It employs the governing equation as the evaluation index for pseudo labels and selects the highest-confidence examples from the sample points to attach pseudo labels to. To the best of our knowledge, we are the first to incorporate a self-training mechanism into physics-informed learning. We conduct experiments on five PDE problems in different fields and scenarios. The results demonstrate that the proposed method allows the network to learn more physical information and benefits convergence. ST-PINN outperforms existing physics-informed neural network methods and improves the accuracy by a factor of 1.33x-2.54x. The code of ST-PINN is available at GitHub: [https://github.com/junjun-yan/ST-PINN](https://github.com/junjun-yan/ST-PINN). partial differential equations, physics-informed neural networks, pseudo label, self-training ## I Introduction Partial differential equations (PDEs) are crucial in physics and engineering fields such as computational fluid dynamics, electromagnetic theory, and quantum mechanics [1, 2, 3]. However, numerical methods are sometimes computationally expensive for inverse problems, complex geometric domains, and high-dimensional spaces [4]. With the emergence of deep learning, some data-driven models have successfully been applied in the physics and engineering fields [5, 6, 7]. Although data-driven models can establish a functional map between input and output data, their accuracy is closely tied to the size and distribution of the data. Furthermore, data-driven models overlook the prior physical knowledge available for physics and engineering problems, which wastes information. Many researchers have addressed the above issue by combining prior physics knowledge with data-driven models. This approach, known as physics-informed learning [4, 5], has led to the development of physics-informed neural networks (PINNs), which embed PDEs into the loss function and convert numerical computation problems into optimization problems [8, 9]. PINNs require only a small amount of (or even no) supervised data to train. Moreover, PINNs are mesh-free, which means they can randomly sample points in the domain as unlabeled training data without generating a mesh. While theoretical analyses suggest that PINNs can converge to solutions in some situations, their accuracy is still inadequate in practical applications [10, 11, 12, 13]. Semi-Supervised Learning (SSL) is a machine learning paradigm between data-driven supervised learning and unsupervised learning. It is suited to scenarios where only a small amount of labeled data is available alongside a large amount of unlabeled data [14, 15, 16]. SSL methods make full use of the unlabeled data to improve the accuracy of the models.
One of the most widely used SSL methods is self-training, which assigns pseudo labels to the most confident unlabeled samples and includes these pseudo labels in supervised training [17, 18]. The most challenging part of this process is selecting the unlabeled samples to which pseudo labels should be assigned. Although self-training has successfully been applied in many fields, there have been few studies in the context of physics-informed learning [14, 16]. In this paper, we propose ST-PINN, which combines self-training with physics-informed neural networks (PINNs) to leverage unlabeled data and physical information. Our work is motivated by two key factors. Firstly, in many practical applications, obtaining the full range of physics data in the training domain can be expensive. Thus, only a few observed data points are available, which aligns with the SSL scenario. Secondly, the residual loss of the physics equation is a natural criterion for selecting pseudo points, since predicted values that conform to the equation are likely to be more accurate. Specifically, we begin by warming up the network with traditional training steps. After several iterations, the network predicts all the sample points and computes the residual loss of the physics equation. Then, ST-PINN selects the sample points with the minimum equation loss and assigns them pseudo labels. In the next iteration, the pseudo points are incorporated as supervised data into training. At the end of every few iterations, we generate new pseudo points using the above process. With an appropriate pseudo-generating strategy, the pseudo points can be considered an extension of the supervised data, which benefits network convergence. In general, our contributions can be summarized as follows: * We propose ST-PINN, a pseudo-label-based self-training framework for training PINNs. To the best of our knowledge, this is the first research to combine self-training mechanisms and physics-informed learning. ST-PINN uses the physical equation to generate pseudo points and treats the pseudo points as a form of supervised data, which makes full use of unlabeled data with physics information. * We design a strategy for pseudo-label generation and introduce three hyperparameters to control the quantity and quality of pseudo points. These three hyperparameters stabilize the training process and trade off efficiency against accuracy, which is essential in ST-PINN. * We conduct a series of experiments on five different PDEs. Compared with the original PINN, our model can improve the accuracy by about 1.33\(\times\)-2.54\(\times\). The experimental results demonstrate that the self-training mechanism can benefit network convergence and improve prediction accuracy. The remainder of this paper is organized as follows: In section II, we introduce the related works and background. Next, we describe the details of our method and the pseudo point generation strategy in section III. In section IV, we present our experimental environment and evaluate our method on different PDEs. Finally, we give the conclusions in section V. ## II Related Works Neural networks are one of the most well-known data-driven machine learning models and are widely used in various fields [19, 20]. Physics-informed neural networks (PINNs) are a specific type of neural network designed for solving physics problems. These networks not only learn from the distribution of supervised data but also aim to comply with the laws of physics.
Compared to traditional neural networks [6, 7], PINNs incorporate the governing physical equations into the loss function, enabling them to learn a more generalized model with less labeled data. Neural networks were first used to solve partial differential equations in the 1990s [21]. However, this approach did not gain much attention at the time due to the limitations of hardware and computing methods. With the development of deep learning, Raissi et al. proposed the framework of PINNs, leading to significant subsequent research in this emerging cross-disciplinary field [8], which has become a hotspot in scientific computation and artificial intelligence. Several studies have shown that neural networks can converge to the solution of PDEs under certain conditions. For instance, Shin et al. analyzed the consistency of using PINNs to solve PDEs. They demonstrated that the upper bound on the generalization error is controlled by the training error and the number of training data under Hölder continuity assumptions [11]. Similarly, Mishra et al. established an abstract theoretical framework for using PINNs to solve forward PDE problems [12]. They obtained a similar error estimation result under a weaker assumption and generalized it to the inverse problem. Despite these promising results, the accuracy of PINNs is often unsatisfactory, and they are difficult to train or fail to converge in certain situations [13]. To address the challenges of PINNs, several researchers have proposed new models and training algorithms to refine the original method [22, 23, 24, 25, 26, 27, 28]. For instance, Jagtap et al. proposed a nonlinear conservation law physics-informed neural network in the discrete domain (c-PINN) [24], which divides the solution domain into sub-domains connected by a conservation law. This strategy can improve efficiency and accuracy; however, the connections between sub-domains must abide by the conservation law. Therefore, Ameya et al. proposed the X-PINN model, which further extends c-PINN to overcome this limitation [25]. In another approach, Jeremy et al. proposed g-PINN, which includes higher derivatives of the equations in the loss function to provide extra information for training [26]. Their experiments suggest that these derivative forms can help the model converge. To account for the different learning difficulties in different regions of the training domain, Wu et al. proposed a self-adaptive point-sampling algorithm that increases the sampling rate in regions with high physics loss [27]. Since physics data in many fields often contain noise, Yang et al. proposed a Bayesian physics-informed neural network (B-PINN) to predict accurate values from noisy data [28]. Furthermore, PINNs have successfully been applied to practical problems in various fields, including fluid mechanics, mechanics of materials, power systems, and biomedicine [29, 30, 31]. However, few studies have investigated applying semi-supervised learning methods to PINNs. Self-training, also known as self-labeling or pseudo-labeling training, is a primary method in semi-supervised learning [14, 15, 16, 17, 18]. The main idea behind self-training is to use the network's predictions as pseudo labels and thereby convert unsupervised data into supervised data. This approach allows the model to learn from unlabeled data, which suits scenarios where label generation is costly, such as solving PDEs.
In this paper, we employ the self-training mechanism to improve the accuracy of the PINN network by enhancing its physics learning process. ## III Method We first define the general form of a PDE problem with boundary conditions as follows: \[N\left[u(t,x);\lambda\right]+u_{t}=0,\quad x\in\Omega\subset\mathbb{R}^{d},\;t\in[0,T] \tag{1}\] \[B\left[u(t,x)\right]=0,\quad x\in\partial\Omega \tag{2}\] where \(u(t,x)\) denotes the solution to the PDE at time \(t\) and location \(x\); \(\lambda\) represents the unknown parameters in the equation; \(u_{t}\) is the time derivative of the solution; \(N[\cdot]\) is a non-linear differential operator; and \(B[\cdot]\) represents the boundary conditions, which can be Dirichlet, Neumann, or mixed. In PINNs, the initial condition can be treated as a Dirichlet boundary condition. ### _The Overview of the Framework_ Fig. 1 shows the architecture of the proposed network, which uses time \(t\) and spatial coordinates (e.g. \(x\), \(y\)) as inputs and predicts the physical fields (e.g. \(u\) and \(v\) for the velocity fields in two directions and \(p\) for the pressure field) as outputs. Fig. 1: The architecture of ST-PINN. The blue line displays the training process, while the red line presents the pseudo label generating process. There are two major processes involved in one iteration: training the network and generating pseudo labels. The blue line represents the training process of the network model, which is similar to the original PINNs. The sample points and boundary points are input to the network to generate the prediction, from which the physics residual loss of the governing equations is computed. Although PINNs can learn without supervised data, they usually require a few labeled intra-domain points to speed up convergence and improve accuracy. The unsupervised data learn the physical information by combining the governing equations into the loss function and using the backpropagation algorithm to train the network. Some articles refer to combining governing equations into the loss function as a "soft constraint", because there is no guarantee that the network output will satisfy the PDEs. In our paper, we train the network by directly learning the residual between the pseudo labels and the prediction, which allows it to learn more physical information. To further improve the training process and accuracy, we introduce a novel pseudo-labeling technique that integrates the generated pseudo labels into the training process. As shown in Fig. 1, the red line illustrates the main difference between the original PINN and ST-PINN. In ST-PINN, after several iterations, the network predicts all the sample points and uses these predictions to compute the residual loss of the equation. By sorting the residual loss, the network selects a subset of sample points with small equation residual loss for pseudo labeling; these points are trained as supervised data in the next iteration. At the end of each iteration, the proposed method generates new pseudo points through the above process. Therefore, the number of pseudo points gradually increases during training. If the point-generating strategy is suitable, there will be a positive feedback loop between the pseudo point generation and the network training. These pseudo points can improve the accuracy of the network, and the more accurate the network becomes, the more pseudo points it can generate, leading to convergence. However, theoretical analysis of the self-training and pseudo-label points requires extensive work, which is beyond the scope of this study. ### _Loss Function_ The training points of the network can be divided into several parts, each with its corresponding loss function.
The first part consists of sample points randomly sampled from the spatiotemporal domain to learn the physics information. The residual loss of the governing equations is shown in (3), where \(f(t,x,u)=N\left[u(t,x);\lambda\right]+u_{t}\) is the governing equation, \(t_{f}\), \(x_{f}\) represent the randomly sampled points in the training domain, and \(\hat{u}_{f}\) is the prediction of the neural network. The network learns the physical information by minimizing the mean squared error of the equation residual, which is a crucial part of traditional PINNs. We aim to make the output of PINNs comply with the equation as closely as possible by minimizing \(f\). \[L_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left|f(t_{f}^{i},x_{f}^{i},\hat{u}_{f}^{i})\right|^{2} \tag{3}\] \[L_{d}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\left|\hat{u}_{d}^{i}-u_{d}^{i}\right|^{2} \tag{4}\] \[L_{p}=\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\left|\hat{u}_{p}^{i}-u_{p}^{i}\right|^{2} \tag{5}\] \[L=W_{f}*L_{f}+W_{d}*L_{d}+W_{p}*L_{p} \tag{6}\] The second part comprises labeled data, which include the boundary and initial points and sometimes a small number of intra-domain points. These points can be generalized as data points, and their loss is calculated using (4), where \(\hat{u}_{d}\) and \(u_{d}\) represent the predictions and corresponding labels, respectively. The initial and boundary conditions are essential for PINNs to find the correct solution. Although PINNs can learn the PDE solution without supervised data, adding a few labeled intra-domain points can improve convergence speed and accuracy. The third part, the loss function for pseudo points defined in (5), is the main difference from the original PINNs. Here, \(\hat{u}_{p}\) and \(u_{p}\) denote the predictions and the corresponding pseudo labels, respectively. These pseudo labels are generated in previous iterations and represent high-confidence predictions. It is worth noting that the representations of the pseudo points and data points are similar, and they could use the same loss function defined in Equation (4). However, it is beneficial in practice to separate them into distinct losses and adjust the loss weights to improve accuracy. Furthermore, splitting these loss functions helps to analyze the behavior of the self-training mechanism. Finally, we compute the total loss by taking a weighted sum of the above losses, as shown in Equation (6).
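To illustrate how the terms (3)-(6) combine, here is a minimal sketch assuming PyTorch (the released code uses TensorFlow 1.15); the function names, weight defaults, and the residual callback `f_residual` are illustrative placeholders.

```python
import torch

def st_pinn_loss(model, f_residual, x_f, x_d, u_d, x_p, u_p,
                 w_f=1.0, w_d=1.0, w_p=1.0):
    """Weighted sum of equation, data, and pseudo-label losses, Eqs. (3)-(6).

    f_residual(model, x) should return the PDE residual f(t, x, u_hat)
    at the sample points x, computed via automatic differentiation.
    """
    loss_f = f_residual(model, x_f).pow(2).mean()        # Eq. (3): physics
    loss_d = (model(x_d) - u_d).pow(2).mean()            # Eq. (4): labeled data
    # Eq. (5): pseudo points; skip when none have been generated yet.
    loss_p = (model(x_p) - u_p).pow(2).mean() if len(x_p) else torch.tensor(0.0)
    return w_f * loss_f + w_d * loss_d + w_p * loss_p    # Eq. (6)
```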
### _The Pseudo Points Generation Strategy_ Selecting unsupervised data to generate pseudo labels is a crucial and subjective procedure in self-training [15]. In classification problems, the traditional approach to pseudo-labeling is to set a threshold on the classification confidence. At the end of each iteration, the unlabeled data with high classification confidence are assigned pseudo labels and included in the training set. As the training progresses, the number of pseudo points increases, and the network output becomes more accurate. Eventually, the unsupervised data become labeled, effectively transforming them into supervised data. However, solving PDEs is not a classification problem, and self-training cannot be applied to PINNs directly. Firstly, using a threshold is not always convenient, since the confidence depends on the physics equations. Different problems need different threshold values, and even the same PDE with different initial or boundary conditions may need different thresholds. Moreover, it is sometimes challenging to choose the thresholds without experiments. Therefore, ST-PINN uses a quantile-based indicator to select the pseudo points, such as choosing the top \(q\)% of sample points as the pseudo points. The second problem is that generating pseudo points every iteration is unnecessary, leading to useless overhead and decreased efficiency. To address this, we propose reducing the frequency of pseudo point updates in ST-PINN, with the update frequency controlled by a hyperparameter \(p\). This approach strikes a balance between efficiency and accuracy. During training, the network parameters are updated at every iteration, and the pseudo points generated by the previous network may change under the new network. To ensure accurate propagation of physical information and avoid error accumulation, ST-PINN updates or replaces all pseudo points with the output of the new network, which differs from traditional self-training methods. Another challenge with selecting pseudo points based solely on the PDE is that adding the pseudo point residual into the loss function can cause fluctuations in the training process. To improve stability, ST-PINN introduces a stable coefficient, denoted by \(r\), which measures the number of consecutive times a point must remain a candidate before a pseudo label is attached. Specifically, during training, we maintain a window of flags for all sample points, recording the number of consecutive times each point has been a candidate. Pseudo labels are only attached once the flags of the corresponding points surpass \(r\). By setting an appropriate value for \(r\), the network can improve the quality of pseudo labels and ensure the stability of the training process. The stable coefficient also serves as a mechanism to control the number of pseudo points: at the beginning of training, it limits the number of pseudo points to warm up the network, and as the iteration count increases, it allows the network to generate more and more pseudo points, acting as a pseudo point schedule. In general, our pseudo-labeling strategy introduces three hyperparameters: the update frequency \(p\), the maximum rate \(q\), and the stable coefficient \(r\). The pseudo point selection procedure is described in Algorithm 1. First, the network generates predictions for the sample points and computes their equation residual loss (lines 3-4). Then, ST-PINN sorts the candidate points by this loss and selects the top \(q\)% of sample points as pseudo point candidates (line 6). After that, the proposed model updates the consecutive-time flags and chooses candidate points whose flag surpasses the stable coefficient \(r\) as pseudo points (lines 7-14). Once the pseudo point generation completes, ST-PINN resets the flags of the non-candidates (lines 15-17). In Section 4, we provide a detailed evaluation of our method.
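Read alongside Algorithm 1 below, the following NumPy sketch shows one way the selection step with the stability counter could look; the array names mirror the pseudocode, but the implementation is an illustration under these assumptions, not the released code.

```python
import numpy as np

def select_pseudo_points(x_f, u_hat, residual, flags, q=0.2, r=10):
    """Pick the top-q fraction of sample points by (lowest) equation
    residual, and attach pseudo labels only to points that have stayed
    among the candidates for more than r consecutive checks."""
    k = int(len(residual) * q)
    order = np.argsort(residual)          # ascending: most confident first
    candidates, rest = order[:k], order[k:]

    flags[candidates] += 1                # consecutive-candidate counter
    flags[rest] = 0                       # reset non-candidates

    chosen = candidates[flags[candidates] > r]
    return x_f[chosen], u_hat[chosen], flags

# Toy usage: with fixed residuals the same points stay candidates.
rng = np.random.default_rng(0)
flags = np.zeros(1000, dtype=int)
res = rng.random(1000)
for step in range(12):                    # pretend 12 update rounds
    x_p, u_p, flags = select_pseudo_points(
        rng.random((1000, 2)), rng.random(1000), res, flags)
print(len(x_p))                           # 200 once the flags exceed r
```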
```
1: \(t_{p},x_{p},u_{p}\leftarrow\emptyset\)
2: for \(i=0\) to \(N_{f}-1\) do
3:   \(\hat{u}_{f}^{i}\gets h^{(k)}\left(t_{f}^{i},x_{f}^{i}\right)\)
4:   \(L_{f}^{i}\gets f\left(t_{f}^{i},x_{f}^{i},\hat{u}_{f}^{i}\right)\)
5: end for
6: idx = argPartition(\(L_{f},t_{f},x_{f},\hat{u}_{f},q\))
7: for \(i=0\) to \(N_{f}\times q-1\) do
8:   flag[idx[i]] \(+=1\)
9:   if flag[idx[i]] \(>r\) then
10:    \(t_{p}\gets t_{p}\cup t_{f}^{i}\)
11:    \(x_{p}\gets x_{p}\cup x_{f}^{i}\)
12:    \(u_{p}\gets u_{p}\cup\hat{u}_{f}^{i}\)
13:  end if
14: end for
15: for \(i=N_{f}\times q\) to \(N_{f}-1\) do
16:  flag[idx[i]] = 0
17: end for
```
**Algorithm 1** Generation of pseudo points ## IV Experimental Results We conduct a series of experiments on five different PDEs from different fields and scenarios: the Burgers equation, the diffusion-reaction equation, the diffusion-sorption equation, the shallow-water equations, and the compressible Navier-Stokes equations. All the training data are downloaded from the PDEBench dataset [32]. The equation settings and parameters are the defaults. Our deep learning framework is TensorFlow 1.15. The accelerator is an NVIDIA P100. ### _Burgers Equation_ The Burgers equation and the corresponding initial condition can be described by (7) and (8), which model non-linear behavior and diffusion processes in fluid dynamics: \[\partial_{t}u\left(t,x\right)+\partial_{x}\left(u^{2}\left(t,x\right)/2\right)=\nu/\pi\,\partial_{xx}u\left(t,x\right), \tag{7}\] \[u\left(0,x\right)=u_{0}\left(x\right),\;\;x\in(0,1) \tag{8}\] where \(\nu=0.01\) represents the diffusion coefficient. The boundary condition is periodic. Equation (9) describes the initial condition, which is a superposition of sinusoidal waves: \[u_{0}\left(x\right)=\sum_{k_{i}=k_{1},\ldots,k_{N}}A_{i}\sin\left(2\pi n_{i}x/L_{x}+\phi_{i}\right) \tag{9}\] where \(L_{x}\) is the calculation domain size; \(n_{i}\), \(A_{i}\), and \(\phi_{i}\) are randomly sampled values. \(n_{i}\) is a random integer in \([1,8]\); \(A_{i}\) is a random float number uniformly chosen in \([0,1]\); and \(\phi_{i}\) is a randomly chosen phase in \((0,2\pi)\). Note that \(N=2\) in this equation. The networks were trained over a spatiotemporal domain of \([0,1]\times[0,2]\), discretized into \(N_{x}\times N_{t}=1024\times 256\) points. Both the PINN and ST-PINN models utilized fully connected neural networks with four layers and 32 cells in each layer. We selected the network structure by grid search. The number of boundary points was 512, and the number of initial points was 1024. We also added 1000 intra-domain labeled data points to improve training efficiency and accuracy. Both networks were trained for 20,000 iterations using the Adam optimizer, with a learning rate of \(10^{-3}\) and the activation function set to \(tanh\). Because the total number of points in the training domain (\(1024\times 256=262{,}144\)) was too large to input into the network at once, we randomly sampled 20,000 points for each iteration, which can be considered a mini-batch approach with a batch size of 20,000. Only sample points and pseudo points were generated within each batch; the other points (boundary, initial, and data points) were included in training at every iteration. It is worth noting that, unless specifically mentioned, the training environment and network configuration were kept the same for both PINN and ST-PINN, ensuring a fair comparison of their solution prediction accuracy.
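For orientation, the Burgers configuration described above (a 4-layer, 32-unit tanh network, Adam at \(10^{-3}\), 20,000-point batches) might be set up as follows; this is a sketch assuming PyTorch rather than the authors' TensorFlow 1.15 code, with hypothetical helper names.

```python
import math
import torch
import torch.nn as nn

# Four hidden layers of 32 units with tanh, as described above.
def make_pinn(in_dim=2, width=32, depth=4, out_dim=1):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

def burgers_residual(model, tx, nu=0.01):
    """f = u_t + u*u_x - (nu/pi)*u_xx, via automatic differentiation."""
    tx = tx.requires_grad_(True)
    u = model(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:2]
    return u_t + u * u_x - (nu / math.pi) * u_xx

model = make_pinn()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# One 20,000-point batch; columns are (t, x) with t in [0, 2], x in [0, 1].
tx = torch.rand(20000, 2) * torch.tensor([2.0, 1.0])
opt.zero_grad()
loss = burgers_residual(model, tx).pow(2).mean()
loss.backward()
opt.step()
```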
As for the pseudo point generation strategy, the network updates at most 20% of the sample points every 100 iterations, and the stable coefficient is set to 10, which is a relatively conservative setting. Therefore, the number of pseudo points is small at the beginning of training. This setting ensures the network does not deviate from the solution even if it learns some wrong information. Fig. 2 illustrates the predictions of the PINN and ST-PINN models at three different times (t=0.3, t=0.6, and t=1), where the blue line represents ST-PINN and the red line PINN. Both networks can fit the solution but have difficulties predicting the peaks and troughs, particularly the PINN. This phenomenon becomes more evident as time increases; at the latest time shown, the PINN can only forecast the overall form of the solution without fitting the wave accurately. In contrast, the result obtained with ST-PINN is much better, with its prediction being nearer to the solution at almost every peak and trough. The relative L2 error for ST-PINN (6.60e-2) is also better than that for PINN (8.01e-2). One possible reason for this phenomenon is that wave peaks and troughs are complex parts of the solution of a PDE and are challenging to learn; the self-training mechanism therefore helps ST-PINN learn more physical information there. ### _Diffusion-Reaction Equation_ The one-dimensional diffusion-reaction equation and its corresponding initial condition are expressed as follows: \[\partial_{t}u\left(t,x\right)-\upsilon\partial_{xx}u\left(t,x\right)-\rho u\left(1-u\right)=0, \tag{10}\] \[u(0,x)=u_{0}(x),\;\;\;x\in(0,1) \tag{11}\] where \(\upsilon\) is the diffusion coefficient and \(\rho\) denotes the mass density, which are set to 0.5 and 1, respectively. The boundary and initial conditions are the same as those for the Burgers equation, with periodic boundary conditions and the superposition of sinusoidal waves defined in (9) used as the initial condition. This equation combines a diffusion process and a rapid evolution from a source term, making it a challenging problem that can measure the network's ability to capture very swift dynamics. The spatiotemporal domain for training is \([0,1]\times[0,1]\), discretized into \(N_{x}\times N_{t}=1024\times 256\) points. The networks are trained using the Adam optimizer for 20,000 iterations, followed by further refinement using the Limited-memory Broyden-Fletcher-Goldfarb-Shanno Bound (L-BFGS-B) optimizer for a maximum of 5,000 iterations. L-BFGS-B is a second-order optimizer that can effectively decrease the loss with limited memory expenditure. Other settings (e.g. the network structure and pseudo point generation strategy) are similar to those for the Burgers equation. Fig. 3 displays the variation of the loss function during the training process for the diffusion-reaction equation. The red line represents the loss of PINN, while the blue line corresponds to ST-PINN. At the beginning of training, the difference in loss between PINN and ST-PINN is minimal. However, after 15k iterations, the self-training mechanism generates enough pseudo points to influence the training process, which helps the network learn more physical information, leading to a faster reduction of the loss to a lower level in ST-PINN. When the L-BFGS-B optimizer takes over after a total of 20k iterations, the loss in ST-PINN descends even faster. This phenomenon highlights the advantage of self-training in improving performance. From Fig.
3, we can see that the loss in PINN converges early, reaching its minimum value at approximately 23k iterations. In contrast, the loss in ST-PINN continues to decrease until the training stops. As a result, the final L2 error in ST-PINN (9.61e-3) is better than that in PINN (4.22e-2). ### _Diffusion-Sorption Equation_ The one-dimensional diffusion-sorption equation models a diffusion process that is retarded by a sorption process: \[\partial_{t}u\left(t,x\right)=D/R\left(u\right)\,\partial_{xx}u\left(t,x\right),\quad x\in(0,1),\;t\in(0,500] \tag{12}\] where \(D=5\times 10^{-4}\) is the effective diffusion coefficient and \(R(u)\) is the retardation factor representing the sorption that hinders the diffusion process: \[R(u)=1+\frac{1-\phi}{\phi}\rho_{s}kn_{f}u^{n_{f}-1} \tag{13}\] Here, \(\phi=0.29\) represents the porosity of the porous medium; \(\rho_{s}=2888\) denotes the bulk density; \(k=3.5\times 10^{-4}\) is Freundlich's parameter; and \(n_{f}=0.875\) is Freundlich's exponent. The boundary conditions are defined by: \[u(t,0)=1.0 \tag{14}\] \[u(t,1)=D\partial_{x}u\left(t,1\right) \tag{15}\] The initial condition is generated from random values with a uniform distribution. The discrete training domain is \(N_{x}\times N_{t}=1024\times 101\). For the neural networks, we used the same architectures for both PINN and ST-PINN as for the Burgers and diffusion-reaction equations. Both models were trained for 20,000 iterations using the Adam optimizer. The relative L2 error in ST-PINN (9.63e-3) is better than that in PINN (2.45e-2). Fig. 4 shows the reference solution, prediction, and point-wise error of PINN and ST-PINN. The first two rows show the reference solution and the predictions of both networks, which are similar to each other, indicating that both can predict the physics fields accurately. The third row shows the point-wise error with respect to the reference solution for both networks. In PINN, the point-wise error at the bottom (\(x=0\)) and along the diagonal (\(x=0.5t\)) shows a significant deviation. This phenomenon is much less pronounced in ST-PINN. Thus, ST-PINN captures the physical details better. ### _Shallow-Water Equations_ The two-dimensional shallow-water equations are expressed as (16) - (18): \[\partial_{t}h+\partial_{x}hu+\partial_{y}hv=0 \tag{16}\] \[\partial_{t}hu+\partial_{x}\left(u^{2}h+\frac{1}{2}g_{r}h^{2}\right)=-g_{r}h\partial_{x}b \tag{17}\] \[\partial_{t}hv+\partial_{y}\left(v^{2}h+\frac{1}{2}g_{r}h^{2}\right)=-g_{r}h\partial_{y}b \tag{18}\] where \(u\) and \(v\) denote the velocities in the horizontal and vertical directions, \(h\) describes the water depth (the main prediction target in this problem), and \(g_{r}=1.0\) is the gravitational acceleration. These equations are derived from the general Navier-Stokes (N-S) equations and are widely used in modeling free-surface flow problems. The dataset presented in PDEBench is a 2D radial dam-break scenario, which describes the evolution of a circular bump of initialized water height in the center of a square domain. The initial condition for \(h\) is given by: \[h=\begin{cases}2.0,&\text{for}\;\sqrt{x^{2}+y^{2}}<r\\ 1.0,&\text{for}\;\sqrt{x^{2}+y^{2}}\geq r\end{cases} \tag{19}\] where \(r=0.1\). The training spatial dimension is \(\Omega=[-2.5,2.5]\times[-2.5,2.5]\), while the temporal dimension is \(T=[0,1]\). The dataset is discretized into \(N_{x}\times N_{y}\times N_{t}=128\times 128\times 101\). Fig. 3: The variation of the loss function.
Fig. 2: The prediction of PINN and ST-PINN at three different times (t=0.3, t=0.6, and t=1). Unlike the one-dimensional case studies introduced above, the two-dimensional problem is more complex. Therefore, we use a deeper network with more neural cells for solving these equations. The numbers of boundary and initial points are also increased: 1000 boundary points, 5000 initial points, and an additional 1000 intra-domain labeled data points are used to train both networks. Both networks are trained using the Adam optimizer for 30k iterations. The learning rate is set to 5e-3 for the first 10k iterations, 1e-3 for the second 10k iterations, and 5e-4 for the last 10k iterations. One significant difference in this case study is that the dataset only provides labeled data for \(h\), while labels for \(u\) and \(v\) are unavailable. However, the shallow-water equations contain physical information about \(u\) and \(v\). Therefore, ST-PINN can predict them and include these reference-free points in the self-training process. Fig. 5 displays the prediction of ST-PINN and the corresponding reference. By incorporating this unlabeled physics data, ST-PINN can accurately forecast the 2D physics fields. The L2 error of \(h\) in ST-PINN (1.14e-2) is better than that of PINN (1.50e-2). ### _Compressible Navier-Stokes Equations_ The two-dimensional compressible Navier-Stokes (N-S) equations are expressed as (20) - (22): \[\partial_{t}\rho+\nabla\cdot(\rho\mathbf{v})=0 \tag{20}\] \[\rho(\partial_{t}\mathbf{v}+\mathbf{v}\cdot\nabla\mathbf{v})=-\nabla p+\eta\Delta\mathbf{v}+(\zeta+\eta/3)\nabla(\nabla\cdot\mathbf{v}) \tag{21}\] \[\partial_{t}\left[\epsilon+\frac{\rho v^{2}}{2}\right]+\nabla\cdot\left[\left(\epsilon+p+\frac{\rho v^{2}}{2}\right)\mathbf{v}-\mathbf{v}\cdot\sigma^{\prime}\right]=0 \tag{22}\] In the compressible Navier-Stokes (N-S) equations, \(\rho\) represents the mass density, \(\mathbf{v}\) denotes the velocity, and \(p\) expresses the gas pressure. The internal energy \(\epsilon\) is given by \(\epsilon=p/(\Gamma-1)\), where \(\Gamma=5/3\) is the specific heat ratio. The Mach number is set to 0.1, and the viscous stress tensor \(\sigma^{\prime}\) accounts for the shear and bulk viscosity, with values of \(\eta=0.1\) and \(\zeta=0.1\), respectively. These equations are subject to periodic boundary conditions and random initial conditions. The N-S equations govern the mechanics of viscous fluid flow, making them a rigorous test of the network's ability to understand and represent complex physical information. The dataset used in this problem is discretized into \(N_{x}\times N_{y}\times N_{t}=128\times 128\times 21\), with a training spatial dimension of \(\Omega=[-2.5,2.5]\times[-2.5,2.5]\) and a temporal dimension of \(T=[0,1]\). The input to the network is the coordinates \((x,y,t)\), and the output is the physical fields \((\rho,u,v,p)\) at those coordinates. The network architecture consists of eight layers with 64 cells in each layer, similar to the network used for the shallow-water equations. Note that the scales of \(\rho\) and \(p\) are greater than those of \(u\) and \(v\). We therefore use the mean squared error (MSE) to evaluate \(u\) and \(v\) and the L2 error to evaluate \(\rho\) and \(p\). Tab. I shows the final results. Overall, the performance of ST-PINN is better than that of PINN in terms of \(\rho\), \(u\), and \(v\), demonstrating the network's ability to handle complex problems.
Fig. 4: The reference solution, prediction, and point-wise error of PINN and ST-PINN. Fig. 5: The reference solution and prediction of ST-PINN. ## V Conclusion We propose ST-PINN, which incorporates a self-training mechanism into physics-informed learning. The core of ST-PINN is to embed the residual of pseudo points into the loss function, thus extending the physical information by learning directly from the residual. ST-PINN selects the most confident sample points as pseudo points using the governing physics equation, and it has three hyperparameters that control the number of pseudo points and the frequency of pseudo-label generation. These hyperparameters stabilize the training process and balance accuracy and efficiency. To the best of our knowledge, this is the first research to apply a semi-supervised learning method to physics-informed learning. We conducted experiments on five PDEs in various scenarios and applications. Compared to the original PINN, the proposed network improved accuracy by about 1.33\(\times\)-2.54\(\times\). These results demonstrate that the self-training mechanism can benefit network convergence and improve prediction accuracy. In the future, we plan to optimize the network architecture to enhance performance and generalization ability. Investigating the relevant mathematical foundations of ST-PINN is also a future focus of our research.
2305.02954
Quantifying the magnetic interactions governing chiral spin textures using deep neural networks
The interplay of magnetic interactions in chiral multilayer films gives rise to nanoscale topological spin textures, which form attractive elements for next-generation computing. Quantifying these interactions requires several specialized, time-consuming, and resource-intensive experimental techniques. Imaging of ambient domain configurations presents a promising avenue for high-throughput extraction of the parent magnetic interactions. Here we present a machine learning-based approach to determine the key interactions -- symmetric exchange, chiral exchange, and anisotropy -- governing chiral domain phenomenology in multilayers. Our convolutional neural network model, trained and validated on over 10,000 domain images, achieved $R^2 > 0.85$ in predicting the parameters and independently learned physical interdependencies between them. When applied to microscopy data acquired across samples, our model-predicted parameter trends are consistent with independent experimental measurements. These results establish ML-driven techniques as valuable, high-throughput complements to conventional determination of magnetic interactions, and serve to accelerate materials and device development for nanoscale electronics.
Jian Feng Kong, Yuhua Ren, M. S. Nicholas Tey, Pin Ho, Khoong Hong Khoo, Xiaoye Chen, Anjan Soumyanarayanan
2023-05-04T15:57:37Z
http://arxiv.org/abs/2305.02954v1
# Quantifying the magnetic interactions governing chiral spin textures ###### Abstract The interplay of magnetic interactions in chiral multilayer films gives rise to nanoscale topological spin textures, which form attractive elements for next-generation computing. Quantifying these interactions requires several specialized, time-consuming, and resource-intensive experimental techniques. Imaging of ambient domain configurations presents a promising avenue for high-throughput extraction of the parent magnetic interactions. Here we present a machine learning-based approach to determine the key interactions -- symmetric exchange, chiral exchange, and anisotropy -- governing chiral domain phenomenology in multilayers. Our convolutional neural network model, trained and validated on over 10,000 domain images, achieved \(R^{2}>0.85\) in predicting the parameters and independently learned physical interdependencies between them. When applied to microscopy data acquired across samples, our model-predicted parameter trends are consistent with independent experimental measurements. These results establish ML-driven techniques as valuable, high-throughput complements to conventional determination of magnetic interactions, and serve to accelerate materials and device development for nanoscale electronics. ## 1 Introduction The advent of chirality in magnetic thin films has created a vast zoo of nanometre-scale spin textures, including spin spirals, stripes, skyrmions, and beyond [1, 2, 3, 4]. Their ambient stability, electrical malleability, and material compatibility with established fabrication techniques have elicited growing technological interest [2, 3, 5], especially as elements for sustainable computing architectures [6, 7, 8]. The formation of these spin textures, as well as their critical static and dynamic attributes, is governed by the interplay of three key magnetic interactions [9, 10, 11]. The exchange stiffness, \(A\), characterizes uniform ferromagnetic order; the effective anisotropy, \(K_{\rm eff}\), describes its preferred orientation; and the interfacial Dzyaloshinskii-Moriya interaction (DMI, \(D\)) determines the extent of chirality [9]. Quantifying these interactions is imperative both for elucidating the behaviour of spin textures and for designing functional materials and devices to exploit their properties [5]. The experimental determination of these interactions typically involves several independent measurements. Firstly, \(K_{\rm eff}\) can be obtained by combining magnetometry measurements across in-plane (IP) and out-of-plane (OP) sample orientations [12]. Next, as \(A\) and \(D\) relate to the symmetric and anti-symmetric components of spin-wave dispersion respectively, both can in principle be independently extracted via wavevector-resolved Brillouin light scattering (BLS) spectroscopy [13, 14]. However, this approach is challenging for several reasons: (a) BLS is a specialized, resource-intensive technique; (b) its evaluation has implicit dependencies on two additional techniques; and (c) the measured \(A\) can be ambiguous in films with sizable DMI or dipolar effects [15]. Alternatively, if \(A\) is known, \(D\) can be extracted from the measured asymmetry in domain wall (DW) propagation, driven by external electric or magnetic fields, albeit with several inter-dependencies [13, 16, 17].
However, while \(A\) can be estimated reliably for thicker films using the Bloch \(T^{3/2}\) law for temperature-dependent magnetization [18, 19], this method does not work well in the ultrathin (1 nm) limit relevant to chiral multilayers [15]. Overall, given the considerable challenges and complexities of direct experimental determination, a high-throughput technique enabling one-stop-shop estimation of all three parameters would be of immense value. Since the magnetic parameters together determine the domain morphology, the morphology should encapsulate material information. However, while the forward problem of determining the equilibrium domain configuration from the input parameters is micromagnetically well-defined [20, 21], the inverse problem of determining parent magnetic interactions from domain morphology is extremely challenging. Unsurprisingly, prior attempts have been largely limited to quantifying a single feature, e.g. domain periodicity, and thereby estimating the ratio \(D/A\) via linear regression [10, 11, 22]. Here, we address the inverse problem by developing a supervised regression model using convolutional neural networks (CNN), and train it on micromagnetically simulated images that exclude information inaccessible to common microscopy tools.
By learning physical features such as domain boundaries, the model performs well on simulated validation data (\(R^{2}>0.85\)). When tested on domain images obtained from a series of chiral multilayers, the model predicts parameter trends consistent with independent experimental estimates. This paves the way for high-throughput characterization of technologically relevant magnetic thin films.

## 2 Methodology & Training

The choice of ML model should optimize the learned relationship between input data features and desired outputs. For our purpose, a CNN model is suitable as it exploits many physical properties relevant to domain configurations [36, 37, 38], such as translation invariance and local correlations arising from short-range exchange interactions. In addition, a CNN confers several practical benefits, such as enabling variable-sized input images and providing the potential for physical interpretability. The schematic in Fig. 1(a) outlines the ML workflow used in this work. First, a large set of ground-state domain images of OP magnetization was generated by performing micromagnetic simulations with varying input parameters. Next, these images were used to train our ML model, following post-processing to remove experimentally-inaccessible information. Finally, the model performance was evaluated by testing on an independent set of simulated images, and on experimental images acquired using a commercial magnetic force microscope (MFM). An ML model requires a large volume of high-quality training data to reliably and effectively learn input-output correlations. In line with previous works, which used \(\gtrsim 10^{4}\) domain configurations for training [29, 32, 35], our CNN model was trained on a large dataset generated by micromagnetic simulations using \(\text{mumax}^{3}\) [21]. Each of the input parameters, \(A_{0}\), \(K_{\text{u},0}\) (\(K_{\text{u}}=K_{\text{eff}}+\mu_{0}M_{\text{s}}^{2}/2\), where \(M_{\text{s}}\) is the saturation magnetization), and \(D_{0}\) was varied over a wide range for a multilayer comprising four repetitions of a chiral stack under the effective medium approximation [10]. The propensity for chiral domain formation can be characterized by the stability parameter, \(\kappa\), defined as [9, 11, 24]: \[\kappa=\pi D/4\sqrt{AK_{\text{eff}}}. \tag{1}\] Within our simulations, multi-domain states form for \(\kappa\gtrsim 0.1\), and proliferate with increasing \(\kappa\) [24]. Meanwhile, the stripe width increases with \(A\), \(K_{\text{eff}}\), and \(D^{-1}\). The equilibrium magnetization configurations thus generated that exhibit multi-domain states were used for the training dataset.
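To make Eq. 1 concrete, the snippet below evaluates \(\kappa\) with explicit SI unit bookkeeping; the parameter values are illustrative stand-ins (the actual simulation ranges are given in Tbl. I of the Methods), not measured quantities.

```python
import math

def kappa(A, K_eff, D):
    """Stability parameter of Eq. 1: kappa = pi*D / (4*sqrt(A*K_eff)).
    A in J/m, K_eff in J/m^3, D in J/m^2 -> kappa is dimensionless."""
    return math.pi * D / (4.0 * math.sqrt(A * K_eff))

# Illustrative (assumed) values in the range typical of chiral multilayers:
A = 10e-12      # exchange stiffness, 10 pJ/m
K_eff = 0.3e6   # effective anisotropy, 0.3 MJ/m^3
D = 1.5e-3      # interfacial DMI, 1.5 mJ/m^2

k = kappa(A, K_eff, D)
print(f"kappa = {k:.2f}")  # ~0.68; multi-domain states form for kappa >~ 0.1
```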
To reliably predict outputs using commonly available magnetic microscopy techniques (e.g., MFM, Kerr microscopy), the ML model was trained only on the OP component, \(m_{z}\), of the normalized spatial magnetization, as exemplified in Fig. 2(a). The training data were augmented by performing rotations on the input domain images, consistent with the underlying symmetries of the problem [39]. Our model is configured to predict three independent parameters - \(A^{\prime}\), \(K^{\prime}_{\text{eff}}\), and \(D^{\prime}\) - and one dependent parameter, \(\kappa^{\prime}\). Note that \(K^{\prime}_{\text{eff}}\) is used as the output parameter (instead of \(K_{\text{u}}\)) to enable direct comparison with experiments. Meanwhile, \(\kappa^{\prime}\) is included due to its physical significance [11, 24], and to check whether the model is able to uncover the underlying constraints between the input parameters. The CNN architecture employed in our work, shown in Fig. 1(b), uses ReLU and leaky ReLU as non-linear activation functions for all intermediate layers, and includes dropout layers to reduce overfitting [40]. The four max-pooling layers downsize the input by retaining the most relevant features in each patch, and are known to be effective in training CNN models [41].

Figure 1: **Workflow and Model Architecture.** **(a)** Schematic of the training and prediction workflow. Left: the model is trained using domain images generated by micromagnetic simulations (input parameters: \(A_{0},K_{\text{u},0},D_{0}\)), post-processed to remove experimentally inaccessible information. Right: the trained model is used to predict four magnetic parameters \((A^{\prime},K^{\prime}_{\text{eff}},D^{\prime},\kappa^{\prime})\) on MFM-measured domain images. **(b)** Schematic of the CNN model architecture, consisting of 10 convolutional layers, followed by 4 fully connected layers. The final layer is fully connected, with 4 neurons corresponding to the output parameters. See Methods for full details of the final model, including convolutional filters, max pooling layers, kernels, neurons etc.

## 3 Model Performance

Allowing the model to train on as-simulated \(m_{z}\) images produced deceptively good initial validation results (\(R^{2}\geq 0.98\), SM §2), albeit with poor generalization to experimental data. This discrepancy arises from key differences in the short-lengthscale (\(<10\,\)nm) information encoded in the simulated images cf. experimental images. First, the DW width (\(\sim 5\,\)nm speckles), which relates to \(\sqrt{A/K_{\text{eff}}}\) [42, 43], is accessible within simulated images, but cannot be resolved in experiments (resolution \(\sim 30\,\)nm). Second, the thermal noise in mumax\({}^{3}\)-simulated images (\(\sim 5-10\,\)nm), found to vary monotonically with \(K_{\text{eff}}\), may also leak experimentally-inaccessible information on the magnetic parameters (SM §2). Meanwhile, the short-lengthscale noise present in typical experimental images (Fig. 5) is unrelated to magnetic properties. To enable accurate prediction for experimental images, we remove experimentally inaccessible data by judiciously pre-processing the simulated images, via median filtering and thresholding procedures (Fig. 2(b-c)). The post-processed dataset of 12,000 simulated images was divided into 3 subsets for training (80%), validation (10%), and testing (10%). Models with different hyperparameters (e.g., number of layers, filters, loss function, etc.) were individually trained, and subsequently their performance was compared based on their predictions on the validation dataset (SM §1). The best-performing model thus identified was then evaluated using the testing dataset (Fig. 3). In general, the models were found to learn more effectively with diminishing learning rates. Fig. 3(a) shows the evolution of the training and validation losses for the chosen model during the training process. Both losses are found to decrease with the number of epochs trained, and the mean absolute error (MAE) asymptotes to a small, finite value after \(\sim 400\) epochs. Crucially, the monotonic decrease of the validation loss indicates that the chosen model did not overfit the training data.
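Before turning to the test-set results, here is a minimal sketch of the pre-processing pipeline described above, using scikit-image (the Methods mention the CV2 and Scikit-Image libraries; the kernel radius follows the Methods, and the file name is hypothetical).

```python
import numpy as np
from skimage.filters import median, threshold_otsu
from skimage.morphology import disk

def preprocess(mz):
    """Strip experimentally inaccessible short-lengthscale detail from a
    simulated m_z image with values in [-1, 1] (cf. Fig. 2(b-c))."""
    img = ((mz + 1.0) * 127.5).astype(np.uint8)  # map [-1, 1] -> [0, 255]
    img = median(img, disk(8))                   # large kernel: remove thermal fuzziness
    thr = threshold_otsu(img)                    # per-image Otsu threshold
    return (img > thr).astype(np.uint8) * 255    # binary domains, DW width removed

# mz = np.load("simulated_mz.npy")  # hypothetical file of simulated m_z data
# binary = preprocess(mz)
```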
Fig. 3(b-e) summarizes the performance evaluation of the chosen model on the testing dataset. The parity plots compare the predicted values for each of the output parameters, \((A^{\prime},K^{\prime}_{\rm{eff}},D^{\prime},\kappa^{\prime})\), with the respective input values, \((A_{0},K_{\rm{eff,0}},D_{0},\kappa_{0})\), provided for the simulations. The consistently high \(R^{2}\) for all determined parameters suggests that the ML model has successfully learned the intricate relationship between the domain configurations and input parameters. As an additional test to examine the model's ability to learn implicit micromagnetic physics, we compare in Fig. 3(f) the predicted kappa, \(\kappa^{\prime}\), with that resulting from Eq. 1 via the predicted values of the other three parameters, \(\kappa^{*}=\pi D^{\prime}/4\sqrt{A^{\prime}K^{\prime}_{\rm{eff}}}\). The excellent correlation between \(\kappa^{\prime}\) and \(\kappa^{*}\) (\(R^{2}>0.98\)) suggests that while the model does not receive Eq. 1 as an explicit input, it is nevertheless able to accurately learn this implicit constraint during training, and implement it across predicted parameters.

Figure 2: **Simulated Images and Processing.** **(a)** Sample images (grayscale: \(m_{z}\in[-1,1]\)) generated by micromagnetic simulations with varying input parameters (\(A_{0}\) in pJ/m, \(D_{0}\) in mJ/m\({}^{2}\), \(K_{\rm{eff}}\) in MJ/m\({}^{3}\)). **(b)** Post-processing of simulated images for model training. Left image: as-simulated from micromagnetics, including experimentally inaccessible domain wall (DW) and thermal field (fuzziness) information related to magnetic parameters. Middle image: output of median filtering, which removes fuzziness within domains. Right image: output of Otsu thresholding, wherein DW width information is removed. **(c)** Comparison of a representative line cut (dashed yellow line) across the three images in (b), showing the effects of the post-processing steps.

Figure 3: **Model Validation and Performance Testing.** **(a)** Model loss during training and validation, quantified via mean absolute error (MAE) across training epochs. **(b-e)** Parity plots for the trained model on the test dataset that compare input (\(x\)-axis) and predicted (\(y\)-axis) values for (b) \(A\), (c) \(K_{\rm{eff}}\), (d) \(D\), and (e) \(\kappa\). The parity line (\(y=x\)) is shown for reference. **(f)** Parity plot for the predicted \(\kappa^{\prime}\) against \(\kappa^{*}=\pi D^{\prime}/4\sqrt{A^{\prime}K^{\prime}_{\rm{eff}}}\) (Eq. 1) calculated from the individually predicted parameters.

To visualize the functioning of the CNN, we examine the intermediate outputs of the convolutional filters - known as feature maps (Fig. 4(a)) - as applied to the input. In practice, the CNN uses 1664 filters, and we display in Fig. 4(b-d) a few representative feature maps that exemplify the pattern evolution across the neural network. While the outputs of the initial layers preserve the low-level spatial configurations from the input image, the deeper layers get progressively more abstract. For example, the feature map for the 2nd layer (Fig. 4(b)) clearly shows the DWs delineating regions of opposite magnetization. Moving to the 5th layer, the domains are still visible, but the orientation of magnetization is less clearly distinguished. Finally, by the 8th layer, individual domains cannot be visually identified, and the observed speckles are abstract features used by the model to reach its final decision.
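Feature maps like those in Fig. 4 can be captured from any convolutional model with forward hooks; the sketch below uses PyTorch purely for illustration (the paper does not state its framework), with `model` and `image` as assumed inputs.

```python
import torch
import torch.nn as nn

def feature_maps(model, image, layer_indices):
    """Record activations of selected Conv2d layers during one forward pass."""
    maps, hooks = {}, []
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    for i in layer_indices:
        hooks.append(convs[i].register_forward_hook(
            lambda _mod, _inp, out, i=i: maps.update({i: out.detach()})))
    with torch.no_grad():
        model(image)            # image: (1, 1, H, W) m_z tensor
    for h in hooks:
        h.remove()              # clean up so later passes are unaffected
    return maps                 # e.g. maps[1] -> 2nd conv layer's feature maps
```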
Such behavior is consistent with the known progression of feature abstraction in deep neural networks [38].

## 4 Experimental Predictions

Having comprehensively tested the trained model with simulated data, we proceed to evaluate its performance on experimentally acquired microscopic images from chiral multilayer films. We use an established multilayer stack - Ir(1)/Fe(\(x\))/Co(\(y\))/Pt (Fig. 5(a)) - wherein key magnetic parameters can be tuned by varying the Fe(\(x\))/Co(\(y\)) thicknesses, while keeping their total thickness constant (1 nm) [11, 24]. The three samples studied in this work comprise four repetitions of stacks with varying Fe(\(x\))/Co(\(y\)) compositions. The phenomenology and energetics of the nanoscale equilibrium domain configurations (\(\sim 100-300\) nm, Fig. 5(b)) in such DMI-dominated stacks (\(D>1\) mJ/m\({}^{2}\)) are qualitatively distinct from the micron-scale domain configurations examined in prior ML works [32, 33, 35]. To image their domain morphology, we employ MFM, a commonly accessible, ambient, high-spatial-resolution (\(\sim 30\) nm) imaging technique, wherein the measured contrast is proportional to the domain stray field gradient. Close examination of a representative MFM image (Fig. 5(c): left) reveals speckle noise and markedly reduced contrast cf. simulated images. Therefore, the MFM images are pre-processed following similar median filtering and thresholding procedures as used for the simulated images (Fig. 2(b)). Post-processed MFM images (Fig. 5(c): right) exhibit domain configurations similar to the simulated images, which is further confirmed by corresponding MFM simulations (SM §3). Fig. 6 compares the parameters predicted by the ML model from experimentally acquired MFM images of the samples (black) with independently measured values (red). The latter set of values is obtained using a combination of magnetometry, BLS spectroscopy, and microscopy experiments, and is consistent with published results on these Fe(\(x\))/Co(\(y\)) sample compositions [11, 15, 24, 44]. The error bars for the values predicted by the ML model represent the variance of five identical models trained using different initialization seeds (five-fold cross validation).
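The seed-ensemble error bars just described amount to a few lines of NumPy; in this sketch `models` stands for the five retrained networks (hypothetical objects returning the four predicted parameters), not the authors' actual code.

```python
import numpy as np

def ensemble_stats(models, image):
    """Mean prediction and seed-to-seed spread over identically configured
    models trained with different initialization seeds; the spread serves
    as the error bar on (A', K_eff', D', kappa')."""
    preds = np.stack([m(image) for m in models])  # shape: (n_models, 4)
    return preds.mean(axis=0), preds.std(axis=0)

# mean, err = ensemble_stats(trained_models, mfm_image)  # hypothetical inputs
```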
Figure 4: **Feature Maps.** **(a)** Schematic of the convolution operation of the CNN, and its correspondence to a feature map. **(b-d)** Representative feature maps for the (b) 2nd (right inset: zoom-in), (c) 5th, and (d) 8th convolutional layers, with chosen filter numbers indicated. Color represents activation strength.

Figure 5: **Imaged Multilayer Samples.** **(a)** Stack structure of Ir/Fe(\(x\))/Co(\(y\))/Pt samples. Varying the Fe(\(x\))/Co(\(y\)) thicknesses tunes the magnetic parameters. **(b)** MFM-measured zero-field domain image of the Ir/Fe(2)/Co(8)/Pt sample. **(c)** Processing of MFM-measured domain images for ML model testing. Left: measured MFM image (from (b): highlighted rectangle); centre: after median filtering; and right: after Otsu thresholding (see Methods).

Overall, the ML-predicted and measured parameters exhibit qualitatively consistent trends. With increasing Fe(\(x\))/Co(\(y\)) ratio (left to right), both measured and predicted values of \(A\) and \(K_{\text{eff}}\) decrease, \(D\) remains relatively constant, and the resulting \(\kappa\) increases. On the quantitative aspect, we observe excellent agreement for \(A\). However, the measured and predicted values for \(K_{\text{eff}}\), \(D\), and \(\kappa\) exhibit varying extents of discrepancy. Such quantitative discrepancy may arise from predictive limitations of the model, or from differences between the training and testing datasets used in this work. To distinguish these sources, we tested the model on images simulated using the experimentally measured sample parameters (Fig. 6: red) as inputs. As seen in Fig. 6 (blue cf. red), all model-predicted parameters from these images exhibit excellent quantitative correspondence with the measured parameters, confirming the self-consistency of the model and the reliability of the training procedure. This suggests that the likely source of the discrepancy is limitations inherent to the simulated dataset, which, while visually similar to the experimental MFM images, exhibits several observable differences (SM §4). Such differences are expected to arise from a combination of several factors unaccounted for within the simulations, including (a) extrinsic factors, such as the polycrystalline grain structure of the sputtered films, which results in granularity of the magnetic interactions, as well as defects and pinning; (b) intrinsic factors, e.g., interlayer coupling, proximity-induced effects, higher-order anisotropy, etc.; as well as (c) differences in recipes between simulations and measurements. Overall, these results establish the feasibility of ML-driven characterization of the key parameters in chiral magnetic films using microscopic images. The key bottleneck to full quantitative consistency between ML-predicted and experimentally measured magnetic parameters is the need for a high-fidelity, large-volume training dataset of simulated domain images sufficiently representative of experimental images. Future simulated datasets generated for training can incorporate granularity to mimic spatial variations in magnetic properties [45, 46], interlayer exchange coupling effects [47], and field recipes [24] to better emulate experimentally measured images. Meanwhile, ML-based techniques can serve as able complements to experimental measurements, especially in circumventing highly resource-intensive determination of magnetic parameters [15].

## 5 Conclusion

In conclusion, we have built an ML model to estimate the key magnetic parameters governing nanoscale chiral domain phenomenology in multilayers using zero-field MFM images. The CNN-based model was trained and optimized using micromagnetically simulated domain images, processed to remove experimentally inaccessible information. It achieved prediction \(R^{2}>0.85\) for all parameters of interest - \(A\), \(K_{\text{eff}}\), \(D\), and \(\kappa\) - without overfitting the data, and accurately deduced the relationship connecting \(\kappa\) to the other three parameters. When used on MFM-measured domain images, the trained model gives reasonable predictions for the evolution of the parameters across the studied samples. Full quantitative consistency for all magnetic parameters can be achieved by enhancing the simulated training dataset to additionally incorporate the extrinsic and intrinsic factors affecting imaged domain configurations, enabling rapid all-ML characterization of chiral multilayers. Our work advances ML-based parameter prediction for multilayer films from individual determination on microscale domain configurations [32, 33, 34, 35] to collective, "all-in-one" parameter determination in the chirality-dominated regime of functional nanoscale spin textures.
The demonstrated quantitative consistency in predicting \(A\) is particularly remarkable given its large variability across existing techniques, and the resource-intensive requirements of reliable BLS characterization [15]. Therefore, our model can already be used as a valuable complement to experiments for quantifying \(A\), as well as a quick, inexpensive "first-pass" estimation technique for all parameters. Our workflow can be straightforwardly extended to complementary magnetic imaging techniques from the micron-scale (Kerr) to the atomic-scale (electrons, X-rays). With appropriate incorporation of complementary data, it can also be generalized to extract other relevant parameters of interest, such as interlayer coupling, damping, spin Hall angle etc. By providing the ability to unlock information stored in domain configurations, our work paves the way for sustainable, high-throughput development of ultrathin magnetic films and devices for next-generation electronics.

Figure 6: **Measured & Predicted Parameter Trends.** Comparison of magnetic parameters - (a) \(A\), (b) \(K_{\text{eff}}\), (c) \(D\), and (d) \(\kappa\) - for three Ir/Fe(\(x\))/Co(\(y\))/Pt samples. Plots show parameters obtained from experimental measurements (red), ML model prediction on measured MFM images (black), and ML model prediction on simulated images using measured parameters as inputs (blue).

## 6 Methods

**Micromagnetic Simulations** of a four-repeat stack were performed using the GPU-based package mumax\({}^{3}\) [21] under grain-free conditions to generate a large set of realistic domain images to train, validate, and test the CNN model. The effective medium approximation, which treats all layers within each stack repetition as a single effective magnetic layer [10], was used to reduce the overall simulation time. Simulations were carried out on a finite-difference lattice of \(1024\times 1024\times 4\) cells, which for a cell size of \(2\times 2\times 3\,\mathrm{nm}^{3}\) corresponds to a physical sample size of \(\sim 2\,\mu\mathrm{m}\times 2\,\mu\mathrm{m}\times 12\,\mathrm{nm}\). The ranges of simulation parameters were chosen to cover the values of experimentally accessible stacks [11, 24], and are given in Tbl. I. Within these bounds, the simulation parameters were uniformly sampled using a grid. Note that when \(A\) or \(K_{\mathrm{eff}}\) is too large, or \(D\) is too small, the equilibrium domain configuration is a uniformly magnetized state. Such parameter combinations, which do not produce multi-domain magnetization configurations at zero field (ZF), were excluded from the image dataset. Finally, for each set of parameters, the magnetization image used for analysis was extracted from the second of the four simulated layers, after verifying that the magnetization is approximately layer-independent across the used parameters.
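The grid sampling and exclusion of uniform states can be expressed compactly; the ranges below are hypothetical placeholders (Tbl. I is not reproduced here), and the \(\kappa\gtrsim 0.1\) cut stands in for running the actual micromagnetic relaxation.

```python
import itertools
import math
import numpy as np

# Hypothetical parameter ranges; the actual bounds are listed in Tbl. I.
A_vals = np.linspace(5e-12, 20e-12, 8)   # exchange stiffness (J/m)
K_vals = np.linspace(0.1e6, 1.0e6, 8)    # effective anisotropy (J/m^3)
D_vals = np.linspace(0.5e-3, 2.5e-3, 8)  # interfacial DMI (J/m^2)

# Keep only combinations expected to yield multi-domain states (kappa >~ 0.1);
# in practice each surviving point is then relaxed in mumax3.
grid = [(A, K, D)
        for A, K, D in itertools.product(A_vals, K_vals, D_vals)
        if math.pi * D / (4 * math.sqrt(A * K)) > 0.1]
print(f"{len(grid)} of {8**3} parameter combinations retained")
```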
**Image Processing.** Standard routines from the CV2 and Scikit-Image Python libraries were used to pre-process our domain images before training the model. First, a median filter with a large kernel (radius: 8 pixels) was used to remove isolated salt-and-pepper noise. Next, the smoothness of the domain boundary was considered, which, for MFM-measured images, is sensitive to the imaging conditions. Since the smoothness is largely insensitive to the intrinsic magnetic properties, one would not expect it to measurably affect the parameter predictions. However, during training, the ML models were found to be sensitive to the domain boundary curvature. Therefore, image processing was used to ensure that the domain boundaries were consistently smooth across both simulated and experimental images. Successive median filters with a small kernel (radius: 3 pixels) were applied to gradually smooth the domain boundaries along their length. As the last step, the image was filtered using a binary threshold, converting it from grayscale into black-and-white, eliminating any speckle noise and DW information. The resulting DWs are sharp, and the magnetization within a domain is effectively uniform. The MFM-measured images were found to be more sensitive to the choice of threshold than the simulated images. To ensure accurate and reproducible thresholding, the Otsu method [48] was employed to determine the optimal threshold for each image (\(\sim 126-127\) on the \(0-255\) scale).

**Model Architecture & Training.** Our optimized CNN model, determined following several rounds of hyperparameter optimization, consists of 10 convolutional layers followed by 4 fully connected layers. All convolution filters are \(3\times 3\) in size with strides of 1. Max pooling is applied after the 2nd, 4th, 7th, and 10th convolution layers, with kernels of size \(2\times 2\) and strides of 1. The number of convolutional filters doubles after each pooling layer. The subsequent 3 fully connected (dense) layers all have an equal width of 256 neurons and are regularized with a small dropout rate. The last layer is the output layer: a fully connected layer with 4 neurons corresponding to the 4 magnetic parameters of interest. Leaky ReLU with a small slope of 0.2 was used between the fully connected layers. Dropout layers are included to reduce overfitting by randomly removing some nodes (and all their incoming and outgoing connections) during training [40]. The model was trained for 400 epochs, with the mean absolute error (MAE) as the loss function. 9,600 images were used for training, while the remaining 2,400 images were used for validation and testing. A learning rate scheduler was used during training, such that the learning rate started large and decreased gradually as the training progressed. This allows the ML model to learn quickly at the start, while being able to fine-tune its weights as the learning plateaus.

**MFM Imaging** of the multilayer thin films was performed using a Veeco Dimension 3100 scanning probe microscope. SSS-MFMR tips with a sharp tip profile (diameter: \(30\,\mathrm{nm}\)) and ultra-low magnetization (Co-alloy coating, \(80\,\mathrm{emu}/\mathrm{cm}^{3}\)) were used for high-resolution imaging with minimal stray-field perturbations. All MFM images were acquired at a tip-sample lift height of \(20\,\mathrm{nm}\), with an image resolution of \(2048\times 2048\) pixels for \(4\times 4\,\mu\mathrm{m}^{2}\) scan dimensions. The multilayers were imaged at zero field, following _ex situ_ out-of-plane saturation at \(-400\) mT.

We acknowledge helpful inputs from Hang Khume Tan, Constantin C. Chirila, and Nathaniel Ng, as well as the support of the National Supercomputing Centre (NSCC), Singapore, for computational resources. This work was supported by the SpOT-LITE program (A*STAR Grant No. A18A6b0057), funded by Singapore's RIE2020 initiatives.
2301.05266
Improving Reliability of Spiking Neural Networks through Fault Aware Threshold Voltage Optimization
Spiking neural networks have made breakthroughs in computer vision by lending themselves to neuromorphic hardware. However, the neuromorphic hardware lacks parallelism and hence, limits the throughput and hardware acceleration of SNNs on edge devices. To address this problem, many systolic-array SNN accelerators (systolicSNNs) have been proposed recently, but their reliability is still a major concern. In this paper, we first extensively analyze the impact of permanent faults on the SystolicSNNs. Then, we present a novel fault mitigation method, i.e., fault-aware threshold voltage optimization in retraining (FalVolt). FalVolt optimizes the threshold voltage for each layer in retraining to achieve the classification accuracy close to the baseline in the presence of faults. To demonstrate the effectiveness of our proposed mitigation, we classify both static (i.e., MNIST) and neuromorphic datasets (i.e., N-MNIST and DVS Gesture) on a 256x256 systolicSNN with stuck-at faults. We empirically show that the classification accuracy of a systolicSNN drops significantly even at extremely low fault rates (as low as 0.012\%). Our proposed FalVolt mitigation method improves the performance of systolicSNNs by enabling them to operate at fault rates of up to 60\%, with a negligible drop in classification accuracy (as low as 0.1\%). Our results show that FalVolt is 2x faster compared to other state-of-the-art techniques common in artificial neural networks (ANNs), such as fault-aware pruning and retraining without threshold voltage optimization.
Ayesha Siddique, Khaza Anuarul Hoque
2023-01-12T19:30:21Z
http://arxiv.org/abs/2301.05266v1
# Improving Reliability of Spiking Neural Networks through Fault Aware Threshold Voltage Optimization ###### Abstract Spiking neural networks have made breakthroughs in computer vision by lending themselves to neuromorphic hardware. However, neuromorphic hardware lacks parallelism and hence limits the throughput and hardware acceleration of SNNs on edge devices. To address this problem, many systolic-array SNN accelerators (systolicSNNs) have been proposed recently, but their reliability is still a major concern. In this paper, we first extensively analyze the impact of permanent faults on systolicSNNs. Then, we present a novel fault mitigation method, i.e., fault-aware threshold voltage optimization in retraining (FalVolt). FalVolt optimizes the threshold voltage for each layer in retraining to achieve a classification accuracy close to the baseline in the presence of faults. To demonstrate the effectiveness of our proposed mitigation, we classify both static (i.e., MNIST) and neuromorphic datasets (i.e., N-MNIST and DVS Gesture) on a 256x256 systolicSNN with stuck-at faults. We empirically show that the classification accuracy of a systolicSNN drops significantly even at extremely low fault rates (as low as 0.012%). Our proposed FalVolt mitigation method improves the performance of systolicSNNs by enabling them to operate at fault rates of up to 60%, with a negligible drop in classification accuracy (as low as 0.1%). Our results show that FalVolt is 2x faster compared to other state-of-the-art techniques common in artificial neural networks (ANNs), such as fault-aware pruning and retraining without threshold voltage optimization. Spiking neural networks, Stuck-at faults, Systolic array, Fault mitigation. ## I Introduction Spiking neural networks (SNNs) are a promising third generation of neural networks that ensure high algorithmic performance at low power. Their hardware acceleration requires specialized architectures such as SpiNNaker [1] and TrueNorth [2]. However, these architectures lack parallelism in each core and efficient dataflows for maximizing the reuse of weight data. This limits their achievable throughput and robustness in resource-constrained devices (e.g., battery-driven autonomous cars). Towards this, leveraging SNNs on massively parallel hardware accelerators such as systolic arrays has proven to be an efficient solution [3, 4, 5, 6, 7]. Systolic-array SNN accelerators (systolicSNNs) are inspired by other state-of-the-art hardware accelerators [8] which support fully parallel execution of artificial neural networks (ANNs). These accelerators have an \(N\)x\(N\) dense grid of interconnected processing elements (PEs), which allows efficient parallel processing with high spatio-temporal locality. Unlike ANNs, SNNs and their hardware accelerators are still in a relatively early phase of adoption [9], and thus ensuring the reliability of systolicSNNs is still considered a major research challenge. The systolicSNN hardware chips are manufactured using nanometer CMOS technologies [10], which require a highly sophisticated manufacturing process. Imperfections in this process result in various manufacturing defects ranging from process variations to permanent faults such as stuck-at faults. Stuck-at faults affect the output of systolicSNNs in every execution cycle and hence lead to significant accuracy loss, as discussed in this paper. Furthermore, the impact of large-scale failures such as dead synapse faults in SNNs has been thoroughly investigated [11, 12].
However, analyzing such failures in hardware requires a fault model at a higher level of abstraction to make the simulation tractable. Guo et al. investigated the fault resilience of SNNs trained with different coding schemes by using a synaptic stuck-at fault model [13]. El-Sayed et al. analyzed the effect of these faults in a transistor-level design of a leaky-integrate-and-fire (LIF) neuron [14]. Other state-of-the-art works focus on bit flips in weight memories [15, 16, 17, 18]. Conversely, the impact of stuck-at faults on systolicSNNs has not been investigated. Stuck-at faults are usually detected using post-fabrication testing, and the faulty manufactured chips are discarded. However, if a high number of manufactured chips are faulty, discarding them reduces the yield to a large extent. A potential solution is employing redundant executions (re-execution) to ensure correct outputs, but this leads to significant latency and energy overheads [17]. In the current resource-constrained nanoscale hardware paradigm, where the number of PEs has drastically increased to meet the robustness requirements of end users, it is imperative to maximize the yield with an efficient and fault-tolerant systolicSNN. Recently, Mehul et al. proposed an astrocyte self-repair mechanism for stuck-at-0 weights in SNNs [19]. Other works are either focused on mitigating transient faults in SNNs [16, 19] or contemplate permanent fault mitigation in ANN accelerators [20, 21, 22, 23]. However, a considerable research gap exists in mitigating the impact of permanent faults in systolicSNNs.

_Novel contributions:_ In this paper, we present an extensive stuck-at fault vulnerability analysis and a novel fault mitigation method, i.e., _fault-aware retraining through threshold voltage optimization (FalVolt)_. FalVolt first sets the weights mapped to faulty PEs to zero and then retrains the weights mapped to non-faulty PEs while optimizing the threshold voltage for each layer to restore the classification accuracy close to its baseline. The optimized threshold voltage differs from the actual threshold voltage used in initial training. To demonstrate the effectiveness of our proposed FalVolt mitigation method, we used both the static MNIST [24] and the neuromorphic N-MNIST [25] and DVS128 Gesture [26] datasets. Our results show that FalVolt can operate at high fault rates of up to 60% with a negligible impact on the classification accuracy compared to its baseline. We empirically show that FalVolt takes 2x fewer retraining epochs, and thus it is 2x faster in restoring the baseline accuracy compared to other state-of-the-art techniques such as fault-aware pruning and retraining. Note, fault-aware pruning and retraining, and threshold voltage optimization, have been conventionally used for ANN fault mitigation [21, 22] and faster SNN convergence, respectively. However, to the best of our knowledge, this is the first work to employ fault-aware threshold voltage optimization for fault mitigation in SNNs.

The remainder of this paper is structured as follows: Section II provides preliminary information about SNNs and systolicSNNs. Section III and Section IV present a motivational case study and the proposed FalVolt mitigation method for systolicSNNs, respectively. Section V discusses the results of the fault vulnerability and mitigation analysis. Finally, Section VI concludes the paper.

## II Background

This section provides a brief overview of state-of-the-art SNNs and systolicSNNs for better understanding.
**Spiking Neural Networks**: SNNs are bio-inspired artificial neural networks. Their working principle can be explained with a standard LIF model as follows: when the membrane potential \(V_{t}\) of a presynaptic neuron exceeds a specific threshold voltage at time \(t\), a post-synaptic spike is fired, and then \(V_{t}\) relaxes to the resting state (\(V_{rest}<\) threshold voltage) with a time constant \(\tau\). \(V_{t}\) maintains the resting state for a refractory time \(t_{ref}\) before responding to the received spikes. LIF-based SNNs learn the presynaptic weights but require manual tuning of the time constant in training. Furthermore, the time constant is typically chosen to be the same for all neurons, which limits the diversity of neurons and, thus, the expressiveness of LIF-based SNNs. Recently, Fang et al. proposed to train the weights along with the time constant through an advanced LIF model, i.e., the parametric leaky integrate-and-fire (PLIF) model [27]. Incorporating learnable time constants through PLIF-based SNNs makes the network less sensitive to initial values and reduces the training time.

**Systolic-Array SNN Accelerators:** SystolicSNNs exploit spatial and temporal parallelism: binary spike inputs (logical 1 or 0) propagate vertically across the systolic array. As shown in Fig. 1, the spike input is first divided into multiple time steps, and then all input values in a time step are mapped onto one row of the systolic array. The input binary spikes pass through a dense \(N\)x\(N\) grid of interconnected PEs in a clocked, synchronized manner. The filter data is mapped and pre-stored in the PEs. Fig. 3(a) shows the design of a standard PE in systolicSNNs. The PE accumulates 32-bit weight inputs under 1-bit binary spikes on an enable signal. The adder needed for the accumulation operation in systolicSNNs is cheaper than the multiplier needed for the multiply-and-accumulate (MAC) unit in systolic-array ANN accelerators [4, 28]. The lack of multipliers renders systolicSNNs energy-efficient in comparison to systolic-array ANN accelerators. The PEs also employ an addition/subtraction selection unit for processing signed weights. Furthermore, an internal counter helps in counting the number of spikes in the inference phase.
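To make the LIF dynamics concrete, here is a toy discrete-time implementation; the constants are illustrative, and in the PLIF variant described above the time constant \(\tau\) would be a learned parameter rather than a fixed one.

```python
import numpy as np

def lif_step(v, spikes_in, w, v_th=1.0, v_rest=0.0, tau=2.0):
    """One discrete-time step of leaky integrate-and-fire neurons driven by
    binary input spikes: leak towards v_rest, integrate weighted spikes,
    fire and reset when the membrane potential crosses v_th."""
    v = v + (v_rest - v) / tau + w @ spikes_in
    fired = v >= v_th
    v = np.where(fired, v_rest, v)  # reset fired neurons to rest
    return v, fired.astype(np.uint8)

v = np.zeros(3)                     # 3 neurons, starting at rest
w = np.full((3, 4), 0.6)            # weights from 4 binary inputs
for t in range(3):
    v, out = lif_step(v, np.array([1, 0, 1, 1]), w)
    print(t, v, out)                # all neurons cross v_th=1.0 and fire
```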
## III Motivational Case Study

To motivate the proposed FalVolt mitigation method, we begin by empirically analyzing the impact of different threshold voltages on the classification accuracy of a faulty systolicSNN. To do so, we first train a PLIF-SNN on the MNIST and DVS128 Gesture datasets. Then, we inject stuck-at faults using different fault maps for 30% and 60% of the PEs in a 256x256 systolicSNN. Next, we run parallel retraining simulations with different threshold voltages. As shown in Fig. 2, we observe that changing the threshold voltage from 1.0 to 0.55 and 0.7 in retraining leads to 99% classification accuracy on the MNIST dataset even when 30% and 60% of the PEs are faulty in a systolicSNN, respectively. However, retraining the same model with threshold voltages of 0.45 and 0.5 leads to almost 73% and 60% accuracy loss when 30% and 60% of the PEs are faulty, respectively. In addition, threshold voltages of 0.45 and 0.7 are most suitable for classifying the DVS128 Gesture dataset with a systolicSNN having 30% and 60% faulty PEs, respectively. However, retraining the same model with threshold voltages of 0.7 and 0.5 leads to almost 60% and 55% accuracy loss when 30% and 60% of the PEs are faulty, respectively. Thus, selecting an appropriate threshold voltage for retraining the systolicSNN to high classification accuracy is imperative. Nevertheless, finding a suitable threshold voltage requires extensive retraining simulations, which may incur a significant amount of time. Motivated by this, we propose a novel fault-aware threshold voltage optimization technique in retraining for fault mitigation.

Figure 1: A systolicSNN with faulty processing elements (PEs) in red color and non-faulty PEs in white color

## IV Proposed Fault-Aware Threshold Voltage Optimization (FalVolt)

Our proposed FalVolt mitigation method improves the reliability of systolicSNNs by first setting the input pre-trained weights that map to the faulty PEs to zero. The fault locations are determined through post-fabrication tests on a systolicSNN chip. This initial step is similar to bypassing a PE using a multiplexer at the hardware level in systolicSNNs, as shown in Fig. 3(b). With the bypass path enabled, the contribution of the faulty PEs to the column sum is skipped. However, bypassing a single faulty PE may result in the pruning of multiple pre-trained weights due to the reuse of systolicSNNs in data processing. Therefore, FalVolt next retrains the unpruned weights while optimizing the threshold voltage for each layer. The threshold voltage optimization saves retraining time by eliminating the need for an exhaustive search for an appropriate threshold voltage. It makes the SNN less sensitive to initial values and enhances and speeds up the learning. The optimized threshold voltage is shared by all neurons in a layer to reduce the number of retrainable parameters and the retraining time. FalVolt optimizes the weights using recursive gradient computations during both initial training and retraining. The weights mapped to faulty PEs are set to zero at the end of every retraining epoch. However, the threshold voltage is optimized for each layer during retraining only, as discussed below.

Let us consider \(\mathbf{r}\) as the ratio between the membrane potential \(v\) and the threshold voltage \(\overline{\mathbb{V}}\). A neuron fires an output spike \(\mathbf{o}\) when \(v\) exceeds \(\overline{\mathbb{V}}\). Mathematically, this can be written as: \[\mathbf{z}_{l}^{t}=\mathbf{r}_{l}^{t}-1\quad\text{and}\quad\mathbf{o}_{l}^{t}=\begin{cases}1,&\text{if }\mathbf{z}_{l}^{t}>0,\\ 0,&\text{otherwise,}\end{cases} \tag{1}\] where the notation \(\mathbf{x}_{l}^{t}\) represents a parameter of the SNN in the \(l\)-th layer of the network at time step \(t\). The discontinuous gradient \(\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\) is approximated with a surrogate function during error backpropagation in retraining, as in initial training, and is expressed mathematically as: \[\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}=\gamma\max(0,1-|\mathbf{z}_{l}^{t}|) \tag{2}\] where \(\gamma\) is a constant denoting the maximum value of the surrogate function. During backpropagation, the threshold voltage \(\overline{\mathbb{V}}\) is updated for layer \(l\) as follows: \[\overline{\mathbb{V}}_{l}\leftarrow\overline{\mathbb{V}}_{l}-\eta\,\Delta\overline{\mathbb{V}}_{l} \tag{3}\] where \(\eta\) represents the learning rate.
Here, the gradient of the threshold voltage, \(\Delta\overline{\mathbb{V}}_{l}\), for layer \(l\) can be computed as: \[\Delta\overline{\mathbb{V}}_{l}=\frac{\partial L}{\partial\overline{\mathbb{V}}_{l}}=\sum_{t=0}^{T-1}\frac{\partial L}{\partial\mathbf{o}_{l}^{t}}\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\frac{\partial\mathbf{z}_{l}^{t}}{\partial\overline{\mathbb{V}}_{l}}=\sum_{t=0}^{T-1}\frac{\partial L}{\partial\mathbf{o}_{l}^{t}}\frac{\partial\mathbf{o}_{l}^{t}}{\partial\mathbf{z}_{l}^{t}}\left(\frac{-\overline{\mathbb{V}}_{l}\mathbf{o}_{l}^{t-1}-v_{l}^{t}}{\overline{\mathbb{V}}_{l}^{2}}\right) \tag{4}\] where \(L\) represents the cross-entropy loss function defined by the mean square error.

Algorithm 1 delineates the proposed FalVolt mitigation method. Lines 1-2 prune the pre-trained weights mapped to the faulty PEs in systolicSNNs. Line 3 initializes the Heaviside step function \(\theta\) and \(\overline{\mathbb{V}}\). Lines 4-5 compute the unpruned weights and \(\overline{\mathbb{V}}\) over multiple epochs of backpropagation. The unpruned weights and \(\overline{\mathbb{V}}\) are optimized at each time step for every layer in the PLIF-SNN, while the gradient of the loss function (\(\Delta L\)) is calculated in Lines 10-11. Line 13 sets the weights mapped to faulty PEs to zero at the end of each training epoch. It is interesting to note that setting the retraining epochs to zero makes FalVolt equivalent to simple fault-aware pruning (FaP). FalVolt returns the new optimized values of the unpruned weights (i.e., the retrained model), \(\overline{\mathbb{V}}\) for each layer, and the improved classification accuracy. Note, the proposed mitigation needs to be performed only once for a fabricated chip, based on its unique fault map, and thus helps in avoiding the re-fabrication cost of the chips.
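As a concrete illustration of Eqs. 1-4, the sketch below performs one per-layer threshold-voltage update in PyTorch; it is a simplified rendering of the update rule, not the authors' Algorithm 1, and the surrogate peak \(\gamma\) is an assumed constant.

```python
import torch

GAMMA = 1.0  # assumed peak value of the surrogate gradient in Eq. 2

def surrogate_grad(z):
    """Triangular surrogate for the spike derivative d(o)/dz (Eq. 2)."""
    return GAMMA * torch.clamp(1.0 - z.abs(), min=0.0)

def threshold_update(v_th, v_mem, o_prev, dL_do, lr=1e-3):
    """One per-layer threshold-voltage step following Eqs. 1, 3-4.
    v_mem, o_prev, dL_do: tensors stacked over time steps (T, ...)."""
    z = v_mem / v_th - 1.0                              # Eq. 1
    dz_dvth = (-v_th * o_prev - v_mem) / v_th ** 2      # inner term of Eq. 4
    grad = (dL_do * surrogate_grad(z) * dz_dvth).sum()  # sum over t (Eq. 4)
    return v_th - lr * grad                             # Eq. 3
```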
Figure 2: Stuck-at fault mitigation using different threshold voltages (\(V_{th}\)); 30% and 60% of the total PEs are faulty in a 256x256 systolic-array SNN accelerator (systolicSNN)

Figure 3: Processing element with actual and bypassed circuitry

## V Results and Discussions

This section discusses the results obtained from the fault vulnerability and mitigation analysis of systolicSNNs.

### _Datasets and network architectures_

We adopted the static MNIST [24] and two neuromorphic, N-MNIST [25] and DVS128 Gesture [26], datasets in this paper. Note that the SNN research community widely uses these datasets for evaluating the performance of SNNs [16, 29]. As a classifier for the N-MNIST and MNIST datasets, we use a PLIF-based SNN with a twice-repeated set of convolutional, batch-normalization, spiking-neuron, and pooling layers, followed by a twice-repeated set of dropout, fully connected, and spiking-neuron layers. The former set is repeated five times with the same architecture configuration in the classifier for the DVS128 Gesture dataset. Furthermore, an additional set of a convolutional layer and a spiking-neuron layer, inspired by [30], is used for spike-encoding the input images in these architectures. We use the initialization parameters from [27] to achieve the baseline accuracy, i.e., 99% for the MNIST [24] and N-MNIST [25] datasets, and 97% for the DVS128 Gesture [26] dataset, prior to fault injection in the inference phase. For systolicSNN inference, we developed a 256x256 grid of PEs in VHDL with bypass circuitry that incurs only 8% area overhead.

### _Simulation Methodology_

Fig. 4 illustrates the tool flow used for fault vulnerability and mitigation analysis in this paper. First, the SNN models are trained to their baseline accuracies. Next, stuck-at faults are injected into the accumulator outputs of PEs using different fault maps. Then, fault pruning is applied by setting the weights mapped to the faulty PEs to zero. Finally, fault mitigation through retraining with layer-wise threshold voltage optimization is employed using Algorithm 1. All simulations are conducted using an NVIDIA GeForce RTX 2080 Ti GPU on an Intel Core i9-10900KF operating at 3.06 GHz with 32 GB RAM.

### _Fault vulnerability analysis_

To investigate the stuck-at fault vulnerability of systolicSNNs, we extensively analyze the impact of such faults by varying the location of the faulty bits, the number of faulty PEs, and the size of the systolic array as follows.

_Varying location of fault bits:_ Before running extensive simulations for fault mitigation, we first identify the bits most vulnerable to stuck-at faults in the PEs of a 256x256 systolicSNN. For this purpose, we generate fault maps such that stuck-at-0 and stuck-at-1 faults are injected at different output bit positions of the accumulator inside the PEs. Note, fault injection with fault maps is a common practice for analyzing fault vulnerabilities in systolic arrays [31]. Fault maps can be generated using post-fabrication testing in a real-world scenario. It is worth mentioning that we inject faults in the output of the accumulator, which is the main arithmetic component of the PEs. As shown in Fig. 5(a), our analysis reveals that stuck-at faults in the most significant bits (MSBs) affect the classification accuracy more than stuck-at faults in the least significant bits (LSBs). The reason is that the systolic array is reused for different layers; therefore, a single unmasked fault in a PE of a particular layer affects all the connected nodes in the subsequent layers, decreasing the overall classification accuracy. We also observe that a stuck-at-1 fault in an MSB causes almost 80% accuracy loss, which is higher than the same fault in an LSB, when classifying the MNIST, N-MNIST, and DVS128 Gesture datasets. It is worth noticing that stuck-at-1 faults are more perturbing than stuck-at-0 faults in systolicSNNs, similar to systolic-array ANN accelerators [20].

_Varying number of faulty PEs:_ Next, we perform fault simulations by considering a random distribution of stuck-at faults across a 256x256 systolicSNN. We vary the fault rates by varying the number of faulty PEs in each experiment and running each experiment 8 times. The number of faulty PEs stays the same for all iterations in an experiment. Furthermore, each iteration uses a distinct fault map. In the following section, the faults are injected in the higher-order bits (i.e., MSBs) of the accumulator outputs in PEs to perform the worst-case analysis. Moreover, the average classification accuracies over all iterations in an experiment are recorded. As shown in Fig. 5(b), our results demonstrate that _even 8 faulty PEs (i.e., 0.012% of total PEs)_ can lead to an accuracy drop from 99% to 50%, 99% to 47%, and 97% to 44% in the MNIST, N-MNIST, and DVS128 Gesture classification, respectively. Hence, the classification of both static and neuromorphic datasets is prone to stuck-at faults.
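For intuition on why MSB faults dominate, here is a toy software model of a stuck-at fault on one accumulator output bit; the 32-bit two's-complement interpretation mirrors the PE datapath described in Section II, but the code is illustrative rather than the paper's fault-injection tooling.

```python
import numpy as np

def inject_stuck_at(acc, bit, stuck_at_one=True, n_bits=32):
    """Force one output bit of a PE accumulator value (viewed as an n-bit
    two's-complement integer) to 0 or 1, as a stuck-at fault would."""
    raw = int(np.int64(acc)) & ((1 << n_bits) - 1)        # unsigned view
    raw = raw | (1 << bit) if stuck_at_one else raw & ~(1 << bit)
    if raw >= 1 << (n_bits - 1):                          # back to signed
        raw -= 1 << n_bits
    return raw

print(inject_stuck_at(100, bit=30))   # MSB-side fault -> 1073741924 (huge error)
print(inject_stuck_at(100, bit=1))    # LSB fault      -> 102 (minor error)
```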
_Varying size of the systolic array:_ For a further extensive fault vulnerability study, we analyze the impact of stuck-at faults across different sizes of \(N\)x\(N\) systolic arrays, i.e., 4x4, 8x8, 16x16, 32x32 and 64x64. As shown in Fig. 5(c), our analysis reveals that stuck-at faults in a small-sized systolic array cause more accuracy loss than in a large-sized systolic array. For example, 4 faulty PEs in an 8x8 systolic array (having 64 PEs) lead to 89%, 92% and 93% accuracy loss in the MNIST, N-MNIST and DVS128 Gesture classification, respectively. However, SNN classification with a 256x256 systolic array, having the same fault configuration, results in only about 16%, 17%, and 33% accuracy loss. This is due to the fact that decreasing the size of the systolic array increases its reuse and hence the recurrence of the permanent faults in every execution cycle. Our analysis shows that DVS128 Gesture is more vulnerable to faults when compared to the MNIST and N-MNIST datasets, even though their baseline accuracies are comparable. As shown in Fig. 5(b), the classification accuracy of DVS128 Gesture remains comparatively lower than that of the other datasets in the presence of stuck-at faults. Also, the accuracy loss associated with the DVS128 Gesture dataset is comparatively higher than that of the other datasets in Fig. 5(c). However, a higher number of stuck-at faults can render the performance penalties unacceptable in all cases.

Fig. 4: Experimental setup and tool flow

### _Fault mitigation analysis_

In this section, we study the performance of FalVolt and compare it with state-of-the-art techniques common for ANNs. Specifically, we compare FalVolt with fault-aware pruning (FaP) and fault-aware pruning with retraining without threshold voltage optimization (FaPIT).

_Classification accuracy vs. fault rates:_ For the fault mitigation analysis, we inject stuck-at faults using different fault maps in 10%, 30%, and 60% of the PEs of a 256x256 systolicSNN and run parallel retraining simulations. We employ the proposed FalVolt mitigation method using Algorithm 1 for 10%, 30%, and 60% faulty PEs in a 256x256 systolicSNN. Our analysis shows that optimizing the threshold voltage for each hidden convolutional and fully connected layer helps in achieving the baseline accuracy. Fig. 6 shows the optimized threshold voltage returned by the FalVolt mitigation method for each hidden layer to achieve the baseline accuracy on the MNIST, N-MNIST, and DVS128 Gesture datasets. For all these datasets, the optimized threshold voltage for the initial spiking-convolutional and spiking-fully-connected layers is higher than for the other layers, ensuring that redundant spikes do not travel to the output layer. Fig. 7 compares the FalVolt mitigation method with FaP and FaPIT. We observe that an increased fault rate causes a rapid accuracy loss with FaP. FaPIT and FalVolt help in improving the classification accuracy. However, only FalVolt achieves the baseline classification accuracy in the MNIST, N-MNIST, and DVS128 Gesture classification with even 60% of the PEs faulty. This validates the applicability of FalVolt to both static and neuromorphic datasets.

_Classification accuracy vs. number of epochs:_ FalVolt increases the classification accuracy at the cost of additional retraining epochs compared to FaP; however, these are negligible compared to the lifetime of systolicSNNs. As shown in Fig. 8, FalVolt is 2x faster than FaPIT. For example, the classification accuracy of MNIST is as high as 80% with FaPIT using 20 epochs and converges to the baseline accuracy around 25 epochs. However, the same dataset achieves the baseline accuracy with FalVolt in 10 epochs, as shown in Fig. 8(a).
Likewise, FalVolt achieves the baseline accuracy for N-MNIST classification in 2x fewer epochs compared to FaPIT, as shown in Fig. 8(b). Moreover, the classification accuracy of DVS128 Gesture is as high as 83% with FaPIT using 40 epochs and converges to the baseline accuracy around 50 epochs, as shown in Fig. 8(c). However, the same dataset achieves 97% accuracy with FalVolt in around 25 epochs. Since a small change in the baseline accuracy may cause catastrophic issues in safety-critical applications, the numbers of epochs for initial training, FaPIT, and FalVolt are kept high to achieve a classification accuracy close to the baseline. Note, training large-sized SNNs itself takes a long time (i.e., a high number of epochs).

## VI Conclusion

This paper extensively analyzes the stuck-at fault vulnerabilities of systolicSNNs and proposes a novel fault mitigation technique, _fault-aware retraining through threshold voltage optimization (FalVolt)_. FalVolt uses an optimized threshold voltage and time steps different from initial training to achieve a classification accuracy close to the baseline. To demonstrate the effectiveness of FalVolt, we classify the MNIST, N-MNIST, and DVS128 Gesture datasets on a 256x256 systolicSNN while injecting faults at different rates. Our results show that even 0.012% faulty PEs in a systolicSNN lead to significant accuracy loss. However, FalVolt improves the performance of systolicSNNs by enabling them to operate at fault rates of up to 60%, with a negligible drop in the classification accuracy (as low as 0.1%). Furthermore, our results show that FalVolt is 2x faster when compared to state-of-the-art techniques, such as fault-aware pruning and retraining without threshold voltage optimization.

Figure 5: Stuck-at fault vulnerability analysis of a 256x256 systolic-array based SNN accelerator (systolicSNN)

Figure 6: Optimized threshold voltage for hidden convolutional and fully connected layers using FalVolt, when 0%, 10%, 30% and 60% of the total PEs are faulty in a 256x256 systolic-array SNN accelerator (systolicSNN)
2302.11025
Asteroseismology of $\delta$ Scuti stars: emulating model grids using a neural network
Young $\delta$ Scuti stars have proven to be valuable asteroseismic targets but obtaining robust uncertainties on their inferred properties is challenging. We aim to quantify the random uncertainties in grid-based modelling of $\delta$ Sct stars. We apply Bayesian inference using nested sampling and a neural network emulator of stellar models, testing our method on both simulated and real stars. Based on results from simulated stars we demonstrate that our method can recover plausible posterior probability density estimates while accounting for both the random uncertainty from the observations and neural network emulation. We find that the posterior distributions of the fundamental parameters can be significantly non-Gaussian, multi-modal, and have strong covariance. We conclude that our method reliably estimates the random uncertainty in the modelling of $\delta$ Sct stars and paves the way for the investigation and quantification of the systematic uncertainty.
Owen J. Scutt, Simon J. Murphy, Martin B. Nielsen, Guy R. Davies, Timothy R. Bedding, Alexander J. Lyttle
2023-02-21T21:47:34Z
http://arxiv.org/abs/2302.11025v2
# Asteroseismology of \(\delta\) Scuti stars: emulating model grids using a neural network ###### Abstract Young \(\delta\) Scuti stars have proven to be valuable asteroseismic targets but obtaining robust uncertainties on their inferred properties is challenging. We aim to quantify the random uncertainties in grid-based modelling of \(\delta\) Sct stars. We apply Bayesian inference using nested sampling and a neural network emulator of stellar models, testing our method on both simulated and real stars. Based on results from simulated stars we demonstrate that our method can recover plausible posterior probability density estimates while accounting for both the random uncertainty from the observations and neural network emulation. We find that the posterior distributions of the fundamental parameters can be significantly non-Gaussian, multi-modal, and have strong covariance. We conclude that our method reliably estimates the random uncertainty in the modelling of \(\delta\) Sct stars and paves the way for the investigation and quantification of the systematic uncertainty. keywords: asteroseismology - stars: variables: Scuti - stars: fundamental parameters - methods: data analysis - methods: statistical ## 1 Introduction Stellar ages for individual stars are notoriously difficult to measure (Soderblom, 2010). One method is to model a cluster with isochrones, which is particularly sensitive to high-mass stars at the main-sequence (MS) turn-off (Lipatov et al., 2022). Other techniques, such as the lithium depletion boundary (e.g. Galindo-Guil et al., 2022) or kinematics (Miret-Roig et al., 2022; Zerjal et al., 2023), are able to use low-mass stars, which are much more abundant. However, methods that utilize intermediate-mass stars for measuring stellar ages have been lacking. Asteroseismology - the study of stellar oscillations - is highly sensitive to age and has long held promise as an independent method for age determination (e.g., Aerts, 2015). Like other techniques, asteroseismology is model-dependent, but the physics of those models is generally different from the high- and low-mass stars (Soderblom, 2010), hence the techniques are highly complementary (Kerr et al., 2022, 2022). Until recently, however, asteroseismology of intermediate-mass stars (the so-called \(\delta\) Scuti variables) has been hampered by the difficulties in identifying which modes are excited. The discovery of regular patterns in the pulsation mode frequencies of some \(\delta\) Sct stars (Bedding et al., 2020) has opened up a pathway to determine their masses, ages, and metallicities, without the requirement that the star resides in a cluster or association. In recent years, oscillations in large numbers of \(\delta\) Sct stars have been measured using white-light photometry from space telescopes such as _CoRoT_(e.g., Paparo et al., 2013; Michel et al., 2017; Barcelo Forteza et al., 2018), _Kepler_(e.g., Uytterhoeven et al., 2011; Balona et al., 2015; Garcia Hernandez et al., 2017; Bowman and Kurtz, 2018; Guzik, 2021) and _TESS_(e.g., Antoci et al., 2019; Hasanzadeh et al., 2021; Barac et al., 2022; Chen et al., 2022). Observed oscillation frequencies can be compared against grids of model frequencies to find a best-fitting set of parameters (Murphy et al., 2021, 2022). It is somewhat more challenging to understand the resulting uncertainties, which are not uniquely determined by the spacing of the model grid (Pedersen, 2020), and instead depend more strongly on the underlying physics (Steindl et al., 2021). 
Part of the challenge is that models can be computationally expensive and calculating new evolutionary tracks on-the-fly for Monte Carlo sampling is prohibitive. In order to treat the uncertainties more robustly, we aim to convert a discrete grid of stellar models into a continuous function. We use a neural network to emulate a grid of stellar models that has been pre-computed over the range of expected stellar parameters. We combine the trained neural network with a Bayesian sampler to formally treat random uncertainties in the observables. This yields estimates for the posterior probability density of the fundamental properties which quantifies their uncertainties. It also allows us to infer viable frequencies for modes that were not detected, but which might exist in the data at low signal-to-noise. In the following section we describe the grid of stellar models on which the neural network is trained, and in Sec. 3 we discuss the details of the network architecture and training method. In Sec. 4 we present the method used to perform the Bayesian inference, and show results for a selection of simulated and real sets of observations (Sec. 5). ## 2 The Stellar Model Grid We used the model grid of Murphy et al. (in prep), consisting of evolutionary tracks computed with MESA (r15140; Paxton et al., 2011, 2013, 2015, 2018, 2019) and pulsation models calculated with GYRE (v6.0.1; Townsend & Teitler, 2013). Provisional versions of this grid have already been used to model the pulsations of \(\delta\) Sct stars (Murphy et al., 2022; Kerr et al., 2022a,b; Currie et al., 2022), and the physics of the models are described in Murphy et al. (2022). A well-sampled grid was needed to train the neural network emulator. Here, evolutionary tracks were spaced by \(0.02\,\mathrm{M}_{\odot}\) in mass \(M\) and \(0.001\) in initial metallicity \(Z_{\mathrm{in}}\). For \(Z_{\mathrm{in}}>0.010\), the spacing was increased to \(0.002\). The grid is shown in mass-metallicity space in Fig. 1. A common problem in MESA is that pre-MS models sometimes fail to converge and the evolution is terminated (see, e.g., Steindl et al., 2021). In such cases, we attempted to re-calculate the track with a slightly increased mass (\(M\)+= 0.001) up to five times before abandoning that track. Abandoned tracks appear as gaps in the grid in Fig. 1. It is also important to ensure the tracks are sampled well in age. Computational errors are minimised by keeping the time interval small throughout the evolution, even if not all time steps are saved as outputs. The internal sampling is described in Murphy et al. (in prep). For outputs, we saved evolutionary and pulsation models every \(0.05\,\mathrm{Myr}\) from \(2\,\mathrm{Myr}\) until \(10.5\,\mathrm{Myr}\), in order to adequately sample the rapid evolutionary changes that occur on the pre-MS. After this the evolution is somewhat slower, and sampling of \(3\,\mathrm{Myr}\) was deemed adequate up to ages of \(40\,\mathrm{Myr}\). Beyond this, the tracks were instead sampled according to changes in position on the HR diagram (limits of \(\Delta\log T_{\mathrm{eff}}=0.0006\) and \(\Delta\log L=0.002\)), with an upper limit of \(100\,\mathrm{Myr}\) between samples. Where large gaps occurred in the grid, or when the specific \(M\)-\(Z_{\mathrm{in}}\) combination demanded it, we manually recalculated tracks with finer sampling. This explains the variations in the number of samples per track in Fig. 1. 
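The retry-on-failure strategy for non-converging pre-MS tracks amounts to a small loop. The following is a minimal sketch of that logic; `compute_track` is a hypothetical wrapper around a MESA run that returns `None` on convergence failure, and the mass increment of 0.001 is assumed to be in solar masses.

```python
def compute_track_with_retries(mass, z_in, dm=0.001, max_retries=5):
    """Re-calculate a failed track with a slightly increased mass,
    up to five times, before abandoning it (a gap in Fig. 1)."""
    for attempt in range(max_retries + 1):
        track = compute_track(mass + attempt * dm, z_in)  # hypothetical MESA wrapper
        if track is not None:
            return track
    return None  # abandoned track
```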
For each pulsation model, we computed the frequencies of radial modes (spherical degree \(\ell=0\)) having radial orders \(n\) from \(1\) to \(11\), and dipole (\(\ell=1\)) modes having \(n\sim 1\)-\(10\). This encompasses the range of radial orders observed for real stars (e.g. Bedding et al., 2022). We calculated the mean frequency separation between radial orders (\(\Delta\nu\)) using the radial modes having \(n=5\)-\(9\), by fitting a straight line to the mode frequencies as a function of \(n\)(see White et al., 2011), using \[\nu=\Delta\nu(n+\ell/2+\epsilon). \tag{1}\] The variable \(\epsilon\) is the intercept of that line with the y-axis, and describes the distance of the radial mode ridge from the y-axis in an echelle diagram. In addition to the individual mode frequencies, we stored the values of \(\Delta\nu\) and \(\epsilon\) for each model in the grid, since these asteroseismic quantities relate to astrophysical quantities (Murphy et al. in prep.). To reduce the effect of the strong covariance between stellar age \(\tau\) and mass \(M\), and ease the training of the neural network, we used the assumption that the MS lifetime is approximately proportional to \(M^{-3.2}\) and defined the scaled age (e.g., Davies & Miglio, 2016) \[\mathcal{K}=10^{-4}\,\tau\,(M/\mathrm{M}_{\odot})^{3.2}. \tag{2}\] This scaled age serves as an estimate of the fractional MS age of our models. ## 3 Constructing the Neural Network To overcome the discretely sampled nature of the model grid, we used a neural network consisting of a series of fully connected dense layers in place of standard interpolation for continuous stellar model emulation. The network was trained on the model grid, learning to predict observable parameters given stellar model input parameters. This way, the network learned the map from observables to model parameters and could be used for likelihood estimation during inference. To this end, we used the fundamental parameters \(M\), \(Z_{\mathrm{in}}\) and scaled age (\(\mathcal{K}\)) as inputs for parameter augmentation and network training. Outputs consist of the classical observables (\(L\) and \(T_{\mathrm{eff}}\)); asteroseismic quantities (\(\Delta\nu\) and \(\epsilon\)); and \(11\) radial and \(10\) dipole mode frequencies. We refer to these \(25\) outputs collectively as the 'observable parameters'. Once the input and output parameters were defined, we carried out dataset-wide parameter augmentation to improve the training of the network. We converted all parameters (excluding \(\epsilon\)) to the decimal logarithm and applied a Z-score standardisation to all parameters (including \(\epsilon\)). Both of these operations restricted all parameters to similar ranges, to avoid the neural network assigning erroneously high importance to parameters spanning several orders of magnitude during training. We found the combination of the two operations to be optimal for this investigation. To further simplify the training process, we performed principal component analysis on the observable parameters in the model grid, as follows. For all models, we calculated the covariance matrix of all \(25\) observable parameters. The eigenvectors of the resulting covariance matrix, or 'principal components', were ranked in order of descending eigenvalue, returning a list of principal components explaining the most to the least variance in the observable parameters. 
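For reference, the straight-line fit defining \(\Delta\nu\) and \(\epsilon\) in Eq. (1), and the scaled age of Eq. (2) introduced above, amount to a few lines of numpy. This is a sketch only; the input dictionary mapping radial order to frequency is an assumed interface, not the paper's code.

```python
import numpy as np

def delta_nu_epsilon(radial_freqs):
    """Fit nu = Delta_nu * (n + eps) to the l = 0 modes with n = 5..9
    (Eq. 1 with l = 0); radial_freqs maps radial order n to frequency."""
    n = np.arange(5, 10)
    nu = np.array([radial_freqs[k] for k in n])
    slope, intercept = np.polyfit(n, nu, 1)   # slope = Delta_nu
    return slope, intercept / slope           # (Delta_nu, epsilon)

def scaled_age(tau, mass):
    """Scaled age of Eq. (2): K = 1e-4 * tau * (M / M_sun)**3.2."""
    return 1e-4 * tau * mass**3.2
```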
We determined how many principal components to include using the explained variance ratio, which describes the percentage of the variance of the observable space present in just the chosen principal components. We found that \(9\) principal components explained all but \(10^{-4}\) per cent of the total variance. This sufficiently explained the covariance of the \(25\) parameters in the full observable space. The use of principal components presents the neural network with a simpler map to learn--replacing the fundamental parameters by the reduced dimensions of the 'latent parameters'--and also removes covariance information from the observables, which is redundant for the neural network. We then added a custom non-trainable layer to the Figure 1: The model grid used in this work, where each symbol represents an evolutionary track with a particular metallicity (\(Z_{\mathrm{in}}\)) and mass. Colour-coding indicates the number of models along each track for which pulsation frequencies were calculated (see Sec. 2). neural network, which projects the latent parameters back into the full observable parameter space before the network outputs predictions. Finally, we split the model grid into a training and a testing set for the neural network. The training set was randomly selected to comprise 80 per cent of the model grid, to be seen by the network during training. The test set, composed of the remaining 20 per cent of the grid, was unseen by the network during the training process and was used solely for evaluation of network prediction performance. This served as a check that the network is capable of model grid interpolation -- the training set became a sparser representation of the original model grid, with the test set providing models guaranteed to hold combinations of parameters previously unknown to the network. In addition to the data augmentation prior to training, the hyperparameters of a neural network can be tuned to promote faster and more stable learning. To quantify network performance for comparison between different hyperparameter permutations, we compared their validation loss profiles over multiple network training sessions. We adopted a 'grid search' method for testing potential combinations of network hyperparameters. This involved creating a grid of potential values for the number of fully connected dense layers (ranging from 3 to 8 in steps of 1), activation functions, optimizers, learning rates, loss functions, and batch sizes. We populated a grid with these hyperparameters, and then tested the resulting network at each position in the hyperparameter grid for successful validation loss minimisation. We found the optimal network consisted of 6 fully-connected dense layers of 64 neurons, each using an exponential linear unit activation function (Clevert et al., 2015), followed by the custom layer for projection from latent to observable parameters, and a final dense output layer with linear activation function. We used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of \(10^{-4}\), and the mean-squared-error loss function. A batch size of \(6\times 10^{4}\) models provided a good compromise between speed and training stability. We used a validation split of 25 per cent of the training set -- where the test set is used to evaluate neural network success after training, a 'validation set' is used for evaluation of neural network success during training. 
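The optimal architecture found by the grid search could be expressed in TensorFlow roughly as follows. This is a sketch under stated assumptions: the custom latent-to-observable projection layer is represented only by a comment, and the input dimension of 3 corresponds to \(M\), \(\log Z_{\mathrm{in}}\) and \(\log\mathcal{K}\).

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(3,))]   # inputs: M, log Z_in, log K
    + [tf.keras.layers.Dense(64, activation="elu") for _ in range(6)]
    + [tf.keras.layers.Dense(9)]   # 9 latent (principal-component) outputs
    # ... followed by the custom non-trainable layer projecting the latent
    # parameters back to the 25 observables, and a final linear dense layer.
)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
# model.fit(x_train, y_train, batch_size=60_000, validation_split=0.25, ...)
```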
Once primary training was complete with the learning rate above, we saved and recompiled the network with a slower but less volatile learning rate of \(7\times 10^{-5}\), and restarted training until no validation loss reduction was observed for \(10^{4}\) training epochs. The network and custom latent-to-observable projection layer were constructed using the TensorFlow sequential API (Abadi et al., 2015). Once the optimal network from the grid search was trained, we evaluated the network performance across the full observable parameter space. Using the test set previously removed from the model grid, we plotted distributions of the decimal logarithm prediction residuals for each parameter. This allowed us to visualize any bias and uncertainty inherent in the neural network predictions. We used the median absolute deviation of these prediction residual distributions, shown in Fig 2, to quantify network prediction uncertainty for observable parameters. We found an uncertainty in network predictions of \(8\times 10^{-4}\) dex and \(2\times 10^{-4}\) dex for log \(L\) and \(\log T_{\text{eff}}\), respectively, and a mean uncertainty of \(\sim 3\times 10^{-4}\) dex for the mode frequencies. ## 4 Inferring the fundamental stellar parameters To perform the Bayesian inference on the input model parameters, \(\theta=(M,\log Z_{\text{in}},\log\mathcal{K})\), for a given observed set of mode frequencies, we sampled the posterior distribution \[P(\theta|D)=\frac{\mathcal{P}(\theta)\mathcal{L}(D|\theta)}{\mathcal{E}(D)}. \tag{3}\] Here, \(\mathcal{P}(\theta)\) is the prior on the input model parameters, and \(\mathcal{L}(D|\theta)\) is the likelihood of observing a set of parameters (\(D\)) given the model parameters. The evidence, \(\mathcal{E}(D)\), is calculated at each step during the sampling. In addition to the input model parameters, we also included a variable offset term, \(\Delta n\), as input to account for the possible ambiguity in assigning the radial orders of the observed modes. This ambiguity arises because the radial orders of a set of modes cannot always be determined from the observed mode frequencies alone, and are typically decided by comparison to stellar models. Including \(\Delta n\) in the sampling allows us to marginalize over this uncertainty when estimating the posterior distribution of the input model parameters. We expect this uncertainty to only lead to an error of \(\pm 1\) radial order, and so we chose the prior on \(\Delta n\) to be a set of \(\delta\)-functions at \(\Delta n=-1,0\) and \(1\). Table 1 lists a summary of the prior functions. For the priors on \(M\), log \(Z_{\text{in}}\) and log \(\mathcal{K}\), we chose to use \(\beta\) distributions, since they can be bounded to match the limits of the stellar model grid. In addition, the shape parameters of the \(\beta\) distributions may be chosen such that the priors reflect our expectation of the distribution of real observations of \(M\), \(Z_{\text{in}}\) and \(\mathcal{K}\). Our choice of range and shape of the prior on \(\mathcal{K}\) is motivated by the age range and distribution we expect to target, and also be able to observe. Mode identification for \(\delta\) Sct stars is currently possible up to approximately one third of the MS age, after which the coupling between the buoyancy dominated and acoustic modes spoils the regular mode frequency patterns. Furthermore, at older ages the physics of mixing and overshooting become more important, and those were not treated as variables in the model grid. 
Hence, the prior on age extends to approximately one third of the expected MS lifetime. The lower limit on the prior on the scaled age was chosen because stars in our mass range of interest (see below) do not evolve to cross the \(\delta\) Sct instability strip until ages \(\geq 2\) Myr. Due to the motion of stars through the \(\delta\) Sct instability strip, the age prior is biased toward lower ages, with a fall-off in the age prior distribution toward older stars. The prior on \(Z_{\text{in}}\) ranges from approximately 0.07 to 1.5 times the solar metal mass fraction of 1.42 per cent used in the models (Asplund et al., 2009), which covers the metallicity distribution of stars forming within approximately 1 kpc of the Sun at the current age of the Galaxy (Hayden et al., 2020). The existence of metal-poor \(\delta\) Sct stars in modern star-forming regions (e.g. HD 139614 in Upper Centaurus Lupus; Murphy et al., 2021) suggests that slightly sub-solar metallicities are more common than slightly super-solar metallicities in young \(\delta\) Sct stars. We therefore skewed the prior probability density toward sub-solar values. Finally, the mass range was chosen to ensure that models exist both within and on either side of the instability strip (Dupret et al., 2004; Murphy et al., 2019). Our slight skew towards lower masses accounts for the similar skew present in the stellar initial mass function (Krumholz, 2014). Figure 3 shows samples drawn from these prior density distributions, both in terms of the sampled variables and those transformed to \(M\), \(Z_{\text{in}}\) and age. These priors are applied to the inference performed for all targets (see below). Additional priors may be applied on a target-by-target basis if, for example, the mass can be constrained by other sources such as orbiting companions, or limits can be placed on the metallicity by spectroscopy. For each of the samples drawn from the prior distributions, the neural network produces the following outputs: a set of mode frequencies, the effective temperature, and the luminosity. Given a set of outputs we then evaluated the likelihood of the observations by \[\log\mathcal{L}\left(D|\theta\right)=\log\mathcal{L}(D_{\mathrm{S}}|\theta)+\log\mathcal{L}(D_{\mathrm{C}}|\theta). \tag{4}\] We separate the log-likelihood into the seismic variables, \(D_{\mathrm{S}}\), and the classical (non-seismic) variables, \(D_{\mathrm{C}}\). The contribution to the likelihood of the mode frequencies is given by \[\log\mathcal{L}(D_{\mathrm{S}}|\theta)=\sum_{i}\log\mathcal{N}\left(\nu_{i}^{\mathrm{obs}},\sqrt{\sigma_{\nu_{i}^{\mathrm{obs}}}^{2}+\sigma_{\nu_{i}^{\mathrm{NN}}}^{2}}\right), \tag{5}\] and that of the classical observables is given by \[\log\mathcal{L}(D_{\mathrm{C}}|\theta)=\log\mathcal{N}\left(\log L^{\mathrm{obs}},\sqrt{\sigma_{L^{\mathrm{obs}}}^{2}+\sigma_{L^{\mathrm{NN}}}^{2}}\right)+\log\mathcal{N}\left(T_{\mathrm{eff}}^{\mathrm{obs}},\sqrt{\sigma_{T^{\mathrm{obs}}}^{2}+\sigma_{T^{\mathrm{NN}}}^{2}}\right). \tag{6}\] The width of the probability densities used in the inference is given by two terms that specify the observational uncertainty (superscript 'obs'), and the noise due to the precision of the neural network's ability to emulate the model grid (superscript 'NN'). Based on the spread of the residuals presented in Fig. 2, this emulation uncertainty is approximately \(4\times 10^{-4}\) dex, which equates to a relative uncertainty of \(\approx 0.1\) per cent on the output parameters.
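In code, the likelihood of Eqs. (4)-(6), with the observational and emulation uncertainties combined in quadrature, might look like the sketch below; `emulator` and the `*_obs` / `*_nn` variables are assumed names, not from the paper.

```python
import numpy as np

def norm_logpdf(x, mu, sigma):
    """Sum of log N(x; mu, sigma) over scalar or array inputs."""
    return -0.5 * np.sum(((x - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

def log_likelihood(theta):
    nu_pred, logL_pred, logT_pred = emulator(theta)    # neural-network outputs
    s_nu = np.sqrt(sigma_nu_obs**2 + sigma_nu_nn**2)   # widths in quadrature
    s_L = np.sqrt(sigma_L_obs**2 + sigma_L_nn**2)
    s_T = np.sqrt(sigma_T_obs**2 + sigma_T_nn**2)
    return (norm_logpdf(nu_obs, nu_pred, s_nu)         # seismic term, Eq. (5)
            + norm_logpdf(logL_obs, logL_pred, s_L)    # classical terms, Eq. (6)
            + norm_logpdf(logT_obs, logT_pred, s_T))
```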
This additional uncertainty was added in quadrature to the uncertainty of the observed mode frequencies. In the following we will use simulated frequencies corresponding to those obtained from 2 sectors of data from the _TESS_ mission (Ricker et al., 2015). We therefore adopted an uncertainty on the mode frequencies of \(0.02\mathrm{d}^{-1}\), which is the frequency resolution of the resulting power spectra. The uncertainties on \(\log L\) and \(\log T_{\mathrm{eff}}\) depend on the target in question, but for the simulations shown below these were fixed to \(\sigma_{L}=0.05\) dex and \(\sigma_{T}=200\)K. The neural network residuals also showed a bias of \(\sim 10^{-5}\) dex, which equates to an offset of \(0.01\) per cent on each of the output parameters. This offset is small compared to the combination of the observed and neural network uncertainties, and so we did not consider it in the analysis. However, if either of these sources of uncertainty were decreased by, for example, improving the estimates of the observed mode frequencies, the importance of this bias would need to be re-evaluated. We performed the sampling using the nested sampling method from the Dynesty Python package (Skilling, 2004; Speagle, 2020). Nested sampling determines iso-likelihood contours in the input parameter space, which were iteratively redefined until samples were consistently drawn around the global likelihood maximum. In the Dynesty package, this process is terminated when the change in model log-evidence \(\Delta\log\mathcal{E}\) is less than a predefined value chosen according to the Dynesty documentation1. The method presented above is not restricted to using Dynesty, and other sampling methods \begin{table} \begin{tabular}{c c} \hline Parameter & Prior function \\ \hline \(M\left[M_{\odot}\right]\) & \(\beta_{\mathrm{S}}^{2}(1.3,2.3)\) \\ \(\log Z_{\mathrm{in}}\) & \(\beta_{\mathrm{S}}^{2}(-3.1,-1.6)\) \\ \(\log\mathcal{K}\) & \(\beta_{\mathrm{L,2}}^{2}(-3,-0.3)\) \\ \(\Delta n\) & \(\epsilon_{R}^{2}\left\{-1,0,1\right\}\) \\ \hline \end{tabular} \end{table} Table 1: Prior density functions used in Eq. 3. The priors on \(M\), \(\log Z_{\mathrm{in}}\) and \(\mathcal{K}\) are given by \(\beta_{\mathrm{b}}^{\alpha}\), where \(a\) and \(b\) are the shape parameters of the \(\beta\)-distributions, and the prior on \(\Delta n\) is a series of \(\delta\)-functions at integer values. In all cases the arguments to the distribution functions denote lower and upper limits. Figure 2: Residual distributions of predictions from the neural network. The residuals are that of the decimal logarithms of \(T_{\mathrm{eff}}\), \(L\) (blue), and mode frequencies (\(n\), \(\ell\)) (orange). The boxes show a central line at the median value of the distribution, with edges at the lower and upper quartiles. Whiskers extend to the 5th and 95th percentile range. The dashed line indicates complete agreement between the network predictions and model grid values. may be used, such as MultiNest(Buchner et al., 2014) or EMCEE (Foreman-Mackey et al., 2013). ## 5 Results ### Simulated stars In order to test our methodology and validate the accuracy of the neural network emulator, we have performed tests based on 25 simulated stars in a 'hare-and-hounds' exercise. To produce these simulated stars, we proceeded as follows. Values of stellar mass and initial metallicity were selected to lie in between values in the grid, but still within the defined parameter range of the grid. 
We calculated stellar models and pulsation frequencies using MESA and GYRE, using the same settings as for the grid. We selected ages from the newly calculated tracks, which then defined the truth values for mass, metallicity and age for our simulated stars and their associated 'true' observables. We only selected a subset of the calculated modes, to better reflect typical observations of \(\delta\) Sct stars. We selected modes at four consecutive radial orders for each degree, within the bounds of \(n=1\)-\(8\). To simulate noisy observations, we added noise to the observable parameters of the simulated stars. These random offsets were drawn from a normal distribution, with mean of zero and a standard deviation of: \(0.02\,\mathrm{d}^{-1}\) for the mode frequencies, \(200\,\mathrm{K}\) for the effective temperatures, and \(0.05\,\mathrm{dex}\) for the log-luminosity. #### 5.1.1 Exemplar simulated star Figure 4 shows the posterior probability estimates for a simulated star with one of the best results. The figure demonstrates our ability to quantify random uncertainty on our inferred properties from the posterior and shows that the true properties of this simulated star lie comfortably with the posterior distribution. In this case our method is performing as required but it is worth noting that, even for this exemplar, the posterior still contains significant correlation between the parameters. We see that the posterior distribution is not well described by a series of separable 1-D normal distributions. Instead, there are strong covariances between inferred parameters, which are to be expected from stellar evolution theory. However, the 1-D marginal posterior distributions show evidence of not being normally distributed and, in the case of the stellar age, even somewhat multi-modal. To examine the degree of accuracy and precision of the results we will study the summary statistics of the posterior distribution. This does not capture all the detail that is of value, but is nonetheless useful as a test of our methods. #### 5.1.2 Results for 25 simulated stars While examining a single simulated star is useful, it is hard to draw conclusions on the validity of our approach because we are looking at a single realisation of noise on the observables. We now consider all 25 simulated stars, including the exemplar above (simulated star index 5), to look at the statistics of our posterior probability distributions when compared to the truth values of the input properties. As part of our method, we fitted a parameter to account for our uncertainty in our assumption of the radial order label \(n\). In our tests on simulated stars, we recovered the correct radial order in all cases with no meaningful uncertainty on the posterior of the radial order labels. For each parameter of each star, we computed the difference between the truth value and the inferred value (the mean of the posterior samples for that parameter) divided by the uncertainty (the standard deviation of the posterior samples for that parameter). If our inference is perfect, and our posterior distributions are well behaved, then this metric should be drawn from a normal distribution with zero mean and unit variance. However, as observed above in our exemplar, multi modality, non-Gaussianity, and other pathological behaviour in the posterior can bias our metrics away from our assumed normal distribution. Figure 5 shows the metric for each simulated star and each input property of the star. 
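The noise model and the summary metric plotted in Fig. 5 are simple to reproduce. A sketch follows; the `*_true` arrays are assumed names for the simulated stars' true observables.

```python
import numpy as np

rng = np.random.default_rng(42)

# Perturb the true observables with the stated noise levels.
nu_sim = nu_true + rng.normal(0.0, 0.02, size=nu_true.shape)  # d^-1
teff_sim = teff_true + rng.normal(0.0, 200.0)                 # K
logl_sim = logl_true + rng.normal(0.0, 0.05)                  # dex

def z_metric(samples, truth):
    """(inferred - truth) / uncertainty; drawn from ~N(0, 1) if the
    inference and posteriors are well behaved, as discussed above."""
    return (np.mean(samples) - truth) / np.std(samples)
```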
It is clear that the majority of our simulated star results are consistent with the truth value given the uncertainty. And broadly, the numbers of metrics at the 1 and 2 sigma levels are consistent with expectations. There are however some outliers or results which we will discuss. The index for the most significant outlier in terms of metallicity is 14 and an interesting behaviour in the age distribution is observed in simulated star index 21. #### 5.1.3 Further tests of simulated star 14 Simulated star 14 appears as an outlier in metallicity by \(\sim 2.5\sigma\). To examine this behaviour we have produced 10 more realisations of this simulated star. That is, we have taken the same truth values as inputs, but redrawn the simulated noise on each observable parameter using the same noise distributions. Figure 6 shows the posterior samples for the original and subsequent runs for simulated star 14. Firstly, it is clear that the posterior for \(Z_{\mathrm{in}}\)is multi-modal and contains significant covariance. Secondly, there is a bias of the posterior distributions away from the truth value in both metallicity and age that cannot be explained simply as a result of the realisation noise on the observables. We have checked for the possible origins of this bias. We examined the prior probability distribution, but found it to be smooth and nearly flat over the region of the posterior. We have performed multiple realisations of the noise and still observed this bias and therefore also exclude the noise or likelihood as the source of the bias. A possible source of error is in the neural network emulation producing differences in the predicted mode frequencies. While these errors are typically small, of order \(3\,\times\,10^{-4}\) dex, the noise from the neural network is not random noise that would be expected to reduce with more realisations of the observables. Instead, the error is systematic and will produce a bias. The systematic error will always be present in emulation and this will lead to a bias, but it is the magnitude of this bias that is interesting. For this simulated star, the bias is similar to the reported uncertainty, which is about \(1.5\,\mathrm{Myr}\). However, this error can be reduced by extending the training time of the neural network, or increasing the grid search density around the optimal neural network architecture. #### 5.1.4 Further tests of simulated star 21 Simulated star 21 shows an interesting behaviour in the age posterior. Figure 7 shows the posteriors for the original simulated star 21 and for 10 more realisations, as we did for simulated star 14 above. No meaningful bias is observed in the posteriors for mass or metallicity, given the priors we apply. The true age of the simulated star was \(12.96\,\mathrm{Myr}\), which corresponds to the pre-MS evolution stage. The age posterior is clearly bimodal, with solutions around \(10\,\mathrm{Myr}\) and \(160\,\mathrm{Myr}\). This behaviour is observed in all the realisations, lending confidence that this is not a result of the noise being added. In fact, this bi-modality is consistent with our understanding of the evolution of these stars and illustrates the difficulty of distinguishing the phase of the MS evolution where the track crosses its pre-MS evolution in the HR diagram. ### Application to HD 99506 We applied our methods to HD 99506, which is one of the high-frequency \(\delta\) Sct stars discussed by Bedding et al. (2020). 
We used the following inputs: \(T_{\rm eff}=7970\pm 250\) K and \(L\)/L\({}_{\odot}=7.58\pm 0.37\) (taken from Table 1 of Bedding et al., 2020), and the mode frequencies that we have measured and listed in Table 2. We chose only the mode frequencies that were obvious, leaving out tentatively identified modes such as the \(n=4\) and \(n=9\) radial modes, and the \(n=1\) and \(n=8\) dipole modes. The identified modes span two radial orders more than any of the simulation and so, despite the gaps at some orders, they provide tighter constraints. The resulting posteriors on \(M\), \(Z_{\rm in}\) and age are unimodal and indicate a percent-level random uncertainty (Fig. 8). The inferred age (\(9.71\pm 0.31\) Myr) corresponds to the pre-MS phase, before the onset of pp-chain H-burning but after the temporary pre-MS CNO burning phase. The calculation of well-sampled posteriors for uncertainty estimates is a marked improvement on what is possible using discrete grid points and \(\chi^{2}\) minimisation (e.g. Kerr et al., 2022), where an arbitrary threshold in \(\chi^{2}\) needs to be adopted. It is especially useful that the posteriors are marginalized, given the aforementioned correlation in astrophysical parameters demonstrated with the simulated stars. The neural network is also able to generate posterior predictions for Figure 4: Samples drawn from the posterior distribution of simulated star 5. For clarity we transform the initial metallicity to % and \(\mathcal{K}\) to stellar age \(\tau\). The model input values used to generate the simulated star 5 data are shown in blue. Figure 3: Left: Samples drawn from the one-dimensional prior distributions of the sampled parameters. The diagonal frames show the marginalized distributions (black) and the functions used to draw the samples (orange). The prior used for \(\Delta n\) are \(\delta\)-functions at \(\Delta n=-1,0\), and \(1\). The off-diagonal frames show the two-dimensional distributions of the input parameters. Right: The samples from the left frames (black) transformed to the same units as the output from the stellar model grid (blue). each mode frequency, using the posterior samples as inputs. This can be useful for estimating the validity of uncertain mode identifications. In Fig. 9, we see that the leftmost (lower frequency) of two close peaks at the \(n=4\) radial mode is a good match and could perhaps be identified. The weak peak at \(n=9\) would also have fitted well. Inclusion of these would have resulted in tighter posteriors. On the other hand, none of the missing dipole mode frequencies, nor the \(n=1\) or 2 radial modes, would have been good additions. If we had supplied those modes as input, the posteriors would have broadened markedly. ## 6 Conclusions We have presented a method for performing Bayesian inference on fundamental stellar properties of \(\delta\) Sct stars using a neural network. This method emulates the stellar model and oscillation codes, MESA and GYRE, by learning from a grid of models that encompasses the physical properties of stars in or near the \(\delta\) Sct instability strip. We used a nested sampling method to estimate the posterior distribution of the fundamental stellar properties, given a set of mode frequencies and classical observables. The resulting posterior distribution reflects the random observational uncertainty as well as the uncertainty of the neural network. 
By providing samples from the posterior probability density, which might be multimodal, non-Gaussian, and strongly covariate, we formally quantified the statistical uncertainty in the fundamental stellar properties. This improves our ability to investigate the systematic uncertainty in the stellar models. We used a test set that was initially unseen by the training algorithm to evaluate the performance of the trained neural network. We found that the neural network is capable of reaching an average frequency precision of \(\approx 3\times 10^{-4}\) dex, with an offset of \(\approx 5\times 10^{-5}\) dex. These performance metrics may improve if the network were retrained with, for example, additional grid points or with the aim to reach a lower target loss. However, the flexibility of neural networks Figure 5: Difference between the inferred and model values of \(\mathbf{M}\), \(Z_{\rm in}\) and age relative to the uncertainty of the inference, for a set of 25 simulated stars. The uncertainty is taken as the standard deviation of the marginalized posterior distributions of each of the parameters (see Sec. 5.1.2 for details). Figure 6: Samples drawn from the posterior distribution of simulated star 14 are shown in red. Subsequent runs using the same truth values (M=1.696 M\({}_{\odot}\), \(Z_{\rm in}\)=0.0136, \(r\)=8.76 Myr) but different noise realisations are shown in black. The truth values are shown in blue (see Sec. 5.1.3 for details). Figure 7: Samples drawn from the posterior distribution of simulated star 21 are shown in red. Subsequent runs using the same truth values (M=1.777 M\({}_{\odot}\), \(Z_{\rm in}\)=0.0182, \(r\)=12.97 Myr) but different noise realisations are shown in black. The truth values are shown in blue (see Sec. 5.1.4 for details). allows for the extension of the model grid to include additional variables, such as initial helium abundance or convective overshoot, by increasing the number of neurons in the network architecture (see, e.g., Hendriks & Aerts, 2019; Lyttle et al., 2021). We applied our method to 25 simulated stars to quantify our accuracy and precision in the recovery of our input stellar properties. We showed the method to be capable of faithfully reproducing the true input parameters with the exception of simulated star 14. On investigation, we found a bias in the reported metallicity and age of this simulated star to be a result of the error in prediction by the neural network. The bias is of order 1.5 Myr in age for this star, but was confirmed to be systematic in nature. Further improvements in the accuracy of the neural network emulation would reduce the size of this effect. Finally, we have applied our method to observations of a real \(\delta\) Sct star, HD 99506. We found this star to be in the pre-MS stage of evolution and report a random uncertainty of only 3 per cent in age. Systematic uncertainty, such as that arising from missing or imperfect physics, has not been accounted for in this number but our methods pave the way for the quantification of the systematic uncertainty in future work. Similarly, the inclusion of additional physics such as rotation, and additional modes such as those of higher degree or different azimuthal order, would be trivial extensions to this framework in future. ## Acknowledgements SJM was supported by the Australian Research Council (ARC) through Future Fellowship FT210100485. MBN and GRD acknowledge support from the UK Space Agency. 
TRB acknowledges support from Australian Research Council through Laureate Fellowship FL220100117. OJS and AJL acknowledge the support of the Science and Technology Facilities Council. This paper has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (CartographY GA. 804752). This paper includes data collected by the _TESS_ mission. Funding for the _TESS_ mission is provided by the NASA's Science Mission Directorate. ## Data Availability The data will be supplied upon reasonable request. \begin{table} \begin{tabular}{c c c} \hline frequency & \(n\) & \(\ell\) \\ \([\mathrm{d}^{-1}]\) & & \\ \hline 33.48997 & 3 & 0 \\ 46.46549 & 5 & 0 \\ 53.51968 & 6 & 0 \\ 60.60165 & 7 & 0 \\ 67.65639 & 8 & 0 \\ 35.99870 & 3 & 1 \\ 50.04504 & 5 & 1 \\ 57.18150 & 6 & 1 \\ 64.19957 & 7 & 1 \\ \hline \end{tabular} \end{table} Table 2: Identified modes for HD 99506 used as inputs in the modelling. Figure 8: The posterior distributions in \(M\), \(Z_{\mathrm{in}}\) and age for HD 99506, based on _TESS_ observations. Figure 9: The posterior predicted frequencies overlaid on observed mode frequencies for HD 99506. The greyscale is the observed amplitude spectrum smoothed by a Gaussian of width 4 times the frequency resolution. ## Software Below we include additional software used in this work which has not explicitly been mentioned above. * Python Van Rossum and Drake Jr (1995) * matplotlib Hunter (2007) * Numpy Harris et al. (2020) * Scipy Virtanen et al. (2020) * Pandas Reback et al. (2020) * corner Foreman-Mackey (2016) * lightkurve Lightkurve Collaboration et al. (2018) * echelle Hey and Ball (2020)
2307.13007
Sparse-firing regularization methods for spiking neural networks with time-to-first spike coding
The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing time of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency. This low firing frequency increases the energy efficiency of information processing in SNNs, which is important not only because of its similarity with information processing in the brain, but also from an engineering point of view. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. The first is the membrane potential-aware SSR (M-SSR) method, which has been derived as an extreme form of the loss function of the membrane potential value. The second is the firing condition-aware SSR (F-SSR) method, which is a regularization function obtained from the firing conditions. Both methods are characterized by the fact that they only require information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures.
Yusuke Sakemi, Kakei Yamamoto, Takeo Hosomi, Kazuyuki Aihara
2023-07-24T11:55:49Z
http://arxiv.org/abs/2307.13007v1
# Sparse-firing regularization methods for spiking neural networks with time-to-first spike coding ###### Abstract The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing time of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency. This low firing frequency increases the energy efficiency of information processing in SNNs, which is important not only because of its similarity with information processing in the brain, but also from an engineering point of view. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. The first is the membrane potential-aware SSR (M-SSR) method, which has been derived as an extreme form of the loss function of the membrane potential value. The second is the firing condition-aware SSR (F-SSR) method, which is a regularization function obtained from the firing conditions. Both methods are characterized by the fact that they only require information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures. ## Introduction Spiking neural networks (SNNs) can process information in the form of spikes in a manner similar to the way information is processed in the brain. SNNs are thereby expected to be able to achieve both high computational functionality and energy efficiency [1]. The spikes are represented as all-or-none binary values, and how information is represented by spikes is closely related to the information-processing mechanism in SNNs. The spike-based information representation methods are divided into two major categories, rate coding and temporal coding [2, 3]. In rate coding, information is contained in the average number of spikes generated by a neuron. In this case, the firing frequency can take approximately continuous values as a function of the input intensities; therefore, the resulting SNNs can be treated as differentiable models similar to an artificial neural network (ANN). Using rate coding, ANNs can be converted to SNNs, and the high learning ability of ANNs has been successfully transferred to SNNs [4, 5, 6]. However, when rate coding is used, information processing in the SNNs is just an approximation of that in ANNs. Furthermore, the precise approximation of an ANN requires many spikes, which reduces energy efficiency when implemented in neuromorphic hardware [7]. It has been experimentally shown that physiologically, neurons in certain brain regions or specific neuron types exhibit extremely sparse firing characteristics [8], and it is thought that temporal coding using not only the firing frequency but also the firing time is realized in at least some brain regions [9, 10, 11, 12].
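To illustrate the coding scheme, here is one common linear variant of TTFS encoding in Python. This is a sketch only; the paper's exact input encoding may differ.

```python
import numpy as np

def ttfs_encode(x, t_max=1.0):
    """Map intensities in [0, 1] to first-spike times: stronger inputs
    fire earlier, and each input emits at most one spike."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    t = t_max * (1.0 - x)               # intensity 1 -> t = 0; intensity 0 -> t = t_max
    return np.where(x > 0.0, t, np.inf)  # silent inputs never fire
```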
2307.07700
NeurASP: Embracing Neural Networks into Answer Set Programming
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can be used to train a neural network better by training with ASP rules so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
Zhun Yang, Adam Ishay, Joohyung Lee
2023-07-15T04:03:17Z
http://arxiv.org/abs/2307.07700v1
# NeurASP: Embracing Neural Networks into Answer Set Programming ###### Abstract We present \(\mathrm{NeurASP}\), a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, \(\mathrm{NeurASP}\) provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how \(\mathrm{NeurASP}\) can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, \(\mathrm{NeurASP}\) can be used to train a neural network better by training with ASP rules so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules. ## 1 Introduction The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Today, the topic is revisited with the recent rise of deep neural networks. Several proposals were made to implement the reasoning process in complex neural network architectures, e.g., [1, 1, 1, 2, 3, 4, 5, 6, 7]. However, it is still not clear how complex and high-level reasoning, such as default reasoning [15], ontology reasoning [1], and causal reasoning [2], can be successfully computed by these approaches. The latter subject has been well-studied in the area of knowledge representation (KR), but many KR formalisms, including answer set programming (ASP) [11, 12], are logic-oriented and do not incorporate high-dimensional vector space and pre-trained models for perception tasks as handled in deep learning, which limits the applicability of KR in many practical applications involving data and uncertainty. In this paper, we present a simple extension of answer set programs by embracing neural networks. Following the idea of DeepProbLog [13], by treating the neural network output as the probability distribution over atomic facts in answer set programs, the proposed \(\mathrm{NeurASP}\) provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how \(\mathrm{NeurASP}\) can be useful for some tasks where both perception and reasoning are required. Reasoning can help identify perception mistakes that violate semantic constraints, which in turn can make perception more robust. For example, a neural network for object detection may return a bounding box and its classification "car," but it may not be clear whether it is a real car or a toy car. The distinction can be made by applying reasoning about the relations with the surrounding objects and using commonsense knowledge. Or when it is unclear whether a round object attached to the car is a wheel or a doughnut, the reasoner could conclude that it is more likely to be a wheel by applying commonsense knowledge. In the case of a neural network that recognizes digits in a given Sudoku board, the neural network may get confused if a digit next to \(1\) in the same row is \(1\) or \(2\), but the reasoner can conclude that it cannot be \(1\) by applying the constraints for Sudoku. Another benefit of this hybrid approach is that it alleviates the burden of neural networks when the constraints/knowledge are already given. 
Instead of building a large end-to-end neural network that learns to solve a Sudoku puzzle given as an image, we can let a neural network only do digit recognition and use ASP to find the solution of the recognized board. This makes the design of the neural network simpler and the required training dataset much smaller. Also, when we need to solve some variation of Sudoku, such as Anti-knight or Offset Sudoku, the modification is simpler than training another large neural network from scratch to solve the new puzzle. \(\mathrm{NeurASP}\) can also be used to train a neural network together with rules so that a neural network not only learns from implicit correlations from the data but also from explicit complex semantic constraints expressed by ASP rules. The _semantic loss_[22] obtained from the reasoning module can be backpropagated into the rule layer and then further into neural networks via neural atoms. This sometimes makes a neural network learn better even with fewer data. Compared to DeepProbLog, \(\mathrm{NeurASP}\) supports a rich set of KR constructs supported by answer set programming that allows for convenient representation of complex knowledge. It utilizes an ASP solver in computation instead of constructing circuits as in DeepProbLog. The paper is organized as follows. Section 2 introduces the syntax and the semantics of \(\mathrm{NeurASP}\). Section 3 illustrates how reasoning in \(\mathrm{NeurASP}\) can enhance the perception result by considering relations among objects perceived by pre-trained neural networks. Section 4 presents learning in \(\mathrm{NeurASP}\) where ASP rules work as a semantic regularizer for training neural networks so that neural networks are trained not only from data but also from rules. Section 5 examines related works and Section 6 concludes. The implementation of \(\mathrm{NeurASP}\), as well as codes used for the experiments, is publicly available online at [https://github.com/azreasoners/NeurASP](https://github.com/azreasoners/NeurASP). ## 2 \(\mathrm{NeurASP}\) We present the syntax and the semantics of \(\mathrm{NeurASP}\). ### Syntax We assume that neural network \(M\) allows an arbitrary tensor as input whereas the output is a matrix in \(\mathbb{R}^{e\times n}\), where \(e\) is the number of random events predicted by the neural network and \(n\) is the number of possible outcomes for each random event. Each row of the matrix represents the probability distribution of the outcomes of each event. For example, if \(M\) is a neural network for MNIST digit classification, then the input is a tensor representation of a digit image, \(e\) is \(1\), and \(n\) is \(10\). If \(M\) is a neural network that outputs a Boolean value for each edge in a graph, then \(e\) is the number of edges and \(n\) is \(2\). Given an input tensor \(\mathbf{t}\), by \(M(\mathbf{t})\), we denote the output matrix of \(M\). The value \(M(\mathbf{t})[i,j]\) (where \(i\in\{1,\ldots,e\}\), \(j\in\{1,\ldots,n\}\)) is the probability of the \(j\)-th outcome of the \(i\)-th event upon the input \(\mathbf{t}\). 
In \(\mathrm{NeurASP}\), the neural network \(M\) above can be represented by a _neural atom_ of the form \[nn(m(e,t),[v_{1},\ldots,v_{n}]), \tag{1}\] where (i) \(nn\) is a reserved keyword to denote a neural atom; (ii) \(m\) is an identifier (symbolic name) of the neural network \(M\); (iii) \(t\) is a list of terms that serves as a "pointer" to an input data; related to it, there is a mapping \(\mathbf{D}\) (implemented by an external Python code) that turns \(t\) into an input tensor; (iv) \(v_{1},\ldots,v_{n}\) represent all \(n\) possible outcomes of each of the \(e\) random events. Each neural atom (1) introduces propositional atoms of the form \(c=v\), where \(c\in\{m_{1}(t),\ldots,m_{e}(t)\}\) and \(v\in\{v_{1},\ldots,v_{n}\}\). The output of the neural network provides the probabilities of the introduced atoms (defined in Section 2.2). **Example 1**: _Let \(M_{digit}\) be a neural network that classifies an MNIST digit image. The input of \(M_{digit}\) is (a tensor representation of) an image and the output is a matrix in \(\mathbb{R}^{1\times 10}\). The neural network can be represented by the neural atom_ \[nn(digit(1,d),\ [0,1,2,3,4,5,6,7,8,9]),\] _which introduces propositional atoms \(digit_{1}(d)=0\), \(digit_{1}(d)=1\), \(\ldots\), \(digit_{1}(d)=9\)._ **Example 2**: _Let \(M_{sp}\) be another neural network for finding the shortest path in a graph with 24 edges. The input is a tensor encoding the graph and the start/end nodes of the path, and the output is a matrix in \(\mathbb{R}^{24\times 2}\). This neural network can be represented by the neural atom_ \[nn(sp(24,g),\ [\textsc{true},\textsc{false}]).\] A \(\mathrm{NeurASP}\)_program_\(\Pi\) is the union of \(\Pi^{asp}\) and \(\Pi^{nn}\), where \(\Pi^{asp}\) is a set of propositional rules (standard rules as in ASP-Core 2 (Calimeri _et al._, 2020)) and \(\Pi^{nn}\) is a set of neural atoms. Let \(\sigma^{nn}\) be the set of all atoms \(m_{i}(t)=v_{j}\) that is obtained from the neural atoms in \(\Pi^{nn}\) as described above. We require that, in each rule \(\imath Head\leftarrow\imath Body\) in \(\Pi^{asp}\), no atoms in \(\sigma^{nn}\) appear in \(\imath Head\). We could allow schematic variables into \(\Pi\), which are understood in terms of grounding as in standard answer set programs. We find it convenient to use rules of the form \[nn(m(e,t),[v_{1},\ldots,v_{n}])\leftarrow\imath Body \tag{2}\] where \(\imath Body\) is either identified by \(\top\) or \(\bot\) during grounding so that (2) can be viewed as an abbreviation of multiple (variable-free) neural atoms (1). **Example 3**: _An example \(\mathrm{NeurASP}\) program \(\Pi_{digit}\) is as follows, where \(d_{1}\) and \(d_{2}\) are terms representing two images. Each image is classified by neural network \(M_{digit}\) as one of the values in \(\{0,\ldots,9\}\). The addition of two digit-images is the sum of their values._ \[\begin{array}{l}img(d_{1}).\\ img(d_{2}).\\ nn(digit(1,X),[0,1,2,3,4,5,6,7,8,9])\leftarrow img(X).\\ addition(A,B,N)\leftarrow\mathit{digit}_{1}(A)=N_{1},digit_{1}(B)=N_{2},\\ N=N_{1}+N_{2}.\end{array} \tag{3}\] _The neural network \(M_{digit}\) outputs 10 probabilities for each image. 
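Anticipating the semantics defined in the next subsection, the probability that \(\Pi_{digit}\) assigns to \(addition(d_{1},d_{2},s)\) can be computed directly from the two network outputs, since every total choice here yields exactly one stable model. A sketch follows, with `p1` and `p2` the softmax outputs of \(M_{digit}\) for the two images.

```python
import numpy as np

def addition_probs(p1, p2):
    """Return P(addition(d1, d2, s)) for s = 0..18: sum over all pairs
    of digit choices, whose probabilities multiply."""
    probs = np.zeros(19)
    for n1 in range(10):
        for n2 in range(10):
            probs[n1 + n2] += p1[n1] * p2[n2]
    return probs
```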
The addition is applied once the digits are recognized, and its probability is induced from the perception, as the sketch above anticipates and as we formalize next.

### Semantics

For any \(\mathrm{NeurASP}\) program \(\Pi=\Pi^{asp}\cup\Pi^{nn}\), we first obtain its ASP counterpart \(\Pi^{\prime}=\Pi^{asp}\cup\Pi^{ch}\) where \(\Pi^{ch}\) consists of the following set of rules for each neural atom (1) in \(\Pi^{nn}\) \[\{m_{i}(t)=v_{1};\ \ldots;\ m_{i}(t)=v_{n}\}=1\quad\text{for }i\in\{1,\ldots,e\}.\] The above rule (in the language of clingo) means to choose exactly one atom in between the set braces.1 We define the _stable models_ of \(\Pi\) as the stable models of \(\Pi^{\prime}\), and define the _total choices_ of \(\Pi\) as the stable models of \(\Pi^{ch}\). For each total choice \(C\) of \(\Pi\), we use \(Num(C,\Pi)\) to denote the number of stable models of \(\Pi\) that satisfy \(C\). We require a \(\mathrm{NeurASP}\) program \(\Pi\) to be _coherent_ such that \(Num(C,\Pi)>0\) for every total choice \(C\) of \(\Pi\). Footnote 1: In practice, each atom \(m_{i}(t)=v\) is written as \(m(i,t,v)\). To define the probability of a stable model, we first define the probability of an atom \(m_{i}(t)=v_{j}\) in \(\sigma^{nn}\). Recall that there is an external mapping \(\mathbf{D}\) that turns \(t\) into a specific input tensor of \(M\). The probability of each atom \(m_{i}(t)\!=\!v_{j}\) is defined as \(M(\mathbf{D}(t))[i,j]\): \[P_{\Pi}(m_{i}(t)\!=\!v_{j})=M(\mathbf{D}(t))[i,j].\] For instance, recall that the output matrix of \(M_{digit}(\mathbf{D}(d))\) in Example 3 is in \(\mathbb{R}^{1\times 10}\). The probability of atom \(digit_{1}(d)=k\) is \(M_{digit}(\mathbf{D}(d))[1,k\!+\!1]\). Given an interpretation \(I\), by \(I|_{\sigma^{nn}}\), we denote the projection of \(I\) onto \(\sigma^{nn}\). Since \(I|_{\sigma^{nn}}\) is a total choice of \(\Pi\), \(Num(I|_{\sigma^{nn}},\Pi)\) is the number of stable models of \(\Pi\) that agree with \(I|_{\sigma^{nn}}\) on \(\sigma^{nn}\). The probability of a stable model \(I\) of \(\Pi\) is defined as the product of the probability of each atom \(c=v\) in \(I|_{\sigma^{nn}}\), divided by the number of stable models of \(\Pi\) that agree with \(I|_{\sigma^{nn}}\) on \(\sigma^{nn}\). That is, for any interpretation \(I\), \[P_{\Pi}(I)=\begin{cases}\frac{\prod\limits_{c=v\in I|_{\sigma^{nn}}}P_{\Pi}(c=v)}{Num(I|_{\sigma^{nn}},\Pi)}&\text{if $I$ is a stable model of $\Pi$;}\\ 0&\text{otherwise.}\end{cases}\] An _observation_ is a set of ASP constraints (i.e., rules of the form \(\bot\leftarrow Body\)). The probability of an observation \(O\) is defined as the sum of the probabilities of the stable models that satisfy it: \[P_{\Pi}(O)=\sum_{I\,\models\,O}P_{\Pi}(I).\]

## 3 Inference in \(\mathrm{NeurASP}\)

The neural network model \(M_{label}\) outputs that the red boxes are persons, the yellow boxes are cars, and the green box is a truck.
For example, consider two images \(i_{1}\) and \(i_{2}\) containing persons, cars, and a truck, processed by an object-detection network \(M_{label}\) together with commonsense rules relating the relative sizes and distances of the detected bounding boxes to whether a car is a toy. The neural network model \(M_{label}\) outputs that the red boxes are persons, the yellow boxes are cars, and the green box is a truck. Upon this input and these rules, \(\mathrm{NeurASP}\) allows us to derive that the two cars in image \(i_{1}\) are _toy_ cars, whereas the two cars in image \(i_{2}\) are not: although they are surrounded by smaller boxes than those of humans, their boxes are not closer to the camera.

### Example: Solving Sudoku Puzzle in Image

Consider the task of solving a Sudoku puzzle given as an image. In \(\mathrm{NeurASP}\), we could use a neural network to recognize the digits in the given puzzle and use an ASP solver to compute the solution, instead of having a single network that accounts for both perception and solving. We use the following \(\mathrm{NeurASP}\) program \(\Pi_{sudoku}\) to first identify the digits in each grid cell on the board and then find the solution by assigning digits to all empty grid cells.3

Footnote 3: The expression \(\{a(R,C,N):N=1..9\}=1\) is a shorthand for \(\{a(R,C,1);\ldots;a(R,C,9)\}=1\) in the language of clingo.

```
% identify the number in each of the 81 positions
nn(identify(81, img), [empty,1,2,3,4,5,6,7,8,9]).

% assign one number N to each position (R,C)
a(R,C,N) :- identify(Pos, img, N), R=Pos/9, C=Pos\9, N!=empty.
{a(R,C,N): N=1..9}=1 :- identify(Pos, img, empty), R=Pos/9, C=Pos\9.

% no number repeats in the same row
:- a(R,C1,N), a(R,C2,N), C1!=C2.

% no number repeats in the same column
:- a(R1,C,N), a(R2,C,N), R1!=R2.

% no number repeats in the same 3*3 box
:- a(R,C,N), a(R1,C1,N), R!=R1, C!=C1,
   ((R/3)*3 + C/3) = ((R1/3)*3 + C1/3).
```

The neural network model \(M_{identify}\) is rather simple. It is composed of 5 convolutional layers with dropout, a max pooling layer, and a \(1\times 1\) convolutional layer followed by softmax (a sketch of this architecture is given below). Given a Sudoku board image (.png file), neural network \(M_{identify}\) outputs a matrix in \(\mathbb{R}^{81\times 10}\), which represents the probabilities of the values (empty, 1, ..., 9) in each of the 81 grid cells. The network \(M_{identify}\) is pre-trained using \(\langle image,label\rangle\) pairs, where each \(image\) is a Sudoku board image generated by _OpenSky Sudoku Generator_ ([http://www.opensky.ca/~jdhildeb/software/sudokugen/](http://www.opensky.ca/~jdhildeb/software/sudokugen/)) and each \(label\) is a vector of length 81 in which 0 is used to represent an empty cell at that position.
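A minimal PyTorch sketch of such a perception network is below; only the overall shape (5 convolutional layers with dropout, a max pooling layer, and a \(1\times 1\) convolution followed by softmax over the 81 cells) comes from the description above, while the channel widths, dropout rate, and input resolution are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Identify(nn.Module):
    """Sketch of M_identify: 5 conv layers with dropout, a max pooling layer,
    and a 1x1 convolution followed by softmax over the 10 values
    (empty, 1..9) for each of the 81 grid cells."""
    def __init__(self):
        super().__init__()
        blocks = []
        ch = [1, 32, 32, 64, 64, 128]                # assumed channel widths
        for c_in, c_out in zip(ch, ch[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(), nn.Dropout2d(0.25)]
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveMaxPool2d((9, 9))     # one cell per position
        self.head = nn.Conv2d(128, 10, kernel_size=1)

    def forward(self, x):                            # x: (B, 1, H, W) board image
        z = self.head(self.pool(self.features(x)))   # (B, 10, 9, 9)
        z = z.flatten(2).transpose(1, 2)              # (B, 81, 10)
        return z.softmax(dim=-1)                      # probabilities per cell

probs = Identify()(torch.randn(1, 1, 252, 252))       # -> shape (1, 81, 10)
```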
Let \(Acc_{identify}\) denote the accuracy of identifying all empty cells and the digits on the board given as an image without making a single mistake in a grid cell. Let \(Acc_{sol}\) denote the accuracy of solving a given Sudoku board without making a single mistake in a grid cell. Let \(r\) be the following rule in \(\Pi_{sudoku}\):

```
{a(R,C,N): N=1..9}=1 :- identify(Pos, img, empty), R=Pos/9, C=Pos\9.
```

Intuitively, \(\Pi_{sudoku}\setminus r\) only checks whether the identified numbers (by neural network \(M_{identify}\)) satisfy the three constraints (the last three rules of \(\Pi_{sudoku}\)), while \(\Pi_{sudoku}\) further checks whether there exists a solution given the identified numbers. As shown in Table 1, the use of reasoning in \(\mathrm{NeurASP}\) program \(\Pi_{sudoku}\setminus r\) improves the accuracy \(Acc_{identify}\) of the neural network \(M_{identify}\) as explained in the introduction. The accuracy \(Acc_{identify}\) is further improved by trying to solve Sudoku completely using \(\Pi_{sudoku}\). Note that the solution accuracy \(Acc_{sol}\) of \(\Pi_{sudoku}\) is equal to the perception accuracy \(Acc_{identify}\) of \(\Pi_{sudoku}\), since the ASP yields a 100% correct solution once the board is correctly identified.

\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Num of & \(Acc_{identify}\) of & \(Acc_{identify}\) of & \(Acc_{identify}\) of & \(Acc_{sol}\) of \\ Train Data & \(M_{identify}\) & \(\mathrm{NeurASP}\) w/ & \(\mathrm{NeurASP}\) w/ & \(\mathrm{NeurASP}\) w/ \\ & & \(\Pi_{sudoku}\setminus r\) & \(\Pi_{sudoku}\) & \(\Pi_{sudoku}\) \\ \hline 15 & 15\% & 49\% & 71\% & 71\% \\ 17 & 31\% & 62\% & 80\% & 80\% \\ 19 & 72\% & 90\% & 95\% & 95\% \\ 21 & 85\% & 95\% & 98\% & 98\% \\ 23 & 93\% & 99\% & 100\% & 100\% \\ 25 & 100\% & 100\% & 100\% & 100\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sudoku: Accuracy on Test Data

Palm _et al._ [2018] use a Graph Neural Network to solve Sudoku, but the work restricts attention to textual input of the Sudoku board, not images as we do. Their work achieves 96.6% accuracy after training with 216,000 examples. In comparison, even with the more challenging task of accepting images as input, the number of training examples we used is \(15-25\), which is much less than the number of training examples used in [Palm _et al._, 2018]. Our work takes advantage of the fact that in a problem like Sudoku, where the constraints are explicitly given, a neural network only needs to focus on perception tasks, which is simpler than learning the perception and reasoning together. Furthermore, using the same trained perception neural network \(M_{identify}\), we can solve some elaborations of Sudoku problems by adding the following rules:

**[Anti-knight Sudoku]** No number repeats at a knight move
```
:- a(R1,C1,N), a(R2,C2,N), |R1-R2|+|C1-C2|=3.
```

**[Sudoku-X]** No number repeats at the diagonals
```
:- a(R1,C1,N), a(R2,C2,N), R1=C1, R2=C2, R1!=R2.
:- a(R1,C1,N), a(R2,C2,N), R1+C1=8, R2+C2=8, R1!=R2.
```

With a neural-network-only approach, since the neural network needs to learn both perception and reasoning, each of the above variations would require training a complex and different model with a big dataset. However, with \(\mathrm{NeurASP}\), the neural network only needs to recognize digits on the board. Thus solving each Sudoku variation above uses the same pre-trained model for the image input, and we only need to add the aforementioned rules to \(\Pi_{sudoku}\). Some Sudoku variations, such as Offset Sudoku, are in colored images. In this case, we need to increase the number of channels of \(M_{identify}\) from 1 to 3 and retrain the neural network with the colored images. Although not completely elaboration tolerant, compared to the pure neural network approach, this is significantly simpler. For instance, the number of training data needed to get 100% perception accuracy for Offset Sudoku (\(Acc_{identify}\)) is 70, which is still much smaller than what an end-to-end Sudoku solver would require. Using the newly trained network, we only need to add the following rule to \(\Pi_{sudoku}\).

**[Offset Sudoku]** No number repeats at the same relative position in 3*3 boxes
```
:- a(R1,C1,N), a(R2,C2,N), R1\3 = R2\3, C1\3 = C2\3, R1 != R2, C1 != C2.
```

## 4 Learning in \(\mathrm{NeurASP}\)

We show how the semantic constraints expressed in \(\mathrm{NeurASP}\) can be used to train neural networks better.
### Gradient Ascent with \(\mathrm{NeurASP}\)

In this section, we denote a \(\mathrm{NeurASP}\) program by \(\Pi(\mathbf{\theta})\), where \(\mathbf{\theta}\) is the set of the parameters in the neural network models associated with \(\Pi\). Assume a \(\mathrm{NeurASP}\) program \(\Pi(\mathbf{\theta})\) and a set \(\mathbf{O}\) of observations such that \(P_{\Pi(\mathbf{\theta})}(O)>0\) for each \(O\in\mathbf{O}\). The task is to find \(\hat{\mathbf{\theta}}\) that maximizes the log-likelihood of the observations \(\mathbf{O}\) under program \(\Pi(\mathbf{\theta})\), i.e., \[\hat{\mathbf{\theta}}\in\underset{\mathbf{\theta}}{\mathrm{argmax}}\ \log(P_{\Pi(\mathbf{\theta})}(\mathbf{O})),\] which is equivalent to \[\hat{\mathbf{\theta}}\in\underset{\mathbf{\theta}}{\mathrm{argmax}}\ \sum\limits_{O\in\mathbf{O}}\log(P_{\Pi(\mathbf{\theta})}(O)).\] Let \(\mathbf{p}\) denote the probabilities of the atoms in \(\sigma^{nn}\). Since \(\mathbf{p}\) is indeed the output of the neural networks in \(\Pi(\mathbf{\theta})\), we can compute the gradient of \(\mathbf{p}\) w.r.t. \(\mathbf{\theta}\) through backpropagation. Then the gradient of \(\sum_{O\in\mathbf{O}}\log(P_{\Pi(\mathbf{\theta})}(O))\) w.r.t. \(\mathbf{\theta}\) is \[\frac{\partial\sum\limits_{O\in\mathbf{O}}\log(P_{\Pi(\mathbf{\theta})}(O))}{\partial\mathbf{\theta}}=\sum\limits_{O\in\mathbf{O}}\frac{\partial\log(P_{\Pi(\mathbf{\theta})}(O))}{\partial\mathbf{p}}\times\frac{\partial\mathbf{p}}{\partial\mathbf{\theta}},\] where \(\frac{\partial\mathbf{p}}{\partial\mathbf{\theta}}\) can be computed through the usual neural network backpropagation, while \(\frac{\partial\log(P_{\Pi(\mathbf{\theta})}(O))}{\partial p}\) for each \(p\in\mathbf{p}\) can be computed as follows.

**Proposition 1**: _Let \(\Pi(\mathbf{\theta})\) be a \(\mathrm{NeurASP}\) program and let \(O\) be an observation such that \(P_{\Pi(\mathbf{\theta})}(O)>0\). Let \(p\) denote the probability of an atom \(c=v\) in \(\sigma^{nn}\), i.e., \(p\) denotes \(P_{\Pi(\mathbf{\theta})}(c=v)\). We have that4_ \[\frac{\partial\log(P_{\Pi(\mathbf{\theta})}(O))}{\partial p}=\frac{\sum\limits_{I:\ I\models O,\ I\models c=v}\frac{P_{\Pi(\mathbf{\theta})}(I)}{P_{\Pi(\mathbf{\theta})}(c=v)}\ -\sum\limits_{I,v^{\prime}:\ I\models O,\ I\models c=v^{\prime},\ v^{\prime}\neq v}\frac{P_{\Pi(\mathbf{\theta})}(I)}{P_{\Pi(\mathbf{\theta})}(c=v^{\prime})}}{\sum\limits_{I:\ I\models O}P_{\Pi(\mathbf{\theta})}(I)}.\]

Footnote 4: \(\frac{P_{\Pi(\mathbf{\theta})}(I)}{P_{\Pi(\mathbf{\theta})}(c=v)}\) and \(\frac{P_{\Pi(\mathbf{\theta})}(I)}{P_{\Pi(\mathbf{\theta})}(c=v^{\prime})}\) are still well-defined since the denominators are factors of \(P_{\Pi(\mathbf{\theta})}(I)\).

Intuitively, the proposition tells us that each interpretation \(I\) that satisfies \(O\) tends to increase the value of \(p\) if \(I\models c=v\), and decrease the value of \(p\) if \(I\models c=v^{\prime}\) such that \(v^{\prime}\neq v\). \(\mathrm{NeurASP}\) internally calls clingo to find all stable models \(I\) of \(\Pi(\mathbf{\theta})\) that satisfy \(O\), and uses PyTorch to obtain the probability of each atom \(c=v\) in \(\sigma^{nn}\).
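A small numeric illustration of Proposition 1 (ours, with made-up probabilities and observation) for \(\Pi_{digit}\):

```python
# Numeric illustration of Proposition 1 on Pi_digit, for the observation
# "the two digits sum to 3". p1, p2 stand in for the rows output by M_digit;
# every total choice extends to exactly one stable model, so P(I) is simply
# the product of the chosen atoms' probabilities.
p1 = [0.1] * 10
p2 = [0.1] * 10
models = [(v1, v2) for v1 in range(10) for v2 in range(10) if v1 + v2 == 3]
P = {(v1, v2): p1[v1] * p2[v2] for v1, v2 in models}

def dlogP_dp(v):
    """Gradient of log P(O) w.r.t. p = P(digit_1(d1)=v), per Proposition 1."""
    pos = sum(P[i] / p1[v] for i in models if i[0] == v)
    neg = sum(P[i] / p1[i[0]] for i in models if i[0] != v)
    return (pos - neg) / sum(P.values())

print(dlogP_dp(0), dlogP_dp(9))  # -5.0 vs -10.0: digit values consistent
                                 # with O get the larger (less negative) gradient
```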
### Experiment 1: Learning Digit Classification from Addition

All experiments in Section 4 were done on Ubuntu 18.04.2 LTS with two 10-core CPUs Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz and four GP104 [GeForce GTX 1080]. The digit addition problem is a simple example used in [10] to illustrate DeepProbLog's ability for both logical reasoning and deep learning. The task is, given a pair of digit images (MNIST) and their sum as the label, to let a neural network learn the digit classification of the input images. The problem can be represented by the \(\mathrm{NeurASP}\) program \(\Pi_{digit}\) in Example 3. For comparison, we use the same dataset and the same structure of the neural network model used in [10] to train the digit classifier \(M_{digit}\) in \(\Pi_{digit}\). For each pair of images denoted by \(d_{1}\) and \(d_{2}\) and their sum \(n\), we construct the ASP constraint \(\leftarrow\mathit{not}\ addition(d_{1},d_{2},n)\) as the observation \(O\). The training target is to maximize \(\log(P_{\Pi_{digit}}(O))\).

Figure 2 shows how the forward and the backward propagations are done for the \(\mathrm{NeurASP}\) program \(\Pi_{digit}\) in Example 3. The left-to-right direction is the forward computation of the neural network extended with the rule layer, whose output is the probability of the observation \(O\). The right-to-left direction shows how the gradient from the rule layer is backpropagated further into the neural network by the chain rule to update all neural network parameters so as to find the parameter values that maximize the probability of the given observation.

Figure 2: NeurASP Gradient Propagation

Figure 3 shows the accuracy on the test data after each training iteration. The method CNN denotes the baseline used in [10] where a convolutional neural network (with more parameters) is trained to classify the concatenation of the two images into the 19 possible sums. As we can see, the neural networks trained by \(\mathrm{NeurASP}\) and DeepProbLog converge much faster than CNN and have almost the same accuracy at each iteration. However, \(\mathrm{NeurASP}\) spends much less time on training compared to DeepProbLog. The time reported is for one epoch (30,000 iterations in gradient descent). This is because DeepProbLog constructs an SDD (Sentential Decision Diagram) at each iteration for each training instance (i.e., each pair of images). This example illustrates that generating many SDDs could be more time-consuming than enumerating stable models in the \(\mathrm{NeurASP}\) computation. In general, there is a trade-off between the two methods, and other examples may show the opposite behavior.

Figure 3: \(\mathrm{NeurASP}\) vs. DeepProbLog
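For this particular program, the probability of the observation reduces to a sum over digit pairs, so the rule layer of Figure 2 can be sketched in a few lines of PyTorch (a hand-rolled illustration equivalent, for this program, to enumerating the stable models that satisfy \(O\); it is not the NeurASP implementation):

```python
import torch

def addition_nll(probs1, probs2, label):
    """Negative log probability of the observation "digits sum to label",
    marginalizing over all digit pairs, as in the rule layer of Figure 2.
    probs1 and probs2 are the softmax outputs of M_digit for the two images."""
    p = sum(probs1[v1] * probs2[label - v1]
            for v1 in range(10) if 0 <= label - v1 <= 9)
    return -torch.log(p)

# Toy usage: logits would normally come from the CNN; gradients flow back
# through probs1/probs2 into the network parameters, as in Figure 2.
logits1 = torch.randn(10, requires_grad=True)
logits2 = torch.randn(10, requires_grad=True)
loss = addition_nll(logits1.softmax(0), logits2.softmax(0), label=7)
loss.backward()
```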
### Experiment 2: Learning How to Solve Sudoku

In Section 3.2, we used a neural network \(M_{identify}\) to identify the numbers on a Sudoku board and used ASP rules to solve the Sudoku problem. In this section, we use a neural network to learn to solve Sudoku problems. The task is, given the _textual representation_ of an unsolved Sudoku board (in the form of a \(9\times 9\) matrix where an empty cell is represented by 0), to let a neural network learn to predict the solution of the Sudoku board. We use the neural network \(M_{sol}\) from [Park, 2018] as the baseline. \(M_{sol}\) is composed of 9 convolutional layers and a 1x1 convolution layer followed by softmax. Park trained \(M_{sol}\) using 1 million examples and achieved 70% accuracy using an "inference trick": instead of predicting digits for all empty cells at once, which leads to poor accuracy, the most probable grid-cell value was predicted one by one. Since the current \(\mathrm{NeurASP}\) implementation is not as scalable as neural network training, training on 1 million examples takes too long. Thus, we construct a dataset of 63,000 + 1,000 \(\langle config,label\rangle\) pairs for training and testing. Using Park's method on this relatively small dataset, we observe that \(M_{sol}\)'s highest whole-board accuracy \(Acc_{sol}\)5 is only 29.1% and \(M_{sol}\)'s highest grid-cell accuracy6 is only 89.3% after 63 epochs of training.

Footnote 5: The percentage of Sudoku examples that are correctly solved.

Footnote 6: The percentage of grid cells having correct digits regardless of whether the Sudoku solution is correct.

We get a better result by training \(M_{sol}\) with the \(\mathrm{NeurASP}\) program \(\Pi_{sol}\). The program is almost the same as \(\Pi_{sudoku}\) in Section 3.2 except that it uses \(M_{sol}\) in place of \(M_{identify}\) and the first three rules of \(\Pi_{sudoku}\) are replaced with
```
nn(sol(81,img), [1,2,3,4,5,6,7,8,9]).
a(R,C,N) :- sol(Pos, img, N), R=Pos/9, C=Pos\9.
```
because we do not have to assign the value empty in solving Sudoku. We trained \(M_{sol}\) using \(\mathrm{NeurASP}\) where the training target is to maximize the probability of all stable models that satisfy the observation. On the same test data, after 63 epochs of training, the highest whole-board accuracy of \(M_{sol}\) trained this way is 66.5% and the highest grid-cell accuracy is 96.9% (in other words, we use rules only during training and not during testing). This indicates that including such structured knowledge sometimes helps the training of the neural network significantly.

### Experiment 3: Learning Shortest Path (SP)

The experiment is about, given a graph and two points, finding the shortest path between them. We use the dataset from [20], which was used to demonstrate the effectiveness of semantic constraints for enhanced neural network learning. Each example is a 4 by 4 grid \(G=(V,E)\), where \(|V|=16,|E|=24\). The source and the destination nodes are randomly picked, and 8 edges are randomly removed to increase the difficulty. The dataset is divided into 60/20/20 train/validation/test examples. The following \(\mathrm{NeurASP}\) program,7 together with the union of the following 4 constraints, defines the shortest path.

Footnote 7: \(sp(X,g,true)\) means edge \(X\) is in the shortest path. \(sp(X,Y)\) means there is a path between nodes \(X\) and \(Y\) in the shortest path. ... \(sp(X,Y)\leftarrow sp(Y,X)\).

```
% [nr] 1. No removed edges should be predicted
:- sp(X,g,true), removed(X).

% [p] 2. Prediction must form a simple path, i.e.,
% the degree of each node must be either 0 or 2
:- X=0..15, #count{Y: sp(X,Y)} = 1.
:- X=0..15, #count{Y: sp(X,Y)} >= 3.

% [r] 3. Every 2 nodes in the prediction must be reachable
reachable(X,Y) :- sp(X,Y).
reachable(X,Y) :- reachable(X,Z), sp(Z,Y).
:- sp(X,A), sp(Y,B), not reachable(X,Y).

% [o] 4. Predicted path should contain least edges
:~ sp(X,g,true). [1,X]
```

In this experiment, we trained the same neural network model \(M_{sp}\) as in [20], a 5-layer Multi-Layer Perceptron (MLP), but with 4 different settings: (i) MLP only; (ii) together with \(\mathrm{NeurASP}\) with the simple-path constraint **(p)** (which is the only constraint used in [20]); (iii) together with \(\mathrm{NeurASP}\) with simple-path, reachability, and optimization constraints **(p-r-o)**; and (iv) together with \(\mathrm{NeurASP}\) with all 4 constraints **(p-r-o-nr)**. Table 2 shows, after 500 epochs of training, the percentage of the predictions on the test data that satisfy each of the constraints **p**, **r**, and **nr**, the path constraint (i.e., **p-r**), the shortest path constraint (i.e., **p-r-o-nr**), and the accuracy w.r.t. the ground truth.
The accuracies for the first experiment (MLP Only) show that \(M_{sp}\) was not trained well by only minimizing the cross-entropy loss of its prediction: 100-28.3 = 71.7% of the predictions are not even a simple path. In the remaining experiments (MLP (x)), instead of minimizing the cross-entropy loss, our training target is changed to maximizing the probability of all stable models under certain constraints. The accuracies under the 2nd and 3rd experiments (MLP (p) and MLP (p-r-o) columns) are increased significantly, showing that (i) including such structured knowledge helps the training of the neural network and (ii) the more structured knowledge is included, the better \(M_{sp}\) is trained under \(\mathrm{NeurASP}\). Compared to the results from [20], \(M_{sp}\) trained by \(\mathrm{NeurASP}\) with the simple-path constraint **p** (in the 2nd experiment, MLP (p) column) obtains a similar accuracy on predicting the label (28.9% vs. 28.5%) but a higher accuracy on predicting a simple path (96.6% vs. 69.9%). In the 4th experiment (MLP (p-r-o-nr) column), where we added the constraint **nr** saying that "no removed edges can be predicted", the accuracies go down. This is because the new constraint **nr** is about randomly removed edges, changing from one example to another, which is hard to generalize.

## 5 Related Work

Recent years have seen rising interest in combining perception and reasoning. As mentioned, the work on DeepProbLog [15] is closest to our work. Some differences are: (i) the computation of DeepProbLog relies on constructing circuits such as sentential decision diagrams (SDDs), whereas we use an ASP solver internally; (ii) \(\mathrm{NeurASP}\) employs expressive reasoning originating from answer set programming, such as defaults, aggregates, and optimization rules, which not only gives more expressive reasoning but also allows the more semantic-rich constructs as a guide to learning; (iii) DeepProbLog requires each training example to be a single atom, while \(\mathrm{NeurASP}\) allows each training example to be arbitrary propositional formulas. Also related is the use of semantic constraints to train neural networks better [23], but the constraints used in that work are simple propositional formulas, whereas we use the answer set programming language, in which it is more convenient to encode complex KR constraints. Logic Tensor Network [1] is also related in that it uses neural networks to provide fuzzy values to atoms. Another approach is to embed logic rules in neural networks by representing logical connectives by mathematical operations and allowing the value of an atom to be a real number. For example, Neural Theorem Prover (NTP) [11] adopts the idea of dynamic neural module networks [1] to embed logic conjunction and disjunction in and/or-module networks. A proof-tree-like end-to-end differentiable neural network is then constructed using Prolog's backward chaining algorithm with these modules. Another method that also constructs a proof-tree-like neural network is TensorLog [1], which uses matrix multiplication to simulate belief propagation and is tractable under the restriction that each rule is negation-free and can be transformed into a polytree. Graph neural networks (GNN) [18] are a neural network model that has been gaining more attention recently. Since a graph can encode objects and relations between objects, by learning message functions between the nodes, one can perform certain relational reasoning over the objects.
For example, in [13], it is shown that GNN can do well on Sudoku, but the input there is not an image but a textual representation. However, this is still restrictive compared to the more complex reasoning that KR formalisms provide. Neuro-Symbolic Concept Learner [16] separates visual perception from symbolic reasoning. It demonstrates data efficiency by using only 10% of the training data and achieving the state-of-the-art 98% accuracy on the CLEVR dataset. Our results are similar in the sense that, using symbolic reasoning, we could use less data to achieve a high accuracy. \(\mathrm{NeurASP}\) is similar to \(\mathrm{LP}^{\mathrm{MLN}}\) [10] in the sense that they are both probabilistic extensions of ASP and their semantics are defined by translations into ASP [10]. \(\mathrm{LP}^{\mathrm{MLN}}\) allows any rules to be weighted, whereas \(\mathrm{NeurASP}\) uses standard ASP rules.

## 6 Conclusion

We showed that \(\mathrm{NeurASP}\) can improve the neural network's perception result by applying reasoning over perceived objects, and can also help neural networks learn better by compensating for small data with knowledge and constraints. Since \(\mathrm{NeurASP}\) is a simple integration of ASP with neural networks, it retains each of ASP and neural networks in their individual forms, and can directly utilize the advances in each of them. The current implementation is a prototype and not highly scalable due to a naive computation of enumerating stable models. Future work includes making learning faster and analyzing the effects of the semantic constraints more systematically.

\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Predictions satisfying & MLP Only & MLP (p) & MLP (p-r-o) & MLP (p-r-o-nr) \\ \hline p & 28.3\% & 96.6\% & **100**\% & 30.1\% \\ r & 88.5\% & **100**\% & **100**\% & 87.3\% \\ nr & 32.9\% & 36.3\% & 45.7\% & **70.5**\% \\ p-r & 28.3\% & 96.6\% & **100**\% & 30.1\% \\ p-r-o-nr & 23.0\% & 33.2\% & **45.7**\% & 24.2\% \\ label (ground truth) & 22.4\% & 28.9\% & **40.1**\% & 22.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Shortest Path: Accuracy on Test Data: columns denote MLPs trained with different rules; each row represents the percentage of predictions that satisfy the constraints

## Acknowledgments

We are grateful to the anonymous referees for their useful comments. This work was partially supported by the National Science Foundation under Grant IIS-1815337.
2307.01417
Free energy of Bayesian Convolutional Neural Network with Skip Connection
Since the success of the Residual Network (ResNet), many architectures of Convolutional Neural Networks (CNNs) have adopted skip connections. While the generalization performance of CNNs with skip connections has been explained within the framework of ensemble learning, the dependency on the number of parameters has not been revealed. In this paper, we derive the Bayesian free energy of Convolutional Neural Networks both with and without skip connections in Bayesian learning. The upper bound of the free energy of a Bayesian CNN with skip connections does not depend on the overparametrization, and the generalization error of a Bayesian CNN has a similar property.
Shuya Nagayasu, Sumio Watanabe
2023-07-04T00:48:30Z
http://arxiv.org/abs/2307.01417v1
# Free energy of Bayesian Convolutional Neural Network with Skip Connection

###### Abstract

Since the success of the Residual Network (ResNet), many architectures of Convolutional Neural Networks (CNNs) have adopted skip connections. While the generalization performance of CNNs with skip connections has been explained within the framework of ensemble learning, the dependency on the number of parameters has not been revealed. In this paper, we derive the Bayesian free energy of Convolutional Neural Networks both with and without skip connections in Bayesian learning. The upper bound of the free energy of a Bayesian CNN with skip connections does not depend on the overparametrization, and the generalization error of a Bayesian CNN has a similar property.

**Keywords.**_Learning theory; Convolutional Neural Network; Bayesian Learning; Free Energy_

## 1 Introduction

Convolutional Neural Networks (CNNs) are a type of neural network mainly used for computer vision. CNNs have shown high performance with deep layers [1, 2]. The Residual Network (ResNet) [3] adopted the skip connection to address the problem that the loss function of a CNN with deep layers does not decrease well through optimization. After the success of ResNet, CNNs with more than 100 layers were realized. The high performance of ResNet has been explained by its similarity to ensemble learning [4, 5, 6]. On the other hand, there is a common issue in neural networks: the reason why overparametrized deep neural networks generalize is still unknown. In conventional learning theory, if the Fisher information matrix of a learning machine is positive definite and the data size is sufficiently large, the generalization error of the learning machine under the maximum likelihood estimator is determined by the number of its parameters [7]. A similar property holds for the free energy and generalization error in Bayesian learning [8, 9, 10]. From these characteristics of the generalization error and free energy, information criteria such as AIC, BIC, and MDL have been proposed. However, most hierarchical models such as neural networks have a degenerate Fisher information matrix. In such models, the Bayesian generalization error and free energy are determined by a rational number called the Real Log Canonical Threshold (RLCT), which is smaller than the number of parameters [11, 12]. In particular, RLCTs have been revealed for some concrete models such as three-layered neural networks [13, 14], normal mixtures [15, 16], Poisson mixtures [17], Boltzmann machines [18, 19], reduced rank regression [20], Latent Dirichlet allocation [21], matrix factorization, and Bayesian networks [22]. While the RLCTs of many hierarchical models have been revealed, those of neural networks with multiple layers of nonlinear transformation had not been clarified. After the possibility was shown in [23], the RLCT of Deep Neural Networks was revealed in [24]. On the other hand, the RLCTs of neural networks other than DNNs have not been explored. In Bayesian learning for neural networks, how to realize the posterior is important. There are two approaches for generating the posterior: variational approximation and Markov chain Monte Carlo (MCMC) methods. As variational approximations for neural networks, the Variational Autoencoder [25] and Monte Carlo dropout [26] are used in practice. Also for CNNs, a variational approach for Bayesian inference was proposed [27]. As MCMC for neural networks, Hamiltonian Monte Carlo and Langevin dynamics are useful for sampling from the posterior.
Stochastic Gradient Langevin Dynamics (SGLD) [28], an MCMC method that applies stochastic gradient descent instead of gradient descent to Langevin dynamics, is a popular MCMC method for Bayesian neural networks. [29] used SGLD for generating the posterior of CNNs. In this paper, we clarify the free energy and generalization error of Bayesian CNNs with and without skip connection. In both cases, the free energy and generalization error do not depend on the number of parameters in redundant filters. Moreover, in the case without skip connection, the redundant layers affect the free energy and generalization error, whereas they do not in the case with skip connection.

This paper consists of seven main sections and one appendix. In Section 2, we describe the setting of the Convolutional Neural Network analyzed in this paper. In Section 3, we explain the basic terms of Bayesian learning. In Section 4, we state the main theorems of this paper. In Section 5, we conduct an experiment on synthetic data. In Sections 6 and 7, we discuss the theorems and conclude. In Appendix A, we prove the main theorems of this paper.

## 2 Convolutional Neural Network

In this section we describe the function computed by the Convolutional Neural Network. First, we explain the CNN without skip connection. The kernel size is \(3\times 3\) with zero padding and stride 1. The activation function is ReLU. The numbers of layers of the CNN are \(K_{1}(\geq 3)\) for convolutional layers and \(K_{2}(\geq 3)\) for fully connected layers. Let \(x\in\mathbb{R}^{L_{1}\times L_{2}\times H_{1}}\) be an input generated from \(q(x)\) with bounded support and \(y\in\mathbb{N}\) be an output label with \(q(y|x)\) whose support is \(\{1,\dots,H_{K_{1}+K_{2}}\}\). We define \(w^{(k)}\in\mathbb{R}^{3\times 3\times H_{k-1}\times H_{k}}\), \(b^{(k)}\in\mathbb{R}^{H_{k}}\) as the weight and bias parameters in each convolutional layer (\(2\leq k\leq K_{1}\)). \(f^{(k)}\in\mathbb{R}^{L_{1}\times L_{2}\times H_{k}}\) is the output of each layer for \(1\leq k\leq K_{1}\). \(\text{Conv}(f,w)\) is the convolution operation with zero padding and stride 1: \[\text{Conv}(f^{(k-1)},w^{(k)})_{l_{1},l_{2},h_{k}}=\sum_{h_{k-1}}\sum_{p=1,q=1}^{p=3,q=3}f_{l_{1}+p-1,l_{2}+q-1,h_{k-1}}w_{p,q,h_{k-1},h_{k}}. \tag{1}\] We define \(g(b^{(k)}):\mathbb{R}^{H_{k}}\rightarrow\mathbb{R}^{L_{1}\times L_{2}\times H_{k}}\) as \[g(b^{(k)})_{l_{1},l_{2}}=b^{(k)} \tag{2}\] for \(1\leq l_{1}\leq L_{1},1\leq l_{2}\leq L_{2}\). By using \(w^{(k)}\), \(g(b^{(k)})\), and \(f^{(k-1)}\), \(f^{(k)}\) is described by \[f^{(k)}(w,b,x)=\sigma(\text{Conv}(f^{(k-1)}(w,b,x),w^{(k)})+g(b^{(k)})) \tag{3}\] where \(w,b\) are the sets of all weight and bias parameters. \(\sigma()\) is a function that applies the ReLU to all the elements of the input tensor. The output of the \(k=K_{1}+1\) layer is the result of Global Average Pooling on the \(k=K_{1}\) layer: \[f^{(K_{1}+1)}(w,b,x)=\frac{1}{L_{1}L_{2}}\sum_{l_{1}=1}^{l_{1}=L_{1}}\sum_{l_{2}=1}^{l_{2}=L_{2}}f^{(K_{1})}(w,b,x)_{l_{1},l_{2}}. \tag{4}\]
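A minimal PyTorch sketch of one convolutional stage and the Global Average Pooling of equations (1)-(4); the channel sizes and input resolution are our own assumptions for illustration.

```python
import torch
import torch.nn as nn

# One convolutional stage (3x3 kernel, zero padding, stride 1, ReLU) followed
# by Global Average Pooling. Conv2d's bias plays the role of g(b) in eq. (3).
conv_k = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
f_prev = torch.randn(1, 3, 32, 32)      # f^{(k-1)} with a batch dimension
f_k = torch.relu(conv_k(f_prev))        # eq. (3): sigma(Conv(f, w) + g(b))
f_gap = f_k.mean(dim=(2, 3))            # eq. (4): average over L1 x L2
print(f_k.shape, f_gap.shape)           # (1, 8, 32, 32), (1, 8)
```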
Let \(w^{(k)}\in\mathbb{R}^{H_{k}\times H_{k-1}}\), \(b^{(k)}\in\mathbb{R}^{H_{k}}\) be the weight and bias parameters in each fully connected layer (\(K_{1}+2\leq k\leq K_{1}+K_{2}\)). For \(K_{1}+2\leq k\leq K_{1}+K_{2}-1\), \(f^{(k)}\) is defined by \[f^{(k)}(w,b,x)=\sigma(w^{(k)}f^{(k-1)}(w,b,x)+b^{(k)}), \tag{5}\] and for \(k=K_{1}+K_{2}\), \[f^{(K_{1}+K_{2})}(w,b,x)=\text{softmax}(w^{(k)}f^{(k-1)}(w,b,x)+b^{(k)}), \tag{6}\] where \(\text{softmax}()\) is the softmax function \[\text{softmax}(z)_{i}=\frac{e^{z_{i}}}{\sum_{j=1}^{J}e^{z_{j}}}. \tag{7}\] The output of the model is represented stochastically as \[y\sim\text{Categorical}(f^{(K_{1}+K_{2})}(w,b,x)) \tag{8}\] where Categorical() is a categorical distribution.

Then we describe the CNN with skip connection. The number of layers within a skip connection is \(K_{s}\) and the number of skip connections is \(M\). The output of a layer with skip connection is described by \[f^{(mK_{s}+2)}(w,b,x)=\sigma(\text{Conv}(f^{(mK_{s}+1)}(w,b,x),w^{(mK_{s}+2)})+g(b^{(mK_{s}+2)})+f^{((m-1)K_{s}+2)}(w,b,x)). \tag{9}\] In this case, the CNN satisfies the following conditions: \[K_{1}=MK_{s}+2,\qquad H^{(mK_{s}+2)}=\text{const}\ (1\leq m\leq M). \tag{10}\] The other conditions are the same as in the case without skip connection. Figure 1 shows the configuration of the Convolutional Neural Network analyzed in this paper.

Figure 1: The structure of Convolutional Neural Network with and without Skip Connection

## 3 Free energy in Bayesian Learning

### Bayesian Learning

Let \(X^{n}=(X_{1},\cdots X_{n})\) and \(Y^{n}=(Y_{1},\cdots Y_{n})\) be training data and labels, where \(n\) is the number of the data. These data and labels are generated from a true distribution \(q(x,y)=q(y|x)q(x)\). The prior distribution \(\varphi(w)\) and the learning model \(p(y|x,w)\) are given on the bounded parameter set \(W\). Then the posterior distribution is defined by \[p(w|X^{n},Y^{n})=\frac{1}{Z(Y^{n}|X^{n})}\varphi(w)\prod_{i=1}^{n}p(Y_{i}|X_{i},w) \tag{11}\] where \(Z_{n}=Z(Y^{n}|X^{n})\) is a normalizing constant called the marginal likelihood: \[Z_{n}=\int\varphi(w)\prod_{i=1}^{n}p(Y_{i}|X_{i},w)\mathrm{d}w. \tag{12}\] The free energy is the negative log of the marginal likelihood, \[F_{n}=-\log Z_{n}. \tag{13}\] The free energy is equivalent to the evidence and the stochastic complexity. The posterior predictive distribution is defined as the average of the model over the posterior: \[p^{*}(y|x)=p(y|x,X^{n},Y^{n})=\int p(y|x,w)p(w|X^{n},Y^{n})\mathrm{d}w. \tag{14}\] The generalization error \(G_{n}\) is given by the Kullback-Leibler divergence between the true distribution and the posterior predictive distribution: \[G_{n}=\int q(y|x)q(x)\log\frac{q(y|x)}{p^{*}(y|x)}\mathrm{d}x\mathrm{d}y. \tag{15}\] The average generalization error plus the entropy equals the difference between the average free energies at \(n\) and \(n+1\): \[\mathbb{E}[G_{n}]+S=\mathbb{E}[F_{n+1}]-\mathbb{E}[F_{n}], \tag{16}\] where \(\mathbb{E}[f(X^{n},Y^{n})]\) denotes the expectation \(\mathbb{E}_{X^{n},Y^{n}}[f(X^{n},Y^{n})]\) over the generation of the \(n\) data.
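As a concrete (if naive) illustration of definitions (12) and (13), the following Monte Carlo sketch estimates the free energy of a toy one-parameter logistic model by averaging the likelihood over prior draws; the model and all settings are our own assumptions, and this brute-force estimator does not scale to real Bayesian CNNs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a logistic model with true parameter w = 2.
x = rng.normal(size=50)
y = (rng.random(50) < 1 / (1 + np.exp(-2.0 * x))).astype(float)

# Z_n = E_{w ~ prior}[ prod_i p(Y_i|X_i,w) ], estimated by prior sampling.
w = rng.normal(scale=10.0, size=20000)          # draws from the prior phi(w)
z = np.outer(w, x)
loglik = -(y * np.logaddexp(0, -z) + (1 - y) * np.logaddexp(0, z)).sum(axis=1)
m = loglik.max()                                # log-sum-exp for stability
F_n = -(m + np.log(np.exp(loglik - m).mean()))  # F_n = -log Z_n, eq. (13)
print(F_n)
```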
### Asymptotic property of Free energy and Generalization error

It is well known that if the average Kullback-Leibler divergence \[K(w)=\int q(y|x)q(x)\log\frac{q(y|x)}{p(y|x,w)}\mathrm{d}x\mathrm{d}y \tag{17}\] can be approximated by a quadratic form, in other words, if the Laplace approximation can be applied to the posterior distribution, then the average free energy has the following asymptotic expansion in the number of parameters \(d\) of the learning model [8, 9]: \[\mathbb{E}[F_{n}]=n(S+\mathrm{Bias})+\frac{d}{2}\log n+O(1) \tag{18}\] where \(S\) is the entropy of the true distribution and Bias is the minimum value of \(K(w)\) for \(w\in W\). The generalization error is calculated from the free energy by using equation (16) [10]: \[\mathbb{E}[G_{n}]=\mathrm{Bias}+\frac{d}{2n}+o\left(\frac{1}{n}\right). \tag{19}\] The Laplace approximation cannot be applied to the average Kullback-Leibler divergence of hierarchical models such as Gaussian mixtures or neural networks, because of the degeneration of the Fisher information matrix. In such models, the average free energy and generalization error have the following asymptotic expansions [11]: \[\mathbb{E}[F_{n}] =n(S+\mathrm{Bias})+\lambda\log n+o(\log n), \tag{20}\] \[\mathbb{E}[G_{n}] =\mathrm{Bias}+\frac{\lambda}{n}+o\left(\frac{1}{n}\right), \tag{21}\] where \(\lambda\) is a rational number called the Real Log Canonical Threshold (RLCT). In particular, [24] showed that in the case \(\mathrm{Bias}=0\) with bounded \(x\), when the Deep Neural Network is trained on data generated from a smaller network, \[\lambda\leq\frac{d^{*}}{2} \tag{22}\] where \(d^{*}\leq d\) is the number of parameters of the data generating network.

## 4 Main Theorem

In this section we introduce the main result of this paper. First, to state the theorems, we define the data generating network. In both cases (with and without skip connection), the data generating network satisfies the following conditions on the numbers of layers and filters: \[K_{1}^{*}\leq K_{1},\quad K_{2}^{*}\leq K_{2},\quad(H^{*})^{(1)}=H^{(1)},\quad(H^{*})^{(K_{1}^{*}+K_{2}^{*})}=H^{(K_{1}+K_{2})} \tag{23}\] and \[H^{(k)} \geq(H^{*})^{(K_{1}^{*})}\quad(K_{1}^{*}+1\leq k\leq K_{1})\] \[H^{(k)} \geq(H^{*})^{(K_{1}+K_{2}^{*})}\quad(K_{1}+K_{2}^{*}+1\leq k\leq K_{1}+K_{2}-1)\] \[H^{(k)} \geq(H^{*})^{(k)}\quad(\text{otherwise}). \tag{24}\] Then, we show the main theorems.

**Theorem 4.1**.: _(No skip connection) Assume that the learning machine and the data generating distribution are given by \(p(y|x,w,b)\) and \(q(y|x)=p(y|x,w^{*},b^{*})\) in the case without skip connection, satisfying conditions (23) and (24), and that the training data \(\{(X_{i},Y_{i})\;\;i=1,2,...,n\}\) are independently drawn from \(q(x)q(y|x)\). Then the average free energy satisfies the inequality_ \[\mathbb{E}[F_{n}]\leq nS+\lambda_{CNN}\log n+C \tag{25}\] _where_ \[\lambda_{CNN}=\frac{1}{2}\left(|w^{*}|_{0}+|b^{*}|_{0}+\sum_{k=K_{1}^{*}+1}^{K_{1}}(9(H^{*})^{(K_{1}^{*})}+1)(H^{*})^{(K_{1}^{*})}\right) \tag{26}\] _and \(|w^{*}|_{0},|b^{*}|_{0}\) are the numbers of weight and bias parameters in the data generating network._

**Theorem 4.2**.: _(Skip connection) Assume that the learning machine and the data generating distribution are given by \(p(y|x,w,b)\) and \(q(y|x)=p(y|x,w^{*},b^{*})\) in the case with skip connection, satisfying conditions (10), (23) and (24), and that the training data \(\{(X_{i},Y_{i})\;\;i=1,2,...,n\}\) are independently drawn from \(q(x)q(y|x)\). Then inequality (25) holds with_ \[\lambda_{CNN}=\frac{1}{2}(|w^{*}|_{0}+|b^{*}|_{0}). \tag{27}\]

Proofs of the main theorems are given in Appendix A. If the generalization error \(\mathbb{E}[G_{n}]\) admits an asymptotic expansion under the settings of Theorem 4.1 or 4.2, it satisfies the inequality \[\mathbb{E}[G_{n}]\leq\frac{\lambda_{CNN}}{n}+o\left(\frac{1}{n}\right), \tag{28}\] where \[G_{n}=\int q(x)\sum_{i=1}^{H_{K_{1}+K_{2}}}f_{i}^{(K_{1}^{*}+K_{2}^{*})}(w^{*},b^{*},x)\log\frac{f_{i}^{(K_{1}^{*}+K_{2}^{*})}(w^{*},b^{*},x)}{\mathbb{E}_{w,b}[f_{i}^{(K_{1}+K_{2})}(w,b,x)]}\mathrm{d}x \tag{29}\] which corresponds to the categorical cross entropy.

## 5 Experiment

In this section, we show the results of an experiment on synthetic data.

### Methods

We prepared the 2-class labeled simple data shown in Figure 2.
The data are \(x\in\mathbb{R}^{4\times 4}\) and the values of the elements are in \((-1,1)\). The mean of each element is \(0.5\) or \(-0.5\), to which noise from a normal distribution truncated to the interval \((-0.5,0.5)\) is added. The probability of each label is \(0.5\).

Figure 2: The average of input x of each label

We trained a CNN with \(K_{1}=2\) convolutional layers and \(K_{2}=2\) fully connected layers by SGD. The number of filters is \(H_{2}=2\) and the parameters are \(L_{2}\)-regularized. We use this trained CNN, named the "true model", as the data generating distribution. Note that the labels of the original data in Figure 2 are deterministic, but the labels of the true model are probabilistic. We prepare three learning CNN models, with \(K_{1}=2,3,4\) convolutional layers. Each model either has a skip connection every layer or has no skip connection. The number of filters in each layer is \(H^{(k)}=4\). They have \(K_{2}=2\) fully connected layers. The prior distribution is Gaussian with covariance matrix \(10^{4}I\) for the weight parameters and \(10^{2}I\) for the bias parameters. We train the learning CNN models by Langevin dynamics. The learning rate is \(10^{-2}\) and the sampling interval is \(100\). We use the average of \(1000\) samples of the learning CNN models as the posterior average. We estimate the generalization error by the test error on \(10000\) test data drawn from the true model. We trained each learning model \(10\) times and estimated \(\mathbb{E}[G_{n}]\) from the average of the test errors.
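A minimal sketch of one Langevin-dynamics update as used above (learning rate \(10^{-2}\), with the Gaussian prior folded into the negative log posterior); the toy target below is our own assumption for illustration.

```python
import torch

def langevin_step(theta, neg_log_post, lr=1e-2):
    """One Langevin update: half a gradient step on the negative log
    posterior plus Gaussian noise of variance lr."""
    grad, = torch.autograd.grad(neg_log_post(theta), theta)
    with torch.no_grad():
        theta += -0.5 * lr * grad + lr ** 0.5 * torch.randn_like(theta)
    return theta

# Toy usage: a standard-normal "posterior". For the CNNs above, theta would
# collect all weights and neg_log_post the loss plus the Gaussian prior term.
theta = torch.zeros(5, requires_grad=True)
neg_log_post = lambda w: 0.5 * (w ** 2).sum()
samples = [langevin_step(theta, neg_log_post).detach().clone()
           for _ in range(1000)]
```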
Nevertheless two cases of the data generating network is different, if the learning model network has double filter \(H^{(k)}\) to the data generating network in each Convolutional Layer, the model network can represent the generating network in different case. The output of each layer is nonnegative hence the model can represent the skip connection or the negative of that. If the model network doesn't have larger layer to the data generating network, the free energy of CNN with skip connection can be both larger or smaller than that without skip connection by the data generating network. Then, the layer of model network gets larger, the free energy of CNN with skip connection does not change but that without skip connection gets larger and the free energy of CNN with skip connection comes to have smaller free energy for all data generating network. ### Comparison to Deep Neural Network Firstly we compare the result of this paper to that of DNN in [24]. In case of DNN, the free energy depends on the layers of the model and only on that of the data generating network. This stands to the reason that mapping of the linear transformation in lower layer can be represented in higher layer. On the other hand, convolution operation doesn't have such property hence, the free energy of CNN without skip connection depends on the layer of learning model network. However, with skip connection, there exists the essential parameter which doesn't depend on overparametrized layers and the free energy does not also depend on the layer of learning model network. ## 7 Conclusion In this paper, we studied Free energy of Bayesian Convolutinal Neural Network with Skip Connection and compared to the case without Skip Connection. Free energy of Bayesian CNN with Skip Connection doesn't depend on the layer of the model unlike the case without Skip Connection. In Bayesian learning, the increase of Free energy is equivalent to generalization error, hence the generalization error has same property about the Skip Connection. In particular, Free energy of CNN without skip connection does not depends on the number of parameters in learning network but depends only on that in data generating network. This feature shows the generalization ability of CNN with skip connection does not decrease with respect to any overparameterization in Bayesian learning.
2307.05881
tdCoxSNN: Time-Dependent Cox Survival Neural Network for Continuous-time Dynamic Prediction
The aim of dynamic prediction is to provide individualized risk predictions over time, which are updated as new data become available. In pursuit of constructing a dynamic prediction model for a progressive eye disorder, age-related macular degeneration (AMD), we propose a time-dependent Cox survival neural network (tdCoxSNN) to predict its progression using longitudinal fundus images. tdCoxSNN builds upon the time-dependent Cox model by utilizing a neural network to capture the non-linear effect of time-dependent covariates on the survival outcome. Moreover, by concurrently integrating a convolutional neural network (CNN) with the survival network, tdCoxSNN can directly take longitudinal images as input. We evaluate and compare our proposed method with joint modeling and landmarking approaches through extensive simulations. We applied the proposed approach to two real datasets. One is a large AMD study, the Age-Related Eye Disease Study (AREDS), in which more than 50,000 fundus images were captured over a period of 12 years for more than 4,000 participants. Another is a public dataset of the primary biliary cirrhosis (PBC) disease, where multiple lab tests were longitudinally collected to predict the time-to-liver transplant. Our approach demonstrates commendable predictive performance in both simulation studies and the analysis of the two real datasets.
Lang Zeng, Jipeng Zhang, Wei Chen, Ying Ding
2023-07-12T03:03:40Z
http://arxiv.org/abs/2307.05881v2
# Dynamic Prediction using Time-Dependent Cox Survival Neural Network

###### Abstract

The target of dynamic prediction is to provide individualized risk predictions over time which can be updated as new data become available. Motivated by establishing a dynamic prediction model for the progressive eye disease, age-related macular degeneration (AMD), we proposed a time-dependent Cox model-based survival neural network (tdCoxSNN) to predict its progression on a continuous time scale using longitudinal fundus images. tdCoxSNN extends the time-dependent Cox model by utilizing a neural network to model the non-linear effect of the time-dependent covariates on the survival outcome. Additionally, by incorporating a convolutional neural network (CNN), tdCoxSNN can take longitudinal raw images as input. We evaluate and compare our proposed method with joint modeling and landmarking approaches through comprehensive simulations using two time-dependent accuracy metrics, the Brier Score and dynamic AUC. We applied the proposed approach to two real datasets. One is a large AMD study, the Age-Related Eye Disease Study (AREDS), in which more than 50,000 fundus images were captured over a period of 12 years for more than 4,000 participants. Another is a public dataset of the primary biliary cirrhosis (PBC) disease, in which multiple lab tests were longitudinally collected to predict the time-to-liver transplant. Our approach achieves satisfactory prediction performance in both simulation studies and the two real data analyses. tdCoxSNN was implemented in PyTorch, Tensorflow, and R-Tensorflow.

**Keywords.**_Cox model; dynamic prediction; neural network; survival analysis; time-dependent covariate_

Footnote †: This paper has been submitted for consideration in _XXX_

## 1 Introduction

For many chronic progressive diseases, the prognosis and severity of the disease change over time. A dynamic prediction model that can forecast the longitudinal disease progression profile is a crucial and unmet need (Jenkins et al. 2018). The unstructured observation times and the varying number of observations across subjects make it challenging to build a dynamic prediction model. The collection of high-dimensional longitudinal data requires the development of novel dynamic prediction models which can handle various inputs such as images. Joint modeling and landmarking are the two dominating techniques for dynamic prediction. The first approach jointly models the longitudinal and time-to-event data through a longitudinal sub-model and a survival sub-model (Rizopoulos 2011). However, joint modeling is computationally demanding (Rizopoulos et al. 2017) and struggles to directly model large-scale longitudinal data. In contrast, landmarking is a more pragmatic model which avoids directly modeling the process for the longitudinal covariates. It estimates the effect of predictors through the survival model over all subjects at risk at a given landmark time point (Van Houwelingen 2007). Suresh et al. (2017) and Rizopoulos et al. (2017) compared the prediction accuracy of the two models and found that joint modeling performs better than landmarking when the longitudinal process is correctly modeled. In the cases where the longitudinal process is misspecified or difficult to estimate, such as with sparse longitudinal data, the landmarking method provides a good enough prediction (Ferrer et al. 2019). Recently, there have been extensions to joint modeling methods aimed at addressing the nonlinear patterns in longitudinal outcomes.
Li et al. (2022) proposed the functional JM model to model multiple longitudinal outcomes as multivariate sparse functional data. Zou et al. (2023) applied the functional JM model to predict the progression of Alzheimer's disease using pre-specified MRI voxels. However, this approach heavily relies on image registration/pre-processing and disregards the correlation between voxels. With the development of machine learning and its success in survival analysis (Katzman et al., 2018; Lee et al., 2018; Kvamme et al., 2019), new methods have been developed to integrate dynamic prediction models with machine learning techniques to expand their application and enhance the prediction accuracy in more complex settings. Lin and Luo (2022) proposed using a neural network to jointly model the survival and longitudinal processes. Tanner et al. (2021) combined landmarking with machine learning ensembles to integrate predictions from standard methods. However, these two approaches cannot be directly applied to the situation with high-dimensional longitudinal variables. Lee et al. (2019); Jarrett et al. (2019); Nagpal et al. (2021) proposed different deep-learning models for dynamic prediction under the discrete-time scenario. Although discretizing the time does not necessarily diminish prediction accuracy, the number of time intervals used for discretization significantly impacts accuracy and so needs to be carefully tuned in practice (Sloma et al., 2021). In summary, no existing dynamic prediction model can directly handle high-dimensional longitudinal variables collected at irregular observation times. The time-dependent Cox model (Fisher et al., 1999; Thomas and Reyes, 2014) is a straightforward continuous-time method used to incorporate the relationship between the longitudinal and time-to-event processes. This approach has received numerous criticisms as it may not accurately reflect the longitudinal process (Sweeting and Thompson, 2011). Nonetheless, we discovered that it can be easily combined with neural network techniques to create a dynamic prediction method capable of handling complex longitudinal markers (e.g. images) and their nonlinear relationship with survival outcomes. This paper proposes a dynamic prediction method on a continuous time scale where the longitudinal covariates can be high-dimensional and measured at unstructured time points. Specifically, we combined the time-dependent Cox model with the Cox survival neural network. The proposed method can incorporate structured inputs (e.g. images, texts) through an additional neural network architecture. The rest of the article is organized as follows: Section 2 describes the AREDS dataset which motivates this study. Section 3 introduces notation and the standard dynamic prediction techniques. Section 4 presents the proposed model. Section 5 introduces the accuracy metrics for the evaluation of prediction performance. Sections 6 and 7 present the simulation and two real data analysis results. Finally, we conclude with a discussion in Section 8.

## 2 AMD progression prediction and existing works

This research is strongly motivated by the goal of developing a dynamic prediction model for the progressive eye disease, Age-related Macular Degeneration (AMD), using longitudinal fundus images. AMD is a polygenic and progressive neurodegenerative eye disease, which is a leading cause of blindness in the older population, especially in developed countries.
It has been reported that by 2040, AMD is going to affect about 288 million people worldwide (Peng et al. 2020). Once the disease is in the late stage, it is typically not curable. Therefore, accurate models for predicting the risk of progressing to late-AMD at an early stage are needed. This will allow clinicians to identify high-risk individuals for late-AMD at their subclinical stage so that they can initiate preventive interventions for those individuals. Colored fundus photographs have been routinely used to examine and document the presence and severity of AMD in clinical practice and trials. Our motivating study is the Age-related Eye Disease Study (AREDS), which is a large multi-center, controlled, randomized clinical trial of AMD and age-related cataract (Group 1999). It was designed to assess the clinical course and risk factors related to the development and progression of AMD and cataract. The participants were followed up for 12 years, and fundus photography was performed every six months during the first six years and annually thereafter.

Many models have been proposed in recent years to characterize and predict AMD progression. Sun et al. (2020) built a survival neural network prediction model to predict the progression risk with the baseline demographic and genotype data. Yan et al. (2020) used both genotype and fundus image data to predict the risk of progression to late-AMD at given discretized time points, where images were processed through a convolutional neural network (CNN). Peng et al. (2020) used the deep features of baseline fundus images obtained from DeepSeeNet (Peng et al., 2019) along with demographic and genotype data to generate individualized progression curves. Ghahramani et al. (2021) used the deep features of the fundus images from baseline, year 2, and year 3 through a recurrent neural network (RNN). Ganjdanesh et al. (2022) trained an image generation model on all consecutive time-point data to predict the fundus image at the next visit. These prediction models either rely solely on baseline predictors or on predictors from specific years.

## 3 Notation and existing approaches for dynamic prediction

### Notations

Let \(\{T_{i},\delta_{i},\{\mathcal{Y}_{i}(t),0\leqslant t\leqslant T_{i}\},X_{i};i=1,\ldots,n\}\) denote \(n\) samples. \(T_{i}=\min(T_{i}^{*},C_{i}^{*})\) denotes the observed event time for subject \(i\), with \(T_{i}^{*}\) and \(C_{i}^{*}\) denoting the underlying true event time and censoring time, and \(\delta_{i}=I(T_{i}^{*}\leqslant C_{i}^{*})\) is the event indicator. \(X_{i}\) is the time-invariant measurement for subject \(i\). \(\{\mathcal{Y}_{i}(t),0\leqslant t\leqslant s\}\) denotes the longitudinal measurements in the interval \([0,s]\). It is often the case that this history cannot be fully measured, and we only observe \(\{\mathcal{Y}_{i}(0),\mathcal{Y}_{i}(t_{i1}),\ldots,\mathcal{Y}_{i}(t_{in_{i}}),t_{in_{i}}\leqslant s\}\). We focus on the scenario where \(t\) is continuous and allow the number of longitudinal observations \(n_{i}+1\) to differ across individuals. For dynamic prediction, we are interested in predicting the probability that a new patient \(j\), with time-varying measurements up to time \(s\), will survive up to time \(u\) for \(u>s\), denoted as \(\pi_{j}(u|s)=Pr(T_{j}^{*}>u|T_{j}^{*}>s,\{\mathcal{Y}_{j}(t),0\leqslant t\leqslant s\},X_{j})\).
In contrast to static predictions, dynamic models allow predictions to be updated, obtaining \(\hat{\pi}_{j}(u|s^{\prime})\) when new information is available at a new time \(s^{\prime}>s\).

### Existing approaches

Joint modeling is a popular approach for dynamic prediction. It consists of a sub-model for the longitudinal process and a sub-model for the survival process, linked through shared random effects (Tsiatis and Davidian 2004) or some functional forms of the longitudinal outcomes (Li and Luo 2019; Mauff et al. 2020). Estimation in joint modeling is performed via a Bayesian approach (Rizopoulos 2011). It is usually computationally expensive, and especially challenging with high-dimensional longitudinal features such as images. After estimating the parameters, the prediction of the survival probability \(\hat{\pi}_{j}(u|s)\) is obtained from the posterior predictive distribution of the survival process (Rizopoulos 2014). Another popular dynamic prediction approach is the landmarking method (LM). Different from joint modeling, LM does not model the longitudinal process. It predicts \(\pi_{j}(u|s)\) through a model fitted on the subjects still at risk at time \(s\), which is called the landmark time. Administrative censoring at \(u\) is applied, so that the estimated effect (\(\hat{\beta}_{LM(s,u)}\)) of the predictors at the landmark time can approximate the effect of the time-varying covariates on the survival outcome within the time window \((s,u]\) (Van Houwelingen 2007). The predictors at landmark time \(s\) are usually a summary of the history. For example, one may use the mean or maximum of \(\{\mathcal{Y}_{j}(t),0\leqslant t\leqslant s\}\) as a summary. The idea of generating a summarized time-dependent covariate can also be applied to JM. The choice of the time-dependent covariate depends on the research problem and is discussed in Fisher et al. (1999). For simplicity, we used the instantaneous measurements \(\mathcal{Y}_{i}(s)\) as time-varying covariates in this work.

### Using time-dependent Cox model for dynamic prediction

Before we dive into the technical details, we want to highlight the difference between the time-dependent Cox method and the standard LM. The standard LM directly models the relationship between predictor variables at landmark time \(s\) and the corresponding residual survival time with administrative censoring time \(u\) through a survival model (e.g. a time-independent Cox model), with all observations after \(s\) disregarded. Both \(s\) and \(u\) need to be prespecified, which makes the model less flexible. Moreover, using \(\hat{\beta}_{LM(s,u)}\) to approximate the true effect of time-dependent covariates can be inaccurate when \(u\) is large. As \(u\) moves further away from \(s\), the effect estimated in LM attenuates (Van Houwelingen, 2007; Putter and van Houwelingen, 2017) compared to the estimate from the time-dependent Cox model, which considers the observations after the landmark time. For those reasons, we do not consider LM with time-independent models, such as those of Van Houwelingen (2007) and Pickett et al. (2021), in our simulations and analyses. Instead, following the spirit of LM, we fitted the time-dependent Cox model on the representative subjects at a given landmark time to evaluate the performance of the LM + time-dependent Cox model when a landmark time of interest is available, in simulation 1 (Section 6.2.1). Using the time-dependent Cox model is a trade-off between JM and the standard LM.
Similar to JM, it fully uses the available longitudinal variables and is flexible (without pre-specifying the landmark time \(s\) and administrative censoring time \(u\)), while keeping the simplicity of LM (without modeling the longitudinal process). We propose to construct a survival neural network under the time-dependent Cox model to incorporate the non-linear effects of the time-varying predictors, which can further use high-dimensional longitudinal predictors (e.g., images) as input.

## 4 Dynamic prediction using time-dependent Cox survival neural network

### Cox model with time-dependent covariates

The time-dependent Cox model takes the form

\[h_{i}(t)=h_{0}(t)\exp[g_{\beta,\gamma}(X_{i},\mathcal{Y}_{i}(t))]=h_{0}(t)\exp[\beta^{T}X_{i}+\gamma^{T}\mathcal{Y}_{i}(t)]\]

and estimates \((\beta,\gamma)\) through the partial likelihood \(pL=\prod_{i}^{n}[\frac{\exp[g_{\beta,\gamma}(X_{i},\mathcal{Y}_{i}(T_{i}))]}{\sum_{j:T_{j}\geqslant T_{i}}\exp[g_{\beta,\gamma}(X_{j},\mathcal{Y}_{j}(T_{i}))]-E_{i}(g_{\beta,\gamma})}]^{\delta_{i}}\). With Efron's approximation for handling tied events (Efron, 1977), we have \(E_{i}(g)=\frac{\sum_{j}1(j>i,T_{j}=T_{i})}{\sum_{j}1(T_{j}=T_{i})}\sum_{j:T_{j}=T_{i}}\exp[g(X_{j},\mathcal{Y}_{j}(T_{i}))]\). In the partial likelihood for the time-dependent Cox model, at each event time \(T_{i}\), \(\mathcal{Y}(T_{i})\) is required for all the subjects still at risk at \(T_{i}\), which is not always available. Therefore, interpolation between longitudinal measurements is required. The last observation carried forward (LOCF) method is commonly used (Thomas and Reyes, 2014; Therneau et al., 2017), which assumes the values of the longitudinal predictors stay constant until the next measurement is available. To predict \(\pi_{j}(u|s)\), we similarly assume \(\mathcal{Y}_{j}(t)=\mathcal{Y}_{j}(s)\) for all \(s<t\leqslant u\). Therefore, the predicted survival probability is

\[\hat{\pi}_{j}(u|s)=\exp\{-(\hat{H}_{0}(u)-\hat{H}_{0}(s))\exp[g_{\hat{\beta},\hat{\gamma}}(X_{j},\mathcal{Y}_{j}(s))]\}, \tag{4.1}\]

where \(\hat{H}_{0}(t)=\sum_{i=1}^{n}\frac{I(T_{i}\leqslant t)\delta_{i}}{\sum_{j:T_{j}\geqslant T_{i}}\exp[g_{\hat{\beta},\hat{\gamma}}(X_{j},\mathcal{Y}_{j}(T_{i}))]-E_{i}(g_{\hat{\beta},\hat{\gamma}})}\) is the Breslow estimator of the cumulative baseline hazard function (Breslow, 1972; Lin, 2007) with Efron's approximation.

### Cox survival neural network with time-dependent covariates

To establish a dynamic prediction model using high-dimensional image data, we consider the use of a neural network to augment the time-dependent Cox model. A neural network is an architecture that models the relationship between the input \(\mathbf{x}\in\mathbb{R}^{p_{0}}\) and the output \(f(\mathbf{x})\in\mathbb{R}^{p_{L+1}}\) through the recursive layer structure \(f(\mathbf{x})=W_{L}\times\sigma_{\mathbf{V}_{L}}\left(W_{L-1}\times\sigma_{\mathbf{V}_{L-1}}(\ldots W_{1}\times\sigma_{\mathbf{V}_{1}}(W_{0}\times\mathbf{x}))\right).\) Here \(L\) is the total number of hidden layers (depth of the neural network) with \(p_{k}\) nodes (width) in each layer \(k\) (\(k=1,\ldots,L\)). The activation function \(\sigma\) with the shift vector \(\mathbf{V}_{k}\) is a nonlinear transformation that operates componentwise, \(\sigma_{\mathbf{V}_{k}}((y_{1},\ldots,y_{p_{k}})^{T})=(\sigma(y_{1}-v_{1}),\ldots,\sigma(y_{p_{k}}-v_{p_{k}}))^{T}\). The depth \(L\), width \(\mathbf{p}=(p_{1},\ldots,p_{L})\), and activation function \(\sigma\) should be pre-specified before the model fitting.
The weight matrix \(W_{k}\in\mathbb{R}^{p_{k+1}\times p_{k}}\) and the shift vector \(\mathbf{V}_{k}\in\mathbb{R}^{p_{k}}\) are the parameters estimated by minimizing the loss function. We apply the neural network to model the nonlinear effects of time-dependent covariates and to incorporate high-dimensional longitudinal data. Instead of assuming the unknown risk score function \(g_{0}(X_{i},\mathcal{Y}_{i}(t))\) takes the linear form \(g_{\beta,\gamma}(X_{i},\mathcal{Y}_{i}(t))=\beta^{T}X_{i}+\gamma^{T}\mathcal{Y}_{i}(t)\), we leave the form of \(g_{0}(X_{i},\mathcal{Y}_{i}(t))\) unspecified and model it through a neural network \(g_{\theta}(X_{i},\mathcal{Y}_{i}(t))\) parameterized by the weight matrices and shift vectors of the neural network, \(\theta=(\mathbf{W},\mathbf{V})\).

The time-dependent Cox survival neural network (Cox SNN) is a feed-forward neural network that models the effect of time-dependent covariates on the hazard function. The input is the covariates \((X_{i},\mathcal{Y}_{i}(t))\) and the output \(g_{\theta}(X_{i},\mathcal{Y}_{i}(t))\) is a single node with a linear activation function so that \(g_{\theta}(X_{i},\mathcal{Y}_{i}(t))\in\mathbb{R}\) (Figure 1a). \(\theta\) is estimated by minimizing the negative log partial likelihood \(\hat{\theta}=\arg\min[l(\theta|g_{\theta})]\), where

\[l(\theta|g_{\theta})=-\frac{1}{n}\sum_{i=1}^{n}\delta_{i}\left[g_{\theta}(X_{i},\mathcal{Y}_{i}(T_{i}))-\log\left(\sum_{j:T_{j}\geqslant T_{i}}\exp\{g_{\theta}(X_{j},\mathcal{Y}_{j}(T_{i}))\}-E_{i}(g_{\theta})\right)\right]. \tag{4.2}\]

Following the prediction formula (4.1) for the time-dependent Cox model, the predicted probability under the Cox SNN is given by

\[\hat{\pi}_{j}(u|s)=\exp\{-(\hat{H}_{0}(u)-\hat{H}_{0}(s))\exp[g_{\hat{\theta}}(X_{j},\mathcal{Y}_{j}(s))]\}, \tag{4.3}\]

where \(\hat{H}_{0}(t)=\sum_{i=1}^{n}\frac{I(T_{i}\leqslant t)\delta_{i}}{\sum_{j:T_{j}\geqslant T_{i}}\exp[g_{\hat{\theta}}(X_{j},\mathcal{Y}_{j}(T_{i}))]-E_{i}(g_{\hat{\theta}})}\). \(\hat{\pi}_{j}\) can be updated when new information on subject \(j\) is available (Figure 1b).

The minimizer of the loss (4.2) is not unique. For a given minimizer \(\hat{\theta}\), one can find \(\tilde{\theta}\) such that \(g_{\tilde{\theta}}:=g_{\hat{\theta}}+c\) for a constant \(c\). Note that \(\tilde{\theta}\) is also a minimizer since \(l(\tilde{\theta}|g_{\tilde{\theta}})=l(\hat{\theta}|g_{\hat{\theta}})\). However, a constant shift \(c\) of the estimated risk score will not change the predicted probability in (4.3), as the factor \(e^{c}\) cancels between \(\hat{H}_{0}\) and the exponential term. Therefore, \(\hat{\pi}_{j}(u|s)\) in (4.3) is robust to a constant shift of the neural network output.

Besides the ability to model nonlinear effects, another benefit of using the neural network structure is that a pre-trained neural network, which can process a specific data type such as images or text, can be readily combined with the SNN (transfer learning). For example, ResNet50 (He et al., 2016) is a CNN for image classification whose weights have been trained over one million images. Adding a pre-trained CNN on top of the SNN allows the SNN to take raw images as input, which is the approach we take in this study.
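To make (4.2) concrete, the sketch below implements the negative log partial likelihood for counting-process data in PyTorch, together with a small risk network matching the structure described later in Section 6.1. This is a minimal illustration, not the authors' released code: Breslow's tie handling is used instead of Efron's for brevity, and the tensor names are ours.

```python
import torch

def td_cox_nll(g, tstart, tstop, event):
    """Negative log partial likelihood of the time-dependent Cox model.

    Each counting-process row carries an interval (tstart, tstop], a risk
    score g = g_theta(X, Y(.)) (the SNN output), and an event indicator at
    tstop. Breslow tie handling is used here instead of Efron's, for brevity.
    """
    event = event.bool()
    etimes = tstop[event]                         # observed event times
    # at_risk[k, j] = True if row j is at risk at event time etimes[k]
    at_risk = (tstart.unsqueeze(0) < etimes.unsqueeze(1)) & \
              (etimes.unsqueeze(1) <= tstop.unsqueeze(0))
    scores = g.unsqueeze(0).expand_as(at_risk)
    log_denom = torch.logsumexp(
        scores.masked_fill(~at_risk, float("-inf")), dim=1)
    # averaged over observed events (the paper divides by n)
    return -(g[event] - log_denom).mean()

# a toy risk network g_theta with the layer structure of Section 6.1
n_features = 10  # illustrative input dimension
risk_net = torch.nn.Sequential(
    torch.nn.Linear(n_features, 30), torch.nn.SELU(),
    torch.nn.BatchNorm1d(30), torch.nn.Dropout(0.2),
    torch.nn.Linear(30, 1))
```

Minimizing this loss with Adam over the counting-process rows mirrors the fitting procedure of Section 6.1; the Breslow estimator \(\hat{H}_{0}\) and formula (4.3) can then be evaluated from the fitted risk scores.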
## 5 Prospective accuracy metrics

Methods for assessing the predictive performance of survival models concentrate either on calibration, i.e., how well the model predicts the observed data, or on discrimination, i.e., how well the model discriminates between subjects who experience the event and subjects who do not. We consider both calibration and discrimination metrics for model performance evaluation under the time-dependent setting. Similar to previous studies, we compare dynamic prediction models on a time window \((t,t+\Delta t]\) where the landmark time \(t\) and the length of the time window \(\Delta t\) are pre-specified (Rizopoulos, 2011; Rizopoulos et al., 2017; Tanner et al., 2021). That is, for subjects in a separate test data set who survived to time \(t\), with time-dependent covariates \(\mathcal{Y}_{i}(t)\) collected up to time \(t\), we evaluate how well the predicted \(\hat{\pi}_{i}(s|t)\) (\(t<s<t+\Delta t\)) agrees with the observed data. The censoring-free probability \(G(t)=P(C>t)\) is used to account for censoring as a weight in the metrics.

### Calibration metric: time-dependent Brier Score

The time-dependent Brier Score measures the mean squared error between the observed survival status and the predicted survival probability, weighted by the inverse probability of censoring (IPCW) (Gerds and Schumacher, 2006). A lower Brier Score indicates higher prediction accuracy. For a given landmark time \(t\), the estimated Brier Score at time \(t+\Delta t\) is \(\hat{BS}(t,\Delta t;\hat{\pi})=\frac{1}{\sum_{i}1(T_{i}>t)}\sum_{i=1}^{n}\left(\mathbbm{1}(T_{i}>t)\hat{W}_{i}(t,\Delta t)\{1(T_{i}>t+\Delta t)-\hat{\pi}_{i}(t+\Delta t|t)\}^{2}\right),\) where \(\hat{W}_{i}(t,\Delta t)=\{\frac{1(T_{i}>t+\Delta t)}{\hat{G}(t+\Delta t|t)}+\frac{1(T_{i}\leqslant t+\Delta t)\delta_{i}}{\hat{G}(T_{i}^{-}|t)}\}\) is the IPCW weight and \(\hat{G}(s|t)=\frac{\hat{G}(s)}{\hat{G}(t)}\) is the Kaplan–Meier estimate of the conditional censoring distribution.

### Discrimination metric: time-dependent AUC

The area under the receiver operating characteristic curve (AUC) measures the discrimination of the prediction model. It ranges from 0 to 1, with 0.5 indicating that the discriminability is no better than random guessing and values further away from 0.5 suggesting better discrimination. We use the cumulative sensitivity and dynamic specificity AUC (cdAUC) (Kamarudin et al., 2017) to evaluate the discrimination performance of the models at different time points. Given a threshold \(b\) and predictor \(X\), the cumulative sensitivity is defined as \(Se^{C}(b,\Delta t)=P(X_{i}>b|T_{i}\leqslant\Delta t)\) while the dynamic specificity is \(Sp^{D}(b,\Delta t)=P(X_{i}\leqslant b|T_{i}>\Delta t)\). The term "cumulative" differentiates this sensitivity from the incident sensitivity \(Se^{I}(b,\Delta t)=P(X_{i}>b|T_{i}=\Delta t)\), which assesses the sensitivity for the population whose survival time exactly equals \(\Delta t\). With the cumulative sensitivity and the dynamic specificity, the corresponding \(\text{cdAUC}(t,\Delta t)=P(X_{i}>X_{j}|t<T_{i}\leqslant t+\Delta t,T_{j}>t+\Delta t)\), \(i\neq j\). Specifically, for a given time interval \((t,t+\Delta t]\), the IPCW estimator of cdAUC is

\[\text{cd}A\hat{U}C(t,\Delta t)=\frac{\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbbm{1}_{(\hat{\pi}_{i}(t+\Delta t|t)<\hat{\pi}_{j}(t+\Delta t|t))}\delta_{i}\mathbbm{1}_{(t<T_{i}\leqslant t+\Delta t)}\mathbbm{1}_{(T_{j}>t+\Delta t)}W_{i}(t,\Delta t)W_{j}(t,\Delta t)}{\sum_{i=1}^{n}\sum_{j=1}^{n}\delta_{i}\mathbbm{1}_{(t<T_{i}\leqslant t+\Delta t)}\mathbbm{1}_{(T_{j}>t+\Delta t)}W_{i}(t,\Delta t)W_{j}(t,\Delta t)}.\]

It computes the IPCW-weighted percentage of comparable subject pairs \((i,j)\) whose predicted survival probabilities are consistent with their observed data for the given time interval \((t,t+\Delta t]\). A comparable pair \((i,j)\) consists of two subjects in which subject \(i\) experiences the event within the time interval \((t,t+\Delta t]\) and subject \(j\) is event-free by \(t+\Delta t\).
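For reference, a direct implementation of \(\hat{BS}(t,\Delta t)\) takes only a few lines. The sketch below assumes a vectorized estimate of the censoring survivor function \(\hat{G}\) is already available (e.g., from a Kaplan–Meier fit to the censoring indicator \(1-\delta\)) and approximates \(\hat{G}(T_{i}^{-})\) by \(\hat{G}(T_{i})\); in our analyses these metrics are computed by the R packages named in Section 6.2.1.

```python
import numpy as np

def brier_score(t, dt, T, delta, pi_hat, G):
    """IPCW Brier Score BS(t, t+dt) of Section 5.1 (a sketch).

    T, delta : observed times and event indicators, arrays of shape (n,)
    pi_hat   : predicted pi_i(t+dt | t) for each subject, shape (n,)
    G        : vectorized censoring survivor function, G(s) = P(C > s)
    """
    at_risk = T > t
    survived = T > t + dt
    # IPCW weights; G(T^-) is approximated by G(T) in this sketch
    w = np.where(survived,
                 1.0 / (G(t + dt) / G(t)),
                 delta * (T <= t + dt) / (G(T) / G(t)))
    sq_err = (survived.astype(float) - pi_hat) ** 2
    return np.sum(at_risk * w * sq_err) / np.sum(at_risk)
```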
## 6 Numerical implementation and simulations

### Numerical implementation

For the time-dependent Cox SNN, we implemented the log partial likelihood function with the Efron tie approximation (4.2) using Tensorflow (Abadi et al., 2016). It is implemented through matrix operations, so the calculation is fast (details can be found in the Appendix). The PyTorch (Paszke et al., 2019) and R-Tensorflow (Allaire and Tang, 2019) versions can also be found at [https://github.com/langzeng/tdCoxSNN](https://github.com/langzeng/tdCoxSNN). The neural network was optimized through the Adam optimizer (Kingma and Ba, 2014). We used the following survival neural network structure in all simulations and real data analyses: input layer \(\rightarrow\) hidden layer \(\rightarrow\) batch normalization layer \(\rightarrow\) dropout layer \(\rightarrow\) output layer. The batch normalization layer (Ioffe and Szegedy, 2015) accelerates the neural network training and the dropout layer (Srivastava et al., 2014) protects the neural network from over-fitting. Hyper-parameters were also fixed in all analyses: 30 nodes in the hidden layer, the Scaled Exponential Linear Unit (SeLU) as the hidden-layer activation function, batch size 50, epoch size 20, learning rate 0.01, and dropout rate 0.2.

### Simulation

#### 6.2.1 Simulation 1: Low-dimensional predictors

We carried out simulation studies to empirically compare the dynamic prediction performance of the proposed time-dependent Cox SNN with the time-dependent Cox model and joint modeling in a low-dimensional setting. Data were generated through joint models. In all simulations, for sample \(i\), the one-dimensional longitudinal covariate \(\mathcal{Y}_{i}(t)\) was generated through \(\mathcal{Y}_{i}(t)=y_{i}(t)+\epsilon\), where \(y_{i}(t)\) is the true longitudinal trajectory over time and \(\epsilon\sim N(0,0.3^{2})\) represents a measurement error. We considered the true value of the time-varying covariate to be given by

\[y_{i}(t)=\beta_{0}+\beta_{1}t+\beta_{2}t^{2}+b_{i0}+b_{i1}t+b_{i2}t^{2}\text{ and }\mathbf{b}_{i}=(b_{i0},b_{i1},b_{i2})^{T}\sim N(0,\Sigma_{3\times 3}),\]

where \(\beta_{0}=3.2,\beta_{1}=-0.07\). \(\Sigma\) denotes a 3 by 3 inter-subject variance matrix with \(\Sigma_{11}=1.44,\Sigma_{22}=0.6\), and we assume the covariances \(\Sigma_{ij}\) are zero in all simulations. \((\beta_{2},\Sigma_{33})\) capture the non-linearity of the trajectory, and we considered the trajectory of the longitudinal measurement to be linear (\(\beta_{2}=0,\Sigma_{33}=0\)) or nonlinear (\(\beta_{2}=0.004,\Sigma_{33}=0.09\)). \(y_{i}(t)\) was measured regularly per time unit at \(t=0,1,2,\ldots,14\). The survival time \(T_{i}^{*}\) was obtained through a Weibull model \(h_{i}(t)=\lambda\rho t^{\rho-1}\exp\{g(X_{i},y_{i}(t))\}\) with \(\rho=1.4\) and \(\lambda=0.1\). The censoring time \(C_{i}^{*}\) was generated through an exponential distribution \(\exp(\frac{2}{14})\). The observed survival time \(T_{i}=\min(T_{i}^{*},C_{i}^{*})\) and the event indicator \(\delta_{i}=1\left(T_{i}^{*}\leqslant C_{i}^{*}\right)\) were then calculated. Longitudinal values \(\mathcal{Y}_{i}(t)\) with \(t\geqslant T_{i}\) were disregarded; only \(\mathcal{Y}_{i}(t)\) measured before \(T_{i}\) were kept as the observed longitudinal measurements for subject \(i\).
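For concreteness, event times under the hazard \(h_{i}(t)=\lambda\rho t^{\rho-1}\exp\{g(X_{i},y_{i}(t))\}\) can be drawn by numerically inverting the cumulative hazard, \(H_{i}(T_{i}^{*})=-\log U\) with \(U\sim\text{Unif}(0,1)\). The grid-based generator below is a sketch of this step under our reading of the setup (e.g., taking \(\exp(\frac{2}{14})\) to mean an exponential censoring rate of \(2/14\)); it is not necessarily the authors' exact implementation, and the baseline covariates are omitted.

```python
import numpy as np

rng = np.random.default_rng(2023)

def draw_event_time(g_of_t, lam=0.1, rho=1.4, t_max=30.0, step=0.01):
    """Sample T* with hazard h(t) = lam*rho*t^(rho-1)*exp(g_of_t(t))
    by inverting the cumulative hazard on a fine grid (Riemann sum)."""
    grid = np.arange(step, t_max + step, step)
    hazard = lam * rho * grid ** (rho - 1.0) * np.exp(g_of_t(grid))
    cum_haz = np.cumsum(hazard) * step
    target = -np.log(rng.uniform())
    idx = np.searchsorted(cum_haz, target)
    return grid[idx] if idx < grid.size else np.inf  # no event by t_max

# example: a case-1-style linear trajectory (baseline covariates omitted)
b0, b1 = rng.normal(0, 1.2), rng.normal(0, 0.6 ** 0.5)
y = lambda t: (3.2 + b0) - (0.07 + b1) * t
g = lambda t: 0.3 * y(t) - 10.0
T_obs = min(draw_event_time(g), rng.exponential(scale=14 / 2))  # censoring
```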
The baseline covariates \(x_{k}\) (\(k=1,2,3,4\)) were independently generated from the continuous uniform distribution on \([-0.5,1.5]\). Models were fitted using the variables \((x_{1},x_{2},x_{3},x_{4},\mathcal{Y}_{i}(t))\). We considered four different cases (see below) with a linear or nonlinear risk function \(g(X,y(t))\) and a linear or quadratic trend (in time) of the longitudinal measurement \(y(t)\). We added the intercept term \(-10\) to make the censoring rate close to that of the AREDS data (around 80%) in each simulation.

* Case 1: \(\begin{cases}g(X,y(t))=x_{1}+2x_{2}+3x_{3}+4x_{4}+0.3y(t)-10\\ y(t)=(3.2+b_{0})-(0.07+b_{1})t\end{cases}\)
* Case 2: \(\begin{cases}g(X,y(t))=x_{1}+2x_{2}+3x_{3}+4x_{4}+0.3y(t)-10\\ y(t)=(3.2+b_{0})-(0.07+b_{1})t+(0.004+b_{2})t^{2}\end{cases}\)
* Case 3: \(\begin{cases}g(X,y(t))=(\{x_{1}^{2}x_{2}^{3}+\log(x_{3}+1)+(0.3y(t)x_{4}+1)^{\frac{1}{3}}+\exp(\frac{x_{4}}{2})+0.3y(t)\}^{2}/3)-10\\ y(t)=(3.2+b_{0})-(0.07+b_{1})t\end{cases}\)
* Case 4: \(\begin{cases}g(X,y(t))=(\{x_{1}^{2}x_{2}^{3}+\log(x_{3}+1)+(0.3y(t)x_{4}+1)^{\frac{1}{3}}+\exp(\frac{x_{4}}{2})+0.3y(t)\}^{2}/3)-10\\ y(t)=(3.2+b_{0})-(0.07+b_{1})t+(0.004+b_{2})t^{2}\end{cases}\)

The time-dependent Cox model was fitted through the R function {survival::coxph} (Therneau and Lumley, 2015). For joint modeling, we used the R package {JMBayes} (Rizopoulos, 2014; Rizopoulos et al., 2021) and included the linear effect of time \(t\) in the longitudinal sub-model. The time-dependent Brier Score and time-dependent AUC were calculated through the R packages {pec} (Mogensen et al., 2012) and {timeROC} (Blanche et al., 2013), respectively.

In each setting, we performed 100 simulation runs. Models were fitted on the training set and the prediction metrics were evaluated on separate test samples. We compared \(n_{train}=500\) and 1000 to evaluate the effect of sample size on the performance of the SNN, since a large sample size is usually required to train a neural network well. Separate test samples with \(n_{test}=200\) were generated in each run to evaluate the fitted models. We set the landmark time \(t=1\) in all simulations. Comparisons were made across seven models: the landmark time-dependent Cox model (LM-CoxPH), the landmark time-dependent Cox SNN (LM-CoxSNN), the time-dependent Cox model (CoxPH), the time-dependent Cox SNN (CoxSNN), joint modeling (JM), the Kaplan–Meier estimator (KM), and the true conditional survival curve (Truth). The LM-CoxPH and LM-CoxSNN were fitted among training samples still at risk at the landmark time. CoxPH and CoxSNN were fitted over all training subjects. For the longitudinal sub-model in JM, we included the main effect of time in the fixed-effects part and an intercept and a time term in the random-effects design matrix. Predictions using KM and the true model represent the worst and the best predictions we could obtain. The calibration metric \(\hat{BS}(t,\Delta t)\) and discrimination metric \(\hat{AUC}(t,\Delta t)\) for the seven methods were calculated at \(\Delta t=1,2,3,4\) from the landmark time.

[Figure 2 about here.]

#### Simulation 2: High-dimensional predictors

We also evaluated the performance of the time-dependent Cox SNN with high-dimensional predictors. The simulation mechanism is the same as in simulation 1 (Section 6.2.1). After the data were generated, at each visit time, we mapped the true risk score \(g(X,y(t))\) to handwritten digit images from MNIST (LeCun, 1998). Specifically, we standardized \(g(X,y(t))\) to \([0,0.99]\) and rounded it to 2 decimal places. The two handwritten digit images representing the tenths and hundredths digits were then randomly sampled from the corresponding digit classes of MNIST. The two \(28\times 28\times 1\) images are treated as the observed predictor at time \(t\) to train the model as well as to make the prediction. To deal with the images, convolutional layers and max pooling layers were added on top of the SNN structure introduced in Section 6.1. Details can be found in the Appendix. The time-dependent Cox SNN (td-CoxSNN) was fitted directly using the longitudinal images. As a comparison, the baseline Cox SNN (Base-CoxSNN) was fitted on the baseline images only.
The baseline Cox model (Oracle-BaseCoxPH) and the time-dependent Cox model (Oracle-tdCoxPH) were fitted on the true \(g(X,y(t))\) to represent the best performance that CoxSNN can achieve with image predictors.

* Case 5: \(\begin{cases}g(X,y(t))=x_{1}+2x_{2}+3x_{3}+4x_{4}+0.3y(t)-5\\ y(t)=(3.2+b_{0})-(0.07+b_{1})t\end{cases}\)
* Case 6: \(\begin{cases}g(X,y(t))=x_{1}+2x_{2}+3x_{3}+4x_{4}+0.3y(t)-5\\ y(t)=3.2+b_{0}\end{cases}\)

Two simulation cases were considered. The setting of Case 5 evaluates the performance of the proposed method with time-varying longitudinal high-dimensional predictors (images). Case 6 is a scenario without time-varying covariates, but the longitudinal images still vary, since the mapping of risk scores to images was performed separately at each visit. Therefore, the only difference between Base-CoxSNN and td-CoxSNN is that td-CoxSNN sees more images. We performed 100 simulation runs for each case with \(n_{train}=2,000\) and \(10,000\). The prediction accuracies \(\hat{BS}(t,\Delta t)\) and \(\hat{AUC}(t,\Delta t)\) were evaluated on a separate set of \(n_{test}=200\) test samples at \(t=1\) and \(\Delta t=1,2,3,4\).

### Simulation results

#### 6.3.1 Simulation 1: Low-dimensional predictors

Figure 2 and Figure 3 display the boxplots of cdAUC and BS at times 1, 2, 3, 4 after the landmark time over 100 simulation runs for each case. The plots indicate that a model with better discrimination ability (higher cdAUC) tends to have better calibration ability (lower BS) as well. In general, CoxSNN is strongly competitive at the different time points after the landmark time. Under the complex settings of case 3 and case 4, CoxSNN outperformed CoxPH, and LM-CoxSNN outperformed LM-CoxPH. In the simpler cases 1 and 2, where the effects of the predictors are linear, CoxSNN and LM-CoxSNN performed similarly to CoxPH and LM-CoxPH. This demonstrates that the time-dependent Cox SNN is able to learn the nonlinear effect in complex settings while maintaining performance similar to the time-dependent Cox model when the effect is linear. Besides, JM performed the best when both the effect of the longitudinal predictors and the sub-model for the longitudinal process were correctly specified (case 1). In case 3, JM correctly modeled the longitudinal process but did not outperform the neural network models, as it failed to reflect the nonlinear effect of the predictors. Moreover, JM performed the worst in cases 2 and 4, when the longitudinal sub-model was incorrectly specified. This implies that the performance of joint modeling highly depends on the correctness of the longitudinal sub-model.

We further evaluated the choice of sample for the neural network fitting. In case 1 and case 2, the prediction accuracy of the models fitted with all subjects (CoxPH and CoxSNN) is close to the accuracy of the landmarking models fitted with those still at risk at the landmark time (LM-CoxPH and LM-CoxSNN). In settings where the effect of the predictors is complex (case 3 and case 4), the landmarking methods demonstrated slightly better performance, suggesting that landmarking can potentially enhance prediction accuracy, particularly when there is a landmark time of interest. The landmark individuals can serve as an appropriate representation of the target population who survived the landmark time. Lastly, when \(n_{train}\) decreased from 1000 to 500, the performance of the SNN was relatively stable. Note that when \(n_{train}=500\) with an 80% censoring rate, there are only about 100 observed events in the training data, suggesting that the SNN can perform well with moderate sample sizes.
Overall, our simulations show that the time-dependent Cox survival neural network can achieve satisfactory prediction performance across diverse settings.

#### 6.3.2 Simulation 2: High-dimensional predictors

Figure 4 presents the results of the simulation with longitudinal images as predictors in case 6. In this scenario, the risk score \(g(X,\mathcal{Y}(t))\) is constant over time while the longitudinal images mapped to the risk score are time-dependent (the information embedded in the images is time-independent). For the oracle-CoxPH models, since they used the true risk score as the predictor and it is time-independent, they had the same prediction accuracy as the prediction from the true survival curve. When using the images as predictors, td-CoxSNN outperformed baseline-CoxSNN, especially when the training sample size (\(n_{train}=2,000\)) was small. This demonstrates the superiority of td-CoxSNN over baseline-CoxSNN even when the longitudinal images are uninformative. When the image embeddings are time-varying (case 5), the baseline models were less accurate than the time-dependent models, as the baseline models ignore the longitudinal measurements and hence fail to evaluate the effects of time-varying predictors. Additionally, the baseline-CoxSNN may also suffer from seeing fewer images, as demonstrated above. The boxplots of cdAUC and BS for case 5 are provided in the appendix.

[Figure 4 about here.]

## 7 Real data analysis

### Application to AMD progression prediction

We applied the proposed time-dependent SNN to the AREDS data and built a dynamic prediction model using longitudinal fundus images. In the AREDS data, at each follow-up for each eye, there is a fundus image along with multiple manually graded image features (grading performed by a medical center), for example, the size of the abnormal area (drusen) of a fundus image. After removing the eyes with late-AMD at baseline and images with low quality (i.e., image features not gradable or missing), our working data included 53,076 eye-level observations from 7,865 eyes of 4,335 subjects. The median follow-up time is 9.9 years, and 20.5% of the eyes progressed to late-AMD by the end of the study (Table 1). From the prediction perspective, we excluded observations that were measured after the diagnosis of late-AMD and conducted an eye-level analysis without considering the correlation between the two eyes from the same individual.

We performed 5-fold cross-validation as follows. Data were split into five folds, where models were trained on each set of four folds (training set) and evaluated on the remaining fold (test set). The split was done at the subject level to ensure the two eyes from the same subject were included in the same fold. To fit the time-dependent Cox model and the time-dependent Cox SNN, the longitudinal data were formatted into multiple intervals [tstart, tstop), where each interval represents the time window between one visit and the next (Figure 1a). To evaluate the prediction accuracy, because the fundus images were taken at different time points across subjects, we chose individualized landmark times \(t_{i}\) on the test set as follows. For each subject \(i\) in the test set, \(t_{i}\) was chosen randomly from their longitudinal measurement time points prior to the last observed time \(T_{i}\). This mimics the real-world scenario in which a new subject \(i\) comes at time \(t_{i}\) and we use their measurements up to \(t_{i}\) to predict their survival at \(t_{i}+\Delta t\).
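The [tstart, tstop) formatting described above amounts to a simple expansion of the visit-level records. The sketch below (column names are illustrative, not from the AREDS database) attaches the event indicator to the final interval of each eye and carries features forward between visits (LOCF).

```python
import pandas as pd

def to_counting_process(visits: pd.DataFrame) -> pd.DataFrame:
    """Expand visit-level rows into [tstart, tstop) intervals (LOCF).

    Expects columns: 'id', 'time' (visit time), 'obs_time' (event or
    censoring time), 'event' (0/1), plus feature columns.
    """
    out = []
    for _, grp in visits.sort_values(["id", "time"]).groupby("id"):
        stops = list(grp["time"].iloc[1:]) + [grp["obs_time"].iloc[0]]
        for k, (_, row) in enumerate(grp.iterrows()):
            rec = row.to_dict()
            rec["tstart"], rec["tstop"] = row["time"], stops[k]
            # the event indicator belongs to the final interval only
            rec["status"] = int(row["event"]) if k == len(grp) - 1 else 0
            out.append(rec)
    df = pd.DataFrame(out)
    return df[df["tstop"] > df["tstart"]]  # drop zero-length intervals
```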
Prediction metrics (BS and cdAUC) were evaluated on the \(\Delta t=1,\ldots,7\) windows across all subjects (Figure 5a). We compared three dynamic prediction approaches: joint modeling (JM), the time-dependent Cox model (CoxPH), and the time-dependent Cox SNN (CoxSNN). The predictors include baseline demographic variables (age at enrollment, educational level, and smoking status) and seven longitudinal manually graded image features, which were found significant in a multi-variable baseline Cox regression model (Table 1). We also fitted the time-dependent Cox SNN directly on the longitudinal fundus images, where the images were modeled by a CNN. Specifically, on top of the survival neural network, we added DeepSeeNet, a well-trained CNN for grading late-AMD using fundus images (Peng et al., 2019, 2020). The 256 nodes from the DeepSeeNet hidden layer and the baseline demographic nodes together formed the input layer of the time-dependent Cox SNN. The details of the survival neural network structure and hyper-parameters can be found in Section 6.1.

Figure 5a shows that the Brier Scores are lowest for the time-dependent Cox SNN, whether using the seven manually graded image features or directly using the fundus images, across all prediction time points. As for the cdAUC, the time-dependent Cox SNN using the longitudinal images is comparable to the three dynamic prediction models fitted on the seven manually graded image features. In practice, grading the image features is labor-intensive and requires special medical imaging expertise. This prediction model (the time-dependent Cox SNN with fundus images) can directly handle the raw colored fundus images without any further input from clinicians. Overall, joint modeling performed the worst in terms of the two accuracy metrics, except for the long-term prediction (\(\Delta t=7\)).

For the interpretation of the SNN model fitted with longitudinal images, we generated the saliency map to visualize the regions with the most significant impact on the risk score for a given subject. Figure 5b displays a fundus image from a participant's left eye at year 2.3 (since entering the AREDS study), where the eye showed multiple large drusen (the yellow spots). Our model predicts that this eye will develop late-AMD with high probability (60.1% within \(\Delta t=2\) years and 70.5% within \(\Delta t=2.8\) years); in fact, this eye developed the disease 2.7 years later. We can see that the saliency map successfully detects the pathological areas that are predictive of the disease progression.

After completing the model training, it is also possible to identify individuals with a higher risk of developing the disease through the estimated risk score \(\hat{g}\), which is the output of tdCoxSNN. For illustrative purposes, we used the training and test data from the first cross-validation split. The baseline risk score \(\hat{g}\) for 1,582 eyes in the test data was estimated using their baseline fundus images and demographics. We identified two subgroups from a Gaussian mixture model and compared their survival curves (Figure 6). During the follow-up period, a majority of the individuals in the high-risk group developed the disease, whereas those in the low-risk group maintained better health. We further compared the seven baseline manually extracted image features between the two groups (Table 2).
The baseline fundus images in the high-risk group exhibited a greater number of higher-risk image features (\(p<0.001\) for each feature), which are significantly associated with late-AMD (Table 1). The difference between the two groups in subject-level characteristics (excluding those with two eyes of differing risk) was found to be small. This suggests that the identification of subgroups was primarily based on the baseline fundus image.

### Application to PBC2 data

We also applied the proposed method (a continuous-time model) to a publicly available dataset with low-dimensional longitudinal predictors and compared it with three discrete-time deep learning models. These data were collected between 1974 and 1984 for research on primary biliary cirrhosis (PBC) (Fleming and Harrington, 2011). The dataset consists of 312 subjects (1,912 visits), with the event of interest being time to liver transplant and a censoring rate of 55%. The predictors include 12 longitudinal variables (7 continuous lab tests, such as albumin, and 5 categorical, such as liver enlargement) and 3 baseline variables (e.g., gender, age at start of study, and treatment indicator). We followed the data processing steps from Putter and van Houwelingen (2017) for a fair comparison. The LM-tdCoxSNN (fitted for each landmark time) and tdCoxSNN (using all subjects) were fitted on the discrete time scale. The prediction accuracies for the three discrete-time deep learning models were obtained directly from Putzel et al. (2021). Each entry represents an average of the accuracy calculated at 2, 4, 6, and 8 months after the landmark times.

Table 3 presents the prediction accuracy of the five methods across different landmark times. There is no universally optimal deep learning method, and our method is comparable to the existing discrete-time methods when applied to discrete-time data. We observed that tdCoxSNN outperformed LM-tdCoxSNN in terms of both BS and dynamic C-Index, suggesting that the proposed method benefits from retaining more samples during training, especially given the relatively small total sample size (1,912 visits in total).

[Table 3 about here.]

## 8 Discussion

We combined the time-dependent Cox model with the survival neural network to establish a dynamic prediction model on a continuous time scale. The proposed approach not only provides a powerful tool to model the non-linear effects of predictors on the risk but also allows users to directly incorporate longitudinal high-dimensional features, such as images, without modeling the longitudinal process. Due to the neural network nature of the SNN, existing neural networks can be added on top of the time-dependent Cox SNN to take advantage of well-developed deep learning structures for complex data, for example, RNNs for sequential data (Lee and Dernoncourt, 2016) and CNNs for images (He et al., 2016). With the availability of more and more high-dimensional biomarkers with non-linear effects and unstructured longitudinal trends, our approach makes it possible to build dynamic prediction models using complex longitudinal biomarkers (e.g., MRI, metabolomics data) in future research. A limitation of the proposed method is the Last Observation Carried Forward (LOCF) assumption between visits, which may not accurately reflect reality. One potential direction for future research involves integrating joint modeling with machine learning techniques, although this could be computationally demanding.
Additionally, our method was only compared to a limited number of discrete-time machine learning dynamic prediction approaches using a single dataset (Section 7.2). Further efforts are necessary to thoroughly benchmark these methods across various settings. Compared with discrete-time dynamic prediction models (Lee et al., 2019; Tanner et al., 2021; Lin and Luo, 2022), our model does not require the selection of time intervals for discretization, which makes it easier to process the longitudinal data for model fitting. Our model achieves satisfactory prediction performance in both the simulations and the real data analyses. It is worth noting that the same SNN structure and hyper-parameters introduced in Section 6.1 worked well throughout all analyses. The training procedure with 20 epochs makes fitting the SNN very fast. Additional tuning of the hyper-parameters may further improve the prediction accuracy of the model. In this work, we used the saliency map to help interpret the fitted SNN model. One alternative solution for model interpretation is to use a partially linear Cox model, where the risk score consists of a parametric component for predictors of interpretative interest and a nonparametric component modeled through the neural network. Zhong et al. (2022) proved the semiparametric efficiency of the parametric estimator, which allows the method to make inferences on the parametric component while using the SNN to model nuisance covariates. For future research, one may consider the partially linear structure in the time-dependent Cox SNN to improve model interpretability while maintaining the flexibility for modeling complex non-linear effects.
2302.03390
Learning Discretized Neural Networks under Ricci Flow
In this paper, we study Discretized Neural Networks (DNNs) composed of low-precision weights and activations, which suffer from either infinite or zero gradients due to the non-differentiable discrete function during training. Most training-based DNNs in such scenarios employ the standard Straight-Through Estimator (STE) to approximate the gradient w.r.t. discrete values. However, the use of STE introduces the problem of gradient mismatch, arising from perturbations in the approximated gradient. To address this problem, this paper reveals that this mismatch can be interpreted as a metric perturbation in a Riemannian manifold, viewed through the lens of duality theory. Building on information geometry, we construct the Linearly Nearly Euclidean (LNE) manifold for DNNs, providing a background for addressing perturbations. By introducing a partial differential equation on metrics, i.e., the Ricci flow, we establish the dynamical stability and convergence of the LNE metric with the $L^2$-norm perturbation. In contrast to previous perturbation theories with convergence rates in fractional powers, the metric perturbation under the Ricci flow exhibits exponential decay in the LNE manifold. Experimental results across various datasets demonstrate that our method achieves superior and more stable performance for DNNs compared to other representative training-based methods.
Jun Chen, Hanwen Chen, Mengmeng Wang, Guang Dai, Ivor W. Tsang, Yong Liu
2023-02-07T10:51:53Z
http://arxiv.org/abs/2302.03390v4
# Learning Discretized Neural Networks under Ricci Flow

###### Abstract

In this paper, we consider Discretized Neural Networks (DNNs) consisting of low-precision weights and activations, which suffer from either infinite or zero gradients due to the non-differentiable discrete function in the training process. In this case, most training-based DNNs employ the standard Straight-Through Estimator (STE) to approximate the gradient w.r.t. discrete values. However, the STE gives rise to the problem of gradient mismatch, due to the perturbations of the approximated gradient. To address this problem, this paper reveals that this mismatch can be viewed as a metric perturbation in a Riemannian manifold through the lens of duality theory. Further, on the basis of information geometry, we construct the Linearly Nearly Euclidean (LNE) manifold for DNNs as a background to deal with perturbations. By introducing a partial differential equation on metrics, i.e., the Ricci flow, we prove the dynamical stability and convergence of the LNE metric with the \(L^{2}\)-norm perturbation. Unlike the previous perturbation theory, whose convergence rate is a fractional power, the metric perturbation under the Ricci flow can be exponentially decayed in the LNE manifold. The experimental results on various datasets demonstrate that our method achieves better and more stable performance for DNNs than other representative training-based methods.

Footnote 1: In this paper, the continuous weight is relative to the neural network (its data type is full-precision), and the discretized weight is relative to the discretized neural network (its data type is low-precision).

## 1 Introduction

Discretized neural networks (DNNs) (Courbariaux et al., 2016; Li et al., 2016; Zhu et al., 2016) have been proven to be efficient in computing, significantly reducing computational complexity, storage space, power consumption, resources, etc. (Chen et al., 2020). Consider a discretized neural network (DNN) that is to be well-trained: based on the standard chain rule, the gradient w.r.t. the continuous weight1 \(\mathbf{w}\) propagated through a discrete function \(Q(\cdot)\), i.e., \(\frac{\partial L}{\partial\mathbf{w}}=\frac{\partial L}{\partial Q(\mathbf{w})}\frac{\partial Q(\mathbf{w})}{\partial\mathbf{w}}\), suffers from either infinite or zero derivatives. The root cause is that the derivative \(\partial Q(\mathbf{w})/\partial\mathbf{w}\) cannot be calculated. In the backward pass, one can obtain the gradient \(\partial L/\partial Q(\mathbf{w})\), but one needs to update the continuous weight \(\mathbf{w}\) via the gradient \(\partial L/\partial\mathbf{w}\). Since the gradient \(\partial L/\partial\mathbf{w}\) cannot be obtained explicitly, one needs the derivative \(\partial Q(\mathbf{w})/\partial\mathbf{w}\) as a bridge to calculate \(\partial L/\partial\mathbf{w}\) via the chain rule. In order to address the problem of either infinite or zero gradients caused by the non-differentiable discrete function, Hinton (2012) first proposed the Straight-Through Estimator (STE), which yields an immediate connection between \(\partial L/\partial\mathbf{w}\) and \(\partial L/\partial Q(\mathbf{w})\) in backpropagation, thereby bypassing the derivative \(\partial Q(\mathbf{w})/\partial\mathbf{w}\).
The definition of STE was then given by Bengio et al. (2013) and can be summarized as follows: the gradient w.r.t. the discretized weight can be approximated by the gradient w.r.t. the continuous weight with clipping, as shown in Figure 1(a). Subsequently, Courbariaux et al. (2016) applied STE to binarized neural networks and provided an approximated gradient as follows:

\[\frac{\partial L}{\partial\mathbf{w}}=\frac{\partial L}{\partial\operatorname{sign}(\mathbf{w})}\cdot\mathbb{I},\ \ \ \text{where}\ \mathbb{I}:=\left\{\begin{array}{ll}1&\text{if}\quad|\mathbf{w}|\leq 1\\ 0&\text{otherwise}\end{array}\right., \tag{1}\]

where \(\mathbb{I}\) is the indicator function. Note that \(Q(\cdot)\) degenerates to \(\operatorname{sign}(\cdot)\), which is equal to \(+1\) for \(\mathbf{w}\geq 0\) and \(-1\) otherwise, in binarized neural networks.

Figure 1: Comparison of STE and our method. We denote arrows and points as gradients and weights, respectively. In particular, when a point falls on a grid point, the weight is discretized at this time. In the forward pass of DNNs, the continuous weight \(\mathbf{w}\) is mapped to a discrete weight \(Q(\mathbf{w})\) via a discrete function. In the backward pass, the gradient is propagated from \(\partial L/\partial Q(\mathbf{w})\) to \(\partial L/\partial\mathbf{w}\). (a) The STE simply copies the gradient, i.e., \(\partial L/\partial\mathbf{w}=\partial L/\partial Q(\mathbf{w})\). (b) Our method, on the other hand, matches the gradient by introducing the proper metric \(g_{\mathbf{w}}\), i.e., \(\partial L/\partial\mathbf{w}=g_{\mathbf{w}}^{-1}\partial L/\partial Q(\mathbf{w})\), while taking into account the gradient mismatch caused by STE in a Riemannian manifold.

STE has been successfully implemented in the training of binarized neural networks, and it was further extended to ternary neural networks (Li et al., 2016) and arbitrary bit-width discretized neural networks (Zhou et al., 2016). On the other hand, Non-STE methods consist of all techniques that do not rely on STE, e.g., (Hou et al., 2016; Bai et al., 2018; Leng et al., 2018). However, the learning process of Non-STE methods depends heavily on hyper-parameters (Chen et al., 2019), such as the weight partition portion in each iteration (Zhou et al., 2017) and the penalty setting in tuning (Leng et al., 2018). Hence, STE methods are widely used in DNNs rather than Non-STE methods, owing to their simplicity and versatility.

However, STE, when introduced into DNNs, inevitably gives rise to the problem of _gradient mismatch_: the gradient w.r.t. the continuous weight is not strictly equal to the gradient w.r.t. the discretized weight when \(|\mathbf{w}|\leq 1\) (Chen et al., 2019), which compromises the training stability of DNNs (Cai et al., 2017; Liu et al., 2018; Qin et al., 2020). Furthermore, the formula of STE tells us that this problem can be alleviated by modifying the gradient \(\partial L/\partial\mathbf{w}\). Zhou et al. (2016) first proposed to transform the weight \(\mathbf{w}\) into the new one \(\tilde{\mathbf{w}}\) via

\[\tilde{\mathbf{w}}=\frac{\tanh(\mathbf{w})}{\max(|\tanh(\mathbf{w})|)}.\]

By discretizing the new weight \(\tilde{\mathbf{w}}\), the STE then acts on \(\tilde{\mathbf{w}}\).
During back-propagation, the gradient can then be computed as

\[\frac{\partial L}{\partial\mathbf{w}}=\frac{\partial L}{\partial Q(\tilde{\mathbf{w}})}\frac{1-\tanh^{2}(\mathbf{w})}{\max(|\tanh(\mathbf{w})|)}.\]

The authors' purpose is to manually redefine the indicator function \(\mathbb{I}\) as \(\frac{1-\tanh^{2}(\mathbf{w})}{\max(|\tanh(\mathbf{w})|)}\) so that this function provides a smooth transition, avoiding the abrupt clipping of the indicator function near \(\pm 1\). Notably, Chen et al. (2019) proposed to learn \(\partial L/\partial\mathbf{w}\) with a neural network, e.g., fully-connected layers or an LSTM (Sak et al., 2014). Their specific approach is to use a neural network as a shared meta quantizer \(M_{\psi}\), parameterized by \(\psi\) and shared across layers, to replace the gradient via:

\[\frac{\partial L}{\partial\mathbf{w}}=M_{\psi}\left(\frac{\partial L}{\partial Q(\mathbf{w})},\overline{\mathbf{w}}\right)\frac{\partial\overline{\mathbf{w}}}{\partial\mathbf{w}},\]

where \(\overline{\mathbf{w}}\) is the weight from the meta quantizer. With the gradient \(\partial L/\partial Q(\mathbf{w})\) as input, the meta quantizer outputs a new gradient to match \(\partial L/\partial\mathbf{w}\) by updating the weight \(\overline{\mathbf{w}}\) in the training process. Recently, Ajanthan et al. (2021) formulated the binarization of neural networks as a constrained optimization problem by introducing a mirror descent framework (Beck and Teboulle, 2003) to perform gradient descent in the dual space (unconstrained space) with gradients computed in the primal space (discrete space). In particular, by projecting the primal variable \(\mathbf{w}\) into the dual space via \(\tilde{\mathbf{w}}=\tanh(\beta_{k}\mathbf{w})\), the gradient can be obtained as

\[\frac{\partial L}{\partial\mathbf{w}}=\frac{\partial L}{\partial\tilde{\mathbf{w}}}\left(1-\tanh^{2}(\beta_{k}\mathbf{w})\right).\]

As the hyper-parameter \(\beta_{k}\to\infty\), \(\tilde{\mathbf{w}}\) gradually approaches \(\text{sign}(\mathbf{w})\) until the corresponding neural network is fully binarized with an adaptive mirror map.

However, Zhou et al. (2016) obtained the new weight by manually setting the \(\tanh\) function, which can only scale the gradient as a whole and does not fundamentally alleviate the gradient mismatch. On the other hand, although Chen et al. (2019) automatically matched the gradient by learning a new neural network (a meta quantizer), additional errors are also introduced into the gradient propagation by the meta quantizer, further intensifying the problem of gradient mismatch. Subsequently, Ajanthan et al. (2021) bypassed the problem of gradient mismatch because the derivative \(\partial\tilde{\mathbf{w}}/\partial\mathbf{w}=\left(1-\tanh^{2}(\beta_{k}\mathbf{w})\right)\) can be calculated directly, which implies that this method does not maintain discrete weights during training. Therefore, the problem of gradient mismatch still remains to be solved.
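For reference, the standard STE of Equation (1) reduces to a few lines in a modern autodiff framework; the PyTorch sketch below is purely illustrative and independent of any of the cited implementations.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(w). Backward: pass the upstream gradient through
    unchanged wherever |w| <= 1, and zero it elsewhere (Equation (1))."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)  # 1 where |w| <= 1, else 0
```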
### Contributions

In this work, we regard the gradient mismatch between \(\partial L/\partial\mathbf{w}\) and \(\partial L/\partial Q(\mathbf{w})\) as a perturbation phenomenon between these two gradients. By introducing the framework of Riemannian geometry in Figure 1(b), the gradient mismatch is further viewed as a metric perturbation in a Riemannian manifold (Section 2.2) through the lens of duality theory (Amari and Nagaoka, 2000; Amari, 2016). Once the Ricci flow (Sheridan and Rubinstein, 2006), a partial differential equation on metrics, is introduced, the metric perturbation can be exponentially decayed in theory, so that the problem of gradient mismatch is theoretically resolved. The main contributions of this paper are summarized in the following four aspects:

* We propose the LNE manifold endowed with the LNE metric, which is in essence a special form of Ricci-flat metric. According to information geometry (Amari, 2016), we construct LNE manifolds for neural networks as a background to deal with perturbations.
* We reveal the stability of LNE manifolds under the Ricci-DeTurck flow with the \(L^{2}\)-norm perturbation, on the basis of the relationship between the Ricci-DeTurck flow and the Ricci flow. In this way, any Ricci flow starting close to the LNE metric exists for all time and converges to the LNE metric. Furthermore, unlike the previous perturbation theory, whose convergence rate is a fractional power (\(t^{-3/2}\)), the metric perturbation under the Ricci flow decays exponentially in the LNE manifold (\(e^{-t}\)), providing theoretical assurance for effectively solving the problem of gradient mismatch.
* Based on the appealing characteristics of LNE manifolds under Ricci flow, a novel DNN with acceptable complexity, i.e., the Ricci Flow Discretized Neural Network (RF-DNN), is developed by calculating the Ricci curvature in such a way that the selection of coordinate systems is related to the input transformations of neural networks. In essence, the discrete Ricci flow is employed to overcome the problem of gradient mismatch in traditional DNNs.
* Experiments are conducted on several classification benchmark datasets and network structures. Experimental results demonstrate the effectiveness of our geometric method RF-DNN compared with other representative training-based methods.

### Overall Organization

This paper is organized as follows. In Section 2, we introduce the motivation and the Ricci flow. According to the geometric structure measured by the LNE divergence, we deduce the corresponding LNE manifold for neural networks in Section 3. The stability of LNE manifolds under the Ricci-DeTurck flow is proved in Section 4. In Section 5, we calculate the approximated gradient in the LNE manifold to avoid solving the inverse of the LNE metric. In Section 6, we present how to introduce the discrete Ricci flow into DNNs and derive the corresponding algorithm. The experimental results and ablation studies for RF-DNNs are presented in Section 7. Section 8 concludes the entire paper. Proofs are provided in the Appendices.

The Ricci flow on Ricci-flat metrics is known in the literature to be stable for \(C^{0}\) perturbations in the \(L^{\infty}\)-norm (Section 2.4). Based on a Bregman divergence (Bregman, 1967), the LNE metric, a special form of Ricci-flat metrics, is introduced in neural networks via the LNE divergence (Theorem 10). Stability of LNE manifolds under the Ricci-DeTurck flow is then proved (Corollary 15 and Theorem 16). A discretization of the Ricci flow is therefore proposed, which leads to a practical algorithm (RF-DNNs, Algorithm 2).

## 2 Motivation and Formulation

### Background

We start with the basic background for feed-forward DNNs that will be used throughout the paper. This background follows Martens and Grosse (2015), and we list the important notations in Appendix B.
A neural network can be regarded as a function that transforms the input \(\mathbf{a}_{0}\) into the output \(\mathbf{a}_{l}\) through a series of \(l\) layers. For the \(i\)-th layer (\(i\in\{1,2,\ldots,l\}\)), we denote \(\mathbf{W}_{i}\) as the weight matrix, \(\mathbf{s}_{i}\) as the vector of weighted sums, and \(\mathbf{a}_{i}\) as the output vector (also known as the activation). Each layer receives the weighted sum of the outputs from the previous layer and computes its output via a nonlinear function. For a DNN, we need to add a discrete function \(Q(\cdot)\) to discretize the weight matrix \(\mathbf{W}_{i}\) and the activation vector \(\mathbf{a}_{i}\). Furthermore, we mark the discretized weight matrix as \(\hat{\mathbf{W}}_{i}=Q(\mathbf{W}_{i})\) and the discretized activation vector as \(\mathbf{\hat{a}}_{i}=Q(\mathbf{a}_{i})\). The feed-forward of DNNs at each layer is then given as follows:

\[\mathbf{s}_{i}=\mathbf{\hat{W}}_{i}\mathbf{\hat{a}}_{i-1},\quad\mathbf{a}_{i}=f\odot\mathbf{s}_{i},\quad\mathbf{\hat{a}}_{i}=Q(\mathbf{a}_{i}), \tag{2}\]

where \(f\) is a nonlinear (activation) function and \(\odot\) denotes its element-wise application. We vectorize \(\mathbf{\hat{W}}_{i}\) as \(\text{vec}(\mathbf{\hat{W}}_{i})\) by stacking its columns together, where \(\text{vec}(\cdot)\) is the operator that vectorizes a matrix as a column vector. Note that the vectorized weights in each layer before and after discretization are expressed as \(\mathbf{w}\) and \(\mathbf{\hat{w}}\), respectively. We define the new discretized parameter vector as \(\mathbf{\hat{\xi}}=\left[\text{vec}\left(\mathbf{\hat{W}}_{1}\right)^{\top},\text{vec}\left(\mathbf{\hat{W}}_{2}\right)^{\top},\ldots,\text{vec}\left(\mathbf{\hat{W}}_{l}\right)^{\top}\right]^{\top}\), where we ignore the bias vector for brevity. Similarly, we can denote the parameter vector as \(\mathbf{\xi}=\left[\text{vec}\left(\mathbf{W}_{1}\right)^{\top},\text{vec}\left(\mathbf{W}_{2}\right)^{\top},\ldots,\text{vec}\left(\mathbf{W}_{l}\right)^{\top}\right]^{\top}\), where \(\mathbf{\hat{\xi}}=Q(\mathbf{\xi})\).

In frequentist statistics, we represent the loss function as the negative log likelihood w.r.t. the discretized parameter \(\hat{\mathbf{\xi}}\), where the input \(\mathbf{a}_{0}\) can be observed. Minimizing this loss function corresponds to maximizing the likelihood:

\[L(\mathbf{y},\mathbf{z})=-\log p\left(\mathbf{y}|\mathbf{z},\hat{\mathbf{\xi}}\right) \tag{3}\]

where \(\mathbf{y}\) is the target and \(\mathbf{z}\) is the prediction calculated by the DNN. Note that \(p\left(\mathbf{y}|\mathbf{z},\hat{\mathbf{\xi}}\right)\) is the probability density function defined by the conditional probability distribution \(P_{\mathbf{y}|\mathbf{z}}\left(\hat{\mathbf{\xi}}\right)\) of DNNs. As for the back-propagation of DNNs, the details are given in later sections.
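To see the training difficulty concretely, the sketch below runs the feed-forward pass of Equation (2) with \(Q=\operatorname{sign}\) and differentiates through it naively. Since \(\mathrm{d}\operatorname{sign}(x)/\mathrm{d}x=0\) almost everywhere, every weight gradient vanishes; this is precisely the zero-gradient failure described above. The layer sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

def forward_dnn(a0, weights):
    """Eq. (2): s_i = Q(W_i) Q(a_{i-1}), a_i = f(s_i), with Q = sign."""
    a = a0
    for W in weights:
        s = F.linear(torch.sign(a), torch.sign(W))  # discretized a and W
        a = torch.relu(s)                           # nonlinearity f
    return a

weights = [torch.randn(16, 8, requires_grad=True),
           torch.randn(4, 16, requires_grad=True)]
loss = forward_dnn(torch.randn(2, 8), weights).sum()
loss.backward()
print(weights[0].grad.abs().max())  # tensor(0.): gradients are killed
```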
### Motivation

We consider that the root of the gradient mismatch is a perturbation phenomenon between \(\partial L/\partial\mathbf{w}\) and \(\partial L/\partial Q(\mathbf{w})\), i.e.,

\[\frac{\partial L}{\partial\mathbf{w}}=\mathcal{P}\left(\frac{\partial L}{\partial Q(\mathbf{w})}\right)\bigg{|}_{O\left(\frac{\partial L}{\partial Q(\mathbf{w})}\right)} \tag{4}\]

where the input of the perturbation function \(\mathcal{P}\) is the gradient \(\partial L/\partial Q(\mathbf{w})\), and the corresponding perturbation range is \(O\left(\partial L/\partial Q(\mathbf{w})\right)\). If the perturbation range \(O\left(\partial L/\partial Q(\mathbf{w})\right)\) can be significantly eliminated or decayed, then there is an elegant solution to the problem of gradient mismatch. In the framework of perturbation theory on linear spaces (Kato, 2013), the rate of convergence for perturbations is a fractional power, e.g., \(t^{-3/2}\).

Figure 2: The overview of the theoretical ideas in this paper.

Inspired by the mirror descent framework2 (Beck and Teboulle, 2003), one can map the parameter from the primal space to the dual space, and then calculate the gradient in the dual space. When Riemannian metric structures are introduced by means of information geometry (Amari, 2016), naturally3, the gradient mismatch is conclusively viewed as a metric perturbation in a Riemannian manifold.

Footnote 2: Mirror descent induces a non-Euclidean structure by solving iterative optimization problems using different proximity functions (Bubeck et al., 2015).

Footnote 3: Natural gradient descent selects the steepest descent direction along a Riemannian manifold by multiplying the standard gradient by the inverse of the metric tensor (Amari, 1998). It is worth mentioning that mirror descent and natural gradient descent are proven to be equivalent to each other (Raskutti and Mukherjee, 2015), which implies that mirror descent is the steepest descent direction along the Riemannian manifold corresponding to the choice of Bregman divergence.

Specifically, with the help of Definition 1, it is obvious that the gradient \(\partial L/\partial\mathbf{w}\) will be rewritten as \(\tilde{\partial}L/\tilde{\partial}\mathbf{w}\) in a Riemannian manifold. Therefore, based on Equation (4), the problem of gradient mismatch can be formulated as

\[\frac{\tilde{\partial}L}{\tilde{\partial}\mathbf{w}}=g_{\mathbf{w}}^{-1}\frac{\partial L}{\partial Q(\mathbf{w})} \tag{5}\]

where the perturbation item is implied in the metric \(g_{\mathbf{w}}\) and the term \(\frac{\partial L}{\partial Q(\mathbf{w})}\) can be considered as the conventional gradient \(\partial L(\mathbf{w})\) in Definition 1. Then the metric perturbation phenomenon emerges, and the perturbation here refers to the deviation from the original metric. In this way, we present a generalization of STE in a Riemannian manifold, which degenerates into the standard STE when the Riemannian metric \(g\) returns to the Euclidean metric \(\delta\).

**Definition 1**: _(Amari, 1998) The steepest descent direction of \(L(\mathbf{w})\) in a Riemannian manifold, i.e., the **natural gradient descent**, is given by_

\[\tilde{\partial}L(\mathbf{w})=g_{\mathbf{w}}^{-1}\partial L(\mathbf{w})\]

_where \(g^{-1}=(g^{ij})\) is the inverse of the metric \(g=(g_{ij})\) and \(\partial L(\mathbf{w})\) is the conventional gradient,_

\[\partial L(\mathbf{w})=\left[\frac{\partial L(\mathbf{w})}{\partial w_{1}},\cdots,\frac{\partial L(\mathbf{w})}{\partial w_{n}}\right]^{\top}.\]

Subsequently, a key question is what kind of manifold we need to construct to deal with metric perturbations naturally and easily, or, in other words, what makes a manifold "good" in the presence of perturbations. In practice, general relativity gives an excellent example in nature of dealing with small gravitational perturbations under the framework of a Riemannian manifold (Wald, 2010).
To treat the approximation in which gravity is "weak", the spacetime metric is taken to be nearly flat in the context of general relativity, which suffices for most cases, except for phenomena involving gravitational collapse and the large-scale structure of the universe. By assuming that the deviation \(\gamma_{ij}\) of the actual spacetime metric \(g_{ij}=\eta_{ij}+\gamma_{ij}\) from a flat metric \(\eta_{ij}\) is "small", linearized gravity is presented to approximately calculate the gravity in general relativity. An adequate definition of "smallness" in this context is that the components of \(\gamma_{ij}\) are much smaller than \(1\) in the global inertial coordinate system of \(\eta_{ij}\). Such a linearly nearly flat metric greatly simplifies the calculation of "weak" gravity, and the manifolds constructed with such metrics are considered to be sufficient for approximating the manifold with perturbations. Similarly, in this paper, by regarding the Euclidean metric \(\delta_{ij}\) as the flat metric \(\eta_{ij}\), we can naturally define the Linearly Nearly Euclidean (LNE) metric. If we can construct such a metric for DNNs, then a metric perturbation can be handled in the background of LNE manifolds. Motivated by the natural gradient descent (Amari, 2016) that connects neural networks with the Riemannian metric, LNE metrics can be mathematically constructed in neural networks. Specifically, our aim is to first introduce a convex function to derive the LNE divergence with the help of the Bregman divergence (Bregman, 1967), and then construct the LNE metric by introducing the LNE divergence into a neural network. Consequently, the LNE metric appears in the gradient of the LNE manifold, similar to Definition 1. Finally, based on the constructed manifold for DNNs, the only remaining problem is how to rapidly decay the metric perturbation by employing a geometric tool, i.e., the Ricci flow.

Footnote 4: The part from a convex function to the LNE divergence is in the mirror descent framework. Then, the part from the LNE divergence to the LNE metric introduces the idea of information geometry.

In addition, a series of stability proofs illustrates that the Ricci flow can decay the metric perturbation in the case of Ricci-flat metrics. Therefore, as long as we can prove that a small perturbation of the LNE metric is decayed under the Ricci flow, the metric perturbation can be alleviated, which further yields a theoretical solution to the problem of gradient mismatch in the training of DNNs. Unlike the previous perturbation theory, whose convergence rate is only a fractional power, the metric perturbation under the Ricci flow can be exponentially decayed in the LNE manifold. In Figure 2, we give the overview of the theoretical ideas to facilitate sorting out the solution steps.

### Ricci Flow

**Definition 2**: _(Sheridan and Rubinstein, 2006) A **Riemannian metric** on a smooth manifold \(\mathcal{M}\) is a smoothly-varying inner product on the tangent space \(T_{p}\mathcal{M}\) at each point \(p\in\mathcal{M}\), i.e., a (0,2)-tensor which is symmetric and positive-definite at each point of \(\mathcal{M}\). One will usually write \(g\) for a Riemannian metric, and \(g_{ij}\) for its coordinate representation.
A manifold together with a Riemannian metric, \((\mathcal{M},g)\), is called a **Riemannian manifold**._

The concept of Ricci flow was first proposed by Hamilton (Hamilton et al., 1982) on a Riemannian manifold \(\mathcal{M}\), based on Definition 2, with a time-dependent metric \(g(t)\). Given the initial metric \(g_{0}\), the Ricci flow is a partial differential equation that evolves the metric tensor: \[\frac{\partial}{\partial t}g(t) =-2\operatorname{Ric}(g(t)) \tag{6}\] \[g(0) =g_{0}\] where \(\operatorname{Ric}\) denotes the Ricci curvature tensor; the detailed definition can be found in Appendix A. The Ricci flow was introduced to prove Thurston's Geometrization Conjecture and the Poincaré Conjecture, by evolving the metric towards certain geometric structures and topological properties (Sheridan and Rubinstein, 2006).

**Corollary 3**: _(Sheridan and Rubinstein, 2006) The Ricci flow is **strongly parabolic** if there exists \(\delta>0\) such that for all covectors \(\varphi\neq 0\) and all (symmetric) \(h_{ij}=\frac{\partial}{\partial t}g_{ij}(t)\neq 0\), the principal symbol of the differential operator \(-2\operatorname{Ric}\) satisfies_

Footnote 5: The Riemannian metric \(g_{ij}\) is always symmetric based on Definition 2. Hence, \(h_{ij}=\frac{\partial}{\partial t}g_{ij}(t)\) is required to be symmetric.

\[[-2\operatorname{Ric}](\varphi)(h)_{ij}h^{ij}=g^{pq}\left(\varphi_{p}\varphi_{q}h_{ij}+\varphi_{i}\varphi_{j}h_{pq}-\varphi_{q}\varphi_{i}h_{jp}-\varphi_{q}\varphi_{j}h_{ip}\right)h^{ij}>\delta\varphi_{k}\varphi^{k}h_{rs}h^{rs}\]

_where \(h^{ij}\) is the inverse of \(h_{ij}\)._

**Theorem 4**: _(Ladyzhenskaia et al., 1988) Suppose that \(u(t):\mathcal{M}\times[0,T)\to\mathcal{E}\) is a time-dependent section of the vector bundle \(\mathcal{E}\) where \(\mathcal{M}\) is a Riemannian manifold. If the system of the Ricci flow is strongly parabolic at \(u_{0}\) where \(u_{0}=u(0):\mathcal{M}\to\mathcal{E}\), then there exists a unique solution on the time interval \([0,T)\)._

Combining Corollary 3 and Theorem 4, one knows whether there exists a unique solution of the Ricci flow for a short time by checking whether it is strongly parabolic. However, if we choose \(h_{ij}=\varphi_{i}\varphi_{j}\), the left-hand side of the inequality in Corollary 3 is clearly \(0\), so the inequality cannot hold. Since the inequality cannot always be satisfied, the Ricci flow is not always strongly parabolic, and we cannot guarantee the existence of a solution according to Theorem 4. In what follows, the root of this non-parabolicity is analyzed, and a solution is found on the basis of the relationship between the Ricci flow and the Ricci-DeTurck flow. It is possible to understand which parts cause the non-parabolic nature by linearizing the Ricci curvature tensor. We define the linearization of the Ricci curvature as \(\mathcal{D}[\operatorname{Ric}]\) so that \[\mathcal{D}[\operatorname{Ric}]\left(\frac{\partial}{\partial t}g_{ij}(t)\right)=\frac{\partial}{\partial t}\operatorname{Ric}(g_{ij}(t)).\]

**Lemma 5**: _The linearization of \(-2\operatorname{Ric}\) can be rewritten as_

Footnote 6: In this paper, we use the Einstein summation convention (for example, \((AB)_{i}^{j}=A_{i}^{k}B_{k}^{j}\)). When the same index appears twice in one term, once as an upper index and the other time as a lower index, summation is automatically taken over this index even without the summation symbol.
\[\begin{split}&\mathcal{D}[-2\operatorname{Ric}](h)_{ij}=g^{pq}\nabla_{p}\nabla_{q}h_{ij}+\nabla_{i}V_{j}+\nabla_{j}V_{i}+O(h_{ij})\\ &\text{where }\quad V_{i}=g^{pq}\left(\frac{1}{2}\nabla_{i}h_{pq}-\nabla_{q}h_{pi}\right)\text{ and }\ h_{ij}=\frac{\partial}{\partial t}g_{ij}(t).\end{split} \tag{7}\]

**Proof** The proofs can be found in Appendix C.1. \(\blacksquare\)

By carefully observing Lemma 5, one sees that the non-parabolicity of the Ricci flow comes from the terms \(V_{i}\) and \(V_{j}\) (Sheridan and Rubinstein, 2006), rather than from the term \(g^{pq}\nabla_{p}\nabla_{q}h_{ij}\). On the other hand, the term \(O(h_{ij})\) makes no contribution to the principal symbol of \(-2\operatorname{Ric}\), so we can ignore it for this problem. Next, we try to eliminate the impact of the non-parabolic nature on the Ricci flow. Using a time-dependent diffeomorphism \(\varphi(t):\mathcal{M}\to\mathcal{M}\) (with \(\varphi(0)=\mathrm{id}\)), the pullback metrics \(g(t)\) can be expressed as \[g(t)=\varphi^{*}(t)\bar{g}(t), \tag{8}\] which satisfies the Ricci flow equation, where \(\varphi^{*}(t)\) is the pullback through \(\varphi(t)\). The above formula yields the new metric \(\bar{g}(t)\) via the pullback, and the terms \(V_{i}\) and \(V_{j}\) can be reparameterized by choosing \(\varphi(t)\) to form the Ricci-DeTurck flow (w.r.t. \(\bar{g}(t)\)), which is strongly parabolic. Furthermore, the solution follows from the DeTurck trick (DeTurck, 1983), which uses a time-dependent reparameterization of the manifold: \[\begin{split}\frac{\partial}{\partial t}\bar{g}(t)&=-2\operatorname{Ric}(\bar{g}(t))-\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t)\\ \bar{g}(0)&=\bar{g}_{0}.\end{split} \tag{9}\] See Appendix C.2 for details. Thus, the Ricci-DeTurck flow has a unique solution for a short time. For the long-time behavior, please refer to Appendix C.3.

### Literature

For a Riemannian \(n\)-dimensional manifold \((\mathcal{M}^{n},g)\) that is isometric to the Euclidean \(n\)-dimensional space \((\mathbb{R}^{n},\delta)\), Schnurer et al. (Schnurer et al., 2007) have shown the stability of Euclidean space under the Ricci flow for a small \(C^{0}\) perturbation. Koch and Lamm (Koch and Lamm, 2012) have given the stability of the Euclidean space along the Ricci flow in the \(L^{\infty}\)-norm. Moreover, for the decay of the \(L^{\infty}\)-norm on Euclidean space, Appleton (Appleton, 2018) has provided a proof using a different method. On the other hand, for a Ricci-flat metric with small perturbations, Guenther et al. (Guenther et al., 2002) proved that such metrics converge under the Ricci flow. Considering the stability of integrable and closed Ricci-flat metrics, Sesum (Sesum, 2006) has proved that the convergence rate is exponential because the spectrum of the Lichnerowicz operator is discrete. Furthermore, Deruelle and Kroncke (Deruelle and Kroncke, 2021) have proved that an asymptotically locally Euclidean Ricci-flat metric is dynamically stable under the Ricci flow, with the \(L^{2}\cap L^{\infty}\) perturbation on non-flat and non-compact Ricci-flat manifolds. Our work is also discussed on Ricci-flat manifolds.

## 3 Neural Networks in LNE Manifolds

The aim of this section is to build an LNE manifold via information geometry (Amari, 2016), thereby paving the way for the introduction of the Ricci flow.
Specifically, we first introduce a convex function (Equation (14)) to derive the LNE divergence (Theorem 10) with the help of the Bregman divergence (Definition 8), and then construct the LNE metric (Equation (16)) by introducing the LNE divergence into a neural network. Consequently, the LNE metric appears in the steepest descent gradient (Lemma 11) of the LNE manifold. Of course, the mirror descent algorithm (Beck and Teboulle, 2003) can also equivalently construct the relationship between divergences and gradients, but it lacks the geometric meaning (manifold and metric) that is exactly what we need.

### Neural Network Manifold

A neural network is composed of a large number of neurons connected with each other. The set of all such networks forms a manifold, where the weights represented by the neuron connections can be regarded as the coordinate system. All \(n\times n\) matrices form an \(n^{2}\)-dimensional manifold. In particular, the symmetric and positive-definite matrices form an \(n(n+1)/2\)-dimensional submanifold embedded in the manifold of all matrices (Sheridan and Rubinstein, 2006). Note that such a matrix is described as a metric in geometry. Since symmetric and positive-definite matrices have many advantages and are widely used in real-world applications, our method will also construct such symmetric and positive-definite matrices (metrics) with appealing geometric characteristics in neural networks.

**Remark 6**: _Compared with straight lines in Euclidean space, geodesics are the straightest possible lines that we can draw in a Riemannian manifold. Given a geodesic, there exists a unique non-Euclidean coordinate system. Once the curved coordinate system is selected in a Riemannian manifold, the symmetric and positive-definite metric is also given based on Definition 2. That geometry-based metric can describe the properties of manifolds, such as curvature (Helgason, 2001)._

### Euclidean Space and Divergence

From the viewpoint of information geometry, the metric can be deduced from a divergence satisfying certain criteria (Basseville, 2013), which are summarized in Definition 7.

**Definition 7**: _(Amari, 2016) \(D[P:Q]\) is called a **divergence** when it satisfies the following criteria:_

_(1) \(D[P:Q]\geq 0\),_

_(2) \(D[P:Q]=0\) when and only when \(P=Q\),_

_(3) When \(P\) and \(Q\) are sufficiently close to each other, and their coordinates are denoted by \(\boldsymbol{\xi}_{P}\) and \(\boldsymbol{\xi}_{Q}=\boldsymbol{\xi}_{P}+d\boldsymbol{\xi}\) respectively, the Taylor expansion of the divergence can be written as_

\[D[\boldsymbol{\xi}_{P}:\boldsymbol{\xi}_{Q}]=\frac{1}{2}\sum_{i,j}g_{ij}(\boldsymbol{\xi}_{P})d\xi_{i}d\xi_{j}+O(|d\boldsymbol{\xi}|^{3}), \tag{10}\]

_and the Riemannian metric \(g_{ij}\) is positive-definite, depending on \(\boldsymbol{\xi}_{P}\)._

When \(P\) and \(Q\) are sufficiently close, with the two points expressed in coordinates as \(\boldsymbol{\xi}_{P}\) and \(\boldsymbol{\xi}_{Q}\) (column vectors), the square of the infinitesimal distance \(ds^{2}\) between them can be defined using Definition 7 as: \[ds^{2}=2D[\boldsymbol{\xi}_{P}:\boldsymbol{\xi}_{Q}]=\sum_{i,j}g_{ij}(\boldsymbol{\xi}_{P})d\xi_{i}d\xi_{j} \tag{11}\] where \(d\boldsymbol{\xi}\) denotes a sufficiently small coordinate variation between the coordinates \(\boldsymbol{\xi}_{P}\) and \(\boldsymbol{\xi}_{Q}\).
Here we ignore the higher-order term \(O(|d\boldsymbol{\xi}|^{3})\) for simplicity and convenience. A manifold \(\mathcal{M}\) is said to be Riemannian when a positive-definite metric \(g_{ij}\) is defined on \(\mathcal{M}\) and the square of the local distance between two nearby points \(\mathbf{\xi}_{P}\) and \(\mathbf{\xi}_{Q}\) is given by Equation (11). A divergence \(D\) provides \(\mathcal{M}\) with a Riemannian structure. Using an orthonormal Cartesian coordinate system in Euclidean space, the Euclidean divergence is naturally defined as half of the square of the Euclidean distance between two nearby points \(\mathbf{\xi}\) and \(\mathbf{\xi}^{\prime}\) \[D_{E}[\mathbf{\xi}:\mathbf{\xi}^{\prime}]=\frac{1}{2}\sum_{i}(\xi_{i}-\xi_{i}^{\prime})^{2}. \tag{12}\] The Riemannian metric \(g_{ij}\) degenerates into the Euclidean metric \(\delta_{ij}\) in this case, so that \[ds^{2}=2D_{E}[\mathbf{\xi}:\mathbf{\xi}+d\mathbf{\xi}]=\sum_{i}(d\xi)^{2}=\sum_{i,j}\delta_{ij}d\xi_{i}d\xi_{j}. \tag{13}\] Obviously, the Euclidean divergence satisfies the criteria of a divergence. Note that the Euclidean metric \(\delta_{ij}\) is the identity matrix \(\mathbf{I}\), where we retain the notation of metrics for the sake of representation habits in geometry.

### LNE Manifold and Divergence

Recall that, in general relativity (Wald, 2010), the complete Riemannian manifold \((\mathcal{M},g)\) endowed with a linearly nearly flat spacetime metric is considered to solve the Newtonian limit by linearized gravity. The form of this metric is \(g_{ij}=\eta_{ij}+\gamma_{ij}\), where \(\eta_{ij}\) represents a flat Minkowski metric whose background is special relativity and \(\gamma_{ij}\) is small. In practice, this is an excellent theory of small gravitational perturbations when gravity is "weak".

Footnote 7: The link between the LNE manifold and general relativity can be found in Section 2.2.

Similarly, we give a metric \(g_{ij}=\delta_{ij}+\gamma_{ij}\) in a Riemannian \(n\)-manifold, where \(\delta_{ij}\) represents a flat Euclidean metric. An adequate definition of "smallness" in this context is that the components of \(\gamma_{ij}\) are much smaller than 1 in the global inertial coordinate system of \(\delta_{ij}\). Therefore, we can systematically develop the LNE metric to cope with small perturbations.

#### 3.3.1 Convex Function and Bregman Divergence

In order to construct the LNE manifold endowed with the LNE metric in the neural network, according to Definition 7, we can introduce a divergence to obtain the expression of the LNE metric, similar to the relationship between the Euclidean metric and divergence. The premise of constructing a divergence is to find a suitable convex function. There are many applications of different convex functions in physics and optimization (Bubeck et al., 2015). Thus, we can introduce a nonlinear function \(\Phi(\mathbf{\xi})\) of the coordinates \(\mathbf{\xi}\) as a convex function with a certain geometric structure to satisfy the needs of constructing the LNE divergence.
A twice-differentiable function is convex if and only if its Hessian is positive semi-definite (and strictly convex if the Hessian is positive-definite), \[H(\mathbf{\xi})=\left(\frac{\partial^{2}}{\partial\xi_{i}\partial\xi_{j}}\Phi(\mathbf{\xi})\right).\]

**Definition 8**: _(Bregman, 1967) The **Bregman divergence** \(D_{B}[\boldsymbol{\xi}:\boldsymbol{\xi}^{\prime}]\) is defined as the difference between a convex function \(\Phi(\boldsymbol{\xi})\) and its tangent hyperplane \(z=\Phi(\boldsymbol{\xi}^{\prime})+(\boldsymbol{\xi}-\boldsymbol{\xi}^{\prime})\nabla\Phi(\boldsymbol{\xi}^{\prime})\), depending on the Taylor expansion at the point \(\boldsymbol{\xi}^{\prime}\):_

\[D_{B}[\boldsymbol{\xi}:\boldsymbol{\xi}^{\prime}]=\Phi(\boldsymbol{\xi})-\Phi(\boldsymbol{\xi}^{\prime})-(\boldsymbol{\xi}-\boldsymbol{\xi}^{\prime})\nabla\Phi(\boldsymbol{\xi}^{\prime}).\]

By drawing a tangent hyperplane touching it at the point \(\boldsymbol{\xi}^{\prime}\), \[z=\Phi(\boldsymbol{\xi}^{\prime})+(\boldsymbol{\xi}-\boldsymbol{\xi}^{\prime})\nabla\Phi(\boldsymbol{\xi}^{\prime}),\] we can write the distance between the convex function \(\Phi(\boldsymbol{\xi})\) and the tangent hyperplane \(z\) as the Bregman divergence. Considering it as a function of the two points \(\boldsymbol{\xi}\) and \(\boldsymbol{\xi}^{\prime}\), one can easily verify that it satisfies the criteria of a divergence. Since \(\Phi(\boldsymbol{\xi})\) is convex, the graph of \(\Phi(\boldsymbol{\xi})\) is always above the tangent hyperplane, touching it at \(\boldsymbol{\xi}^{\prime}\). The graph describing their relationship is shown in Figure 3, where \(z\) is the vertical axis of the graph.

Figure 3: The divergence \(D[\boldsymbol{\xi}:\boldsymbol{\xi}^{\prime}]\) is viewed as the distance between the convex function \(\Phi(\boldsymbol{\xi})\) and the tangent hyperplane \(z\), where \(z\) depends on the point \(\boldsymbol{\xi}^{\prime}\) at which the supporting hyperplane with normal vector \(\boldsymbol{n}=\nabla\Phi(\boldsymbol{\xi}^{\prime})\) is defined.

**Remark 9**: _In fact, a divergence is derived from a convex function in the form of the Bregman divergence. By choosing different convex functions, for example, we can easily obtain the Euclidean divergence from the Bregman divergence by defining the convex function \(\Phi(\boldsymbol{\xi})=1/2\sum_{i}\xi_{i}^{2}\) in a Euclidean space. Besides, given the convex function \(\Phi(\boldsymbol{\xi})=\sum_{i}p(\boldsymbol{x},\xi_{i})\log p(\boldsymbol{x},\xi_{i})\) where \(\sum_{i}p(\boldsymbol{x},\xi_{i})=\sum_{i}p(\boldsymbol{x},\xi_{i}^{\prime})=\int p(\boldsymbol{x},\boldsymbol{\xi})d\boldsymbol{\xi}=\int p(\boldsymbol{x},\boldsymbol{\xi}^{\prime})d\boldsymbol{\xi}^{\prime}=1\) is satisfied, the Bregman divergence is the same as the KL divergence._

#### 3.3.2 LNE Divergence and Gradient

Similar to the Bregman divergence with a convex function, we try to construct a new convex function to obtain the LNE divergence, from which the LNE metric emerges naturally on the basis of Definition 7. Inspired by the previous work (Ajanthan et al., 2021), we propose a novel convex function, which satisfies symmetry and geometrically yields an easy-to-compute metric with the linearly nearly Euclidean nature: \[\Phi(\mathbf{\xi})=\sum_{i}\frac{1}{\tau^{2}}\log\left(\cosh(\tau\xi_{i})\right) \tag{14}\] where \(\tau\) is a constant parameter that controls how close the structure is to the linear Euclidean one.
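As a quick numerical illustration of Definition 8 with the convex function of Equation (14), whose closed-form divergence is derived next in Theorem 10, the following minimal NumPy sketch computes the Bregman divergence and checks the first two divergence criteria of Definition 7; the sample points and \(\tau\) are arbitrary illustrative choices.

```python
import numpy as np

def phi(xi, tau=1.0):
    # Convex function of Equation (14): (1/tau^2) * sum_i log cosh(tau * xi_i).
    return np.sum(np.log(np.cosh(tau * xi))) / tau**2

def grad_phi(xi, tau=1.0):
    # Its gradient: (1/tau) * tanh(tau * xi).
    return np.tanh(tau * xi) / tau

def bregman(xi, xi_prime, tau=1.0):
    # Definition 8: Phi(xi) - Phi(xi') - (xi - xi') . grad Phi(xi').
    return phi(xi, tau) - phi(xi_prime, tau) - (xi - xi_prime) @ grad_phi(xi_prime, tau)

xi = np.array([0.3, -0.7])
assert bregman(xi, xi) == 0.0                      # criterion (2) of Definition 7
print(bregman(xi, xi + np.array([1e-2, -1e-2])))   # small and non-negative
```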
**Theorem 10**: _By introducing a convex function \(\Phi\) defined by Equation (14) into Definition 8, the **LNE divergence** between two points \(\mathbf{\xi}\) and \(\mathbf{\xi}^{\prime}\) yields_

\[\begin{split} D_{LNE}[\mathbf{\xi}^{\prime}:\mathbf{\xi}]&=\sum_{i}\left[\frac{1}{\tau^{2}}\log\frac{\cosh(\tau\xi_{i}^{\prime})}{\cosh(\tau\xi_{i})}-\frac{1}{\tau}(\xi_{i}^{\prime}-\xi_{i})\tanh(\tau\xi_{i})\right]\\ &\approx\frac{1}{2}\sum_{i,j}\left\{\delta_{ij}-\left[\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]_{ij}\right\}d\xi_{i}d\xi_{j}.\end{split} \tag{15}\]

The detailed proofs can be found in Appendix E.1. Comparing with Definition 7, we know that the **LNE metric** corresponding to the LNE divergence is \[\begin{split}& g(\mathbf{\xi})=\delta_{ij}-\left[\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]_{ij}\\ &=\begin{bmatrix}1-\tanh(\tau\xi_{1})\tanh(\tau\xi_{1})&\cdots&-\tanh(\tau\xi_{1})\tanh(\tau\xi_{n})\\ \vdots&\ddots&\vdots\\ -\tanh(\tau\xi_{n})\tanh(\tau\xi_{1})&\cdots&1-\tanh(\tau\xi_{n})\tanh(\tau\xi_{n})\end{bmatrix}.\end{split} \tag{16}\]

Based on Section 3.1, we are able to employ the parameters of a neural network to construct the LNE metric (for a neural network, the coordinate \(\mathbf{\xi}\) is the parameter vector). As a result, we can describe the neural network in the LNE manifold, as measured by the LNE divergence based on Theorem 10. Naturally, the steepest descent gradient in the LNE manifold is given by Lemma 11, which is similar to the natural gradient in Definition 1.

**Lemma 11**: _The steepest descent gradient measured by the LNE divergence is defined as_ \[\tilde{\partial}_{\mathbf{\xi}}=g(\mathbf{\xi})^{-1}\partial_{\mathbf{\xi}}=\left[\delta-\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]^{-1}\partial_{\mathbf{\xi}}. \tag{17}\]

The proofs can be found in Appendix E.2. On the constructed LNE manifold, we can introduce the Ricci flow to decay the metric perturbation w.r.t. the LNE metric.

### Convergence Analysis

We can now describe the mirror descent via the Bregman divergence associated to \(\Phi\). Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a convex open set such that \(\mathcal{X}\subset\bar{\mathcal{C}}\) (\(\bar{\mathcal{C}}\) is the closure of \(\mathcal{C}\)) and \(\mathcal{X}\cap\mathcal{C}\neq\emptyset\). When \(\mathbf{x}^{0}\in\arg\min_{\mathbf{x}\in\mathcal{X}\cap\mathcal{C}}\Phi(\mathbf{x})\) is the initial point, for iteration \(k\geq 0\) and step size \(\eta\), the update of the mirror descent can be written as: \[\begin{split}\nabla\Phi(\mathbf{y}^{k+1})&=\nabla\Phi(\mathbf{x}^{k})-\eta\partial^{k}\\ \mathbf{x}^{k+1}&=\arg\min_{\mathbf{x}\in\mathcal{X}\cap\mathcal{C}}D_{B}[\mathbf{x},\mathbf{y}^{k+1}]\end{split} \tag{18}\] where \(\partial^{k}\in\partial F(\mathbf{x}^{k})\) and \(\mathbf{y}^{k+1}\in\mathcal{C}\).

**Theorem 12**: _(Bubeck et al., 2015) Let \(\Phi\) be a mirror map \(\rho\)-strongly convex on \(\mathcal{X}\cap\mathcal{C}\) w.r.t. \(\|\cdot\|\). Let \(R^{2}=\sup_{\mathbf{x}\in\mathcal{X}\cap\mathcal{C}}\Phi(\mathbf{x})-\Phi(\mathbf{x}^{0})\), and \(F\) be convex and \(L\)-Lipschitz w.r.t. \(\|\cdot\|\). Then mirror descent with \(\eta=\frac{R}{L}\sqrt{\frac{2\rho}{t}}\) satisfies_

\[F\left(\frac{1}{t}\sum_{k=0}^{t-1}\mathbf{x}^{k}\right)-F(\mathbf{x}^{*})\leq RL\sqrt{\frac{2}{\rho t}}.\]

In this paper, the mirror descent is obtained by taking Equation (14) on \(\mathcal{C}=\mathbb{R}^{n}\). The function \(\Phi(\mathbf{\xi})\) is a mirror map \(1\)-strongly convex w.r.t. \(\|\cdot\|_{2}\), and the associated Bregman divergence is given by Theorem 10.
In that case, the mirror descent is equivalent to projected subgradient descent, and the rate of convergence is \(O(1/\sqrt{t})\), which is consistent with the standard mirror descent (Theorem 12). Of course, this gives the same convergence as (Ajanthan et al., 2021). It is easy to verify that the projection w.r.t. this Bregman divergence on the simplex \(\Delta_{n}=\{\xi\in\mathbb{R}^{n}:\sum_{i=1}^{n}\xi_{i}=1\}\) amounts to a simple renormalization (Bubeck et al., 2015). For \(\mathcal{X}=\Delta_{n}\), one has \(\mathbf{x}^{0}=(1/n,\cdots,1/n)\) and \(R^{2}=\log(\cosh n)\). In that case, the mirror descent with Equation (14) achieves a rate of convergence of order \(\sqrt{\frac{\log(\cosh n)}{t}}\).

## 4 Evolution of LNE Manifolds under Ricci Flow

In this section, we focus on LNE metrics under the Ricci flow, and prove that the evolution of LNE manifolds behaves well in terms of stability for all time, i.e., the Ricci flow can exponentially decay the \(L^{2}\)-norm perturbation of the LNE metric.

### Relationship between LNE Metrics and Ricci Flow

To facilitate the handling of the metric perturbation, we have given the LNE metric in Equation (16), a special form of Ricci-flat metrics (Guenther et al., 2002; Deruelle and Kroncke, 2021), which means that the Ricci-flat metrics on noncompact manifolds are LNE. In particular, the constructed metric \(g(\mathbf{\xi})\) in Equation (16) is LNE, which is equivalent to either \(g_{0}\) under the Ricci flow or \(\bar{g}_{0}\) under the Ricci-DeTurck flow, since \(g_{0}\) and \(\bar{g}_{0}\) are diffeomorphic to each other on the basis of Equation (8). The full statement of the LNE metric is the linearly nearly Euclidean Ricci-flat metric in Definition 13, given on the basis of the previous work (Deruelle and Kroncke, 2021).

**Definition 13**: _(Deruelle and Kroncke, 2021) A complete Riemannian \(n\)-manifold \((\mathcal{M}^{n},\bar{g}_{0})\) is said to be LNE with one end of order \(\iota>0\) if there exists a compact set \(K\subset\mathcal{M}\), a radius \(r\), a point \(x\) in \(\mathcal{M}\) and a diffeomorphism satisfying \(\phi:\mathcal{M}\backslash K\rightarrow(\mathbb{R}^{n}\backslash B(x,r))/SO(n)\), where \(B(x,r)\) is the ball and \(SO(n)\) is a finite group acting freely on \(\mathbb{R}^{n}\backslash\{0\}\). Then_ \[\left|\partial^{k}(\phi_{*}\gamma)\right|_{\delta}=O(r^{-\iota-k})\ \ \forall k\geq 0 \tag{19}\] _holds on \((\mathbb{R}^{n}\backslash B(x,r))/SO(n)\). The LNE metric \(\bar{g}_{0}\) can be linearly decomposed into a form containing the Euclidean metric \(\delta\) and the deviation \(\gamma\):_ \[\bar{g}_{0}=\delta+\gamma. \tag{20}\]

### Finite Time Existence

In this subsection, we consider the linear stability and integrability of the LNE manifold \((\mathcal{M}^{n},g_{0})\). Fortunately, similar to the previous proofs (Koiso, 1983; Besse, 2007), we know that \((\mathcal{M}^{n},g_{0})\) is integrable and linearly stable based on Definition 29 and Definition 30.
Due to the diffeomorphic relationship between the Ricci flow and the Ricci-DeTurck flow, we introduce a metric perturbation for the Ricci-DeTurck flow, and Equation (9) can be further reformulated as follows: \[\begin{split}\frac{\partial}{\partial t}\bar{g}(t)&=-2\operatorname{Ric}(\bar{g}(t))-\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t)\\ \bar{g}(0)&=\bar{g}_{0}+d\end{split} \tag{21}\] where \(d=\bar{g}(0)-\bar{g}_{0}\) is a metric perturbation deviating from the LNE metric \(\bar{g}_{0}\). Corollary 15 guarantees the finite time existence of the Ricci-DeTurck flow w.r.t. \(L^{2}\)-norm perturbations, and then provides the necessary premise for proving its all time convergence.

**Lemma 14**: _(Bamler, 2010, 2011) Let \((\mathcal{M}^{n},\bar{g}_{0})\) be a complete Ricci-flat \(n\)-manifold. If \(\bar{g}(0)\) is a metric satisfying \(\|\bar{g}(0)-\bar{g}_{0}\|_{L^{\infty}}<\epsilon\) where \(\epsilon>0\), then there exists a constant \(C<\infty\) and a unique Ricci-DeTurck flow \(\bar{g}(t)\) that satisfies_ \[\|\bar{g}(t)-\bar{g}_{0}\|_{L^{\infty}}<C\|\bar{g}(0)-\bar{g}_{0}\|_{L^{\infty}}<C\cdot\epsilon. \tag{22}\]

**Corollary 15**: _Let \((\mathcal{M}^{n},\bar{g}_{0})\) be the LNE \(n\)-manifold. For a Ricci-DeTurck flow \(\bar{g}(t)\) on a maximal time interval \(t\in[0,T)\) and \(k\in\mathbb{N}\), there exist constants \(C_{k}=C_{k}(\bar{g}_{0},T)\) such that_ \[\|\nabla^{k}d(t)\|_{L^{2}}\leq C_{k}\cdot t^{-k/2} \tag{23}\] _where \(d(t)=\bar{g}(t)-\bar{g}_{0}\) is the time-evolving perturbation._

**Proof** When Lemma 14 is satisfied in finite time, according to (Deruelle and Kroncke, 2021), the Ricci-DeTurck flow with the LNE metric w.r.t. the \(L^{2}\)-norm perturbation exists. \(\blacksquare\)

### All Time Convergence for \(L^{2}\)-norm Perturbations

In order to prove the all time stability of LNE metrics under the Ricci-DeTurck flow, we need to construct \(\bar{g}_{0}(t)\), a family of Ricci-flat reference metrics with \(\frac{\partial}{\partial t}\bar{g}_{0}(t)=O((\bar{g}(t)-\bar{g}_{0}(t))^{2})\). Let \[\mathcal{F}=\left\{\bar{g}(t)\in\mathcal{M}^{n}\;\middle|\;2\operatorname{Ric}(\bar{g}(t))+\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t)=0\right\}\] be the set of stationary points under the Ricci-DeTurck flow. Then, we are able to establish a manifold via an \(L^{2}\)-neighbourhood \(\mathcal{U}\) of the integrable \(\bar{g}_{0}\) in the space of metrics \[\tilde{\mathcal{F}}=\mathcal{F}\cap\mathcal{U}. \tag{24}\] For all \(\bar{g}\in\tilde{\mathcal{F}}\), the terms \(\operatorname{Ric}(\bar{g}(t))=0\) and \(\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t)=0\) hold individually, based on the previous work (Deruelle and Kroncke, 2021). In this way, \(d(t)-d_{0}(t)=\bar{g}(t)-\bar{g}_{0}(t)\) holds because we write \(d_{0}(t)=\bar{g}_{0}(t)-\bar{g}_{0}\). According to Theorem 16, it is obvious that the \(L^{2}\)-norm metric perturbation w.r.t. the LNE metric can be dynamically decayed by the Ricci-DeTurck flow. See the details in Appendix D.

**Theorem 16**: _Let \((\mathcal{M}^{n},\bar{g}_{0})\) be the LNE \(n\)-manifold which is linearly stable and integrable.
For any metric \(\bar{g}(t)\in\mathcal{B}_{L^{2}}(\bar{g}_{0},\epsilon_{2})\), where \(\epsilon_{2}>0\) is a constant, there is a complete Ricci-DeTurck flow \((\mathcal{M}^{n},\bar{g}(t))\) starting from \(\bar{g}(t)\) and converging to the LNE metric \(\bar{g}(\infty)\in\mathcal{B}_{L^{2}}(\bar{g}_{0},\epsilon_{1})\), where \(\epsilon_{1}\) is a small enough constant._

**Proof** By Lemma 3.1, we have a constant \(\epsilon_{2}>0\) such that \(d(t)\in\mathcal{B}_{L^{2}}(0,\epsilon_{2})\) holds. By Lemma 3.1 (in the second step) and Corollary 3.1 (in the third step), we can obtain \[\left\|d_{0}(T)\right\|_{L^{2}}\leq C\int_{1}^{T}\left\|\frac{\partial}{\partial t}d_{0}(t)\right\|_{L^{2}}\mathrm{d}t\] \[\quad\leq C\int_{1}^{T}\left\|\nabla^{\bar{g}_{0}}\left(d(t)-d_{0}(t)\right)\right\|_{L^{2}}^{2}\mathrm{d}t\] \[\quad\leq C\left\|d(1)-d_{0}(1)\right\|_{L^{2}}^{2}\leq C\|d(1)\|_{L^{2}}^{2}\leq C\cdot\left(\epsilon_{2}\right)^{2}.\] Furthermore, we can obtain from the above formulas \[\left\|d(T)-d_{0}(T)\right\|_{L^{2}}\leq\left\|d(1)-d_{0}(1)\right\|_{L^{2}}\leq C\cdot\epsilon_{2}.\] By the triangle inequality, we get \[\left\|d(T)\right\|_{L^{2}}\leq C\cdot\left(\epsilon_{2}\right)^{2}+C\cdot\epsilon_{2}.\] Followed by Corollary 3.1 and Lemma 3.1, \(T\) should be pushed further outward, i.e., \[\lim_{t\to+\infty}\sup\left\|\frac{\partial}{\partial t}d_{0}(t)\right\|_{L^{2}}\leq\lim_{t\to+\infty}\sup\left\|\nabla^{\bar{g}_{0}}\left(d(t)-d_{0}(t)\right)\right\|_{L^{2}}^{2}=0.\] Thus, \(\bar{g}(t)\) will converge to \(\bar{g}(\infty)=\bar{g}_{0}+d_{0}(\infty)\) as \(t\) approaches \(+\infty\), based on elliptic regularity. In other words, \(d(t)-d_{0}(t)\) will converge to \(0\) as \(t\) goes to \(+\infty\) w.r.t. all Sobolev norms (Minerbe, 2009), \[\lim_{t\to+\infty}\left\|d(t)-d_{0}(t)\right\|_{L^{2}}\leq\lim_{t\to+\infty}C\left\|\nabla^{\bar{g}_{0}}\left(d(t)-d_{0}(t)\right)\right\|_{L^{2}}=0.\] Any Ricci-DeTurck flow starting close to the LNE metric exists for all time, and the Ricci-DeTurck flow converges to the LNE metric, following (Deruelle and Kroncke, 2021). \(\blacksquare\)

### Perturbation Analysis

Having proved the finite time existence of the Ricci-DeTurck flow with \(L^{2}\)-norm perturbations (Corollary 15), we then proved that \(L^{2}\)-norm perturbations w.r.t. the LNE metric converge for all time under the Ricci-DeTurck flow (Theorem 16). Based on (Sesum, 2006), we further obtain \(|\bar{g}(t)-\bar{g}_{0}(\infty)|<Ce^{-\epsilon_{2}t}\), so that the metric perturbation converges exponentially. Recall that we can reparameterize \(\bar{g}(t)\) to \(g(t)=\varphi^{*}(t)\bar{g}(t)\) via the pullback. Since the perturbation comes entirely from \(\bar{g}(t)\) and has nothing to do with the time-dependent diffeomorphism \(\varphi^{*}(t)\) in essence, it also exponentially converges for \(g(t)\) under the Ricci flow when the solution of the Ricci flow exists. Moreover, the metric \(g(\mathbf{\xi})=\delta_{ij}-\left[\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]_{ij}\) constructed for the neural network in Section 3 is a kind of LNE metric (based on Definition 13), so the perturbation of this metric satisfies the exponential decay under the Ricci flow. In what follows, the gradient mismatch in DNNs can be effectively overcome by employing the LNE manifolds under the Ricci flow.

## 5 Discretized Neural Networks in LNE Manifolds

Up to now, we have theoretically solved the problem of gradient mismatch.
Specifically, we have completed the construction of LNE manifolds for neural networks in Section 3 and exponentially decayed the metric perturbation in LNE manifolds in Section 4. However, based on Lemma 11, the steepest descent gradient in the LNE manifold involves the inverse of the LNE metric, which is difficult to calculate in practice. Therefore, in this section, our aim is to approximate the inverse of the LNE metric and further obtain the approximated gradient in the LNE manifold, leading to a practical algorithm for training DNNs in the LNE manifold.

### Gradient Computation in Discretized Neural Networks

Recall that Courbariaux et al. (2016) applied STE to binarized neural networks, whose form is given in Equation (1). Then, Zhou et al. (2016) applied STE to arbitrary bit-width discretized neural networks. Similarly, the generalization of STE in discretized neural networks yields \[\frac{\partial L}{\partial\mathbf{w}}=\frac{\partial L}{\partial Q(\mathbf{w})}\cdot\mathbb{I}. \tag{25}\]

There is still a contradiction to be resolved before the LNE manifold is introduced into the DNN. Based on Lemma 11, the LNE manifold is defined on the parameter \(\mathbf{\xi}\) of all layers in a neural network. However, the gradient in back-propagation is defined layer by layer, i.e., on the weight \(\mathbf{w}\) of each layer, so the gradient update cannot be associated with the LNE manifold. Fortunately, the LNE manifold can be redefined layer by layer by replacing \(\mathbf{\xi}\) with \(\mathbf{w}\), which is equivalent to defining the LNE manifold for each layer. On the basis of Lemma 11, the steepest descent gradient is rewritten as \[\tilde{\partial}_{\mathbf{w}}=g^{-1}(\mathbf{w})\partial_{\mathbf{w}}=\left[\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]^{-1}\partial_{\mathbf{w}}, \tag{26}\] which can be used for the gradient computation in DNNs, i.e., \[\frac{\tilde{\partial}L}{\tilde{\partial}\mathbf{w}}=\left[\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]^{-1}\frac{\partial L}{\partial Q(\mathbf{w})}. \tag{27}\]

The proposed gradient above means that we are in the framework of solving the problem of gradient mismatch, similar to Equation (5). Note that the metric at this time is layer-by-layer LNE. On the other hand, the gradient computation involves the inverse of the LNE metric, which greatly consumes computing resources. Hence, we propose two methods for approximating the gradient of DNNs in LNE manifolds: a weak approximation and a strong approximation. The approximated gradient is defined as the direction in parameter space that gives the largest variation in the objective per unit of variation along the layer-by-layer LNE manifold.

### Strong Approximation

The \(n\times n\) symmetric metric \(g(\mathbf{w})\) can be decomposed into a combination of entries \(P\) and \(A\), where \(P\) consists of the elements of the strictly lower triangular part, containing \(n(n-1)/2\) real parameters, and \(A\) consists of the elements of the diagonal, containing \(n\) real parameters. Our aim is to approximate the inverse of the LNE metric, and further approximate the gradient in Equation (27).

Figure 4: Flow chart of the strong approximation of \(g^{-1}(\mathbf{w})\). The new entries \(\tilde{P}\) and \(\tilde{A}\) produced by the neural network form a matrix \(\mathbf{G}\), which is multiplied by the metric \(g(\mathbf{w})\). As the loss function defined by Equation (28) decreases, the matrix \(\mathbf{G}\) can be used to approximate the inverse of the metric \(g(\mathbf{w})\).

Based on the universal approximation theorem (Cybenko, 1989; Hornik, 1991), a continuous function on compact subsets can be approximated by a neural network with a single hidden layer and a finite number of neurons (Jejjala et al., 2020).
Inspired by this work, we also introduce a multi-layer perceptron (MLP) neural network, as shown in Figure 4, to minimize the loss function \[\tilde{L}=\|\mathbf{I}-g(\mathbf{w})\mathbf{G}\|^{2}. \tag{28}\] Hence, the matrix \(\mathbf{G}\) can be used to strongly approximate the inverse of the metric \(g(\mathbf{w})\).

### Weak Approximation

In this subsection, we also present a weak approximation for the inverse of the LNE metric, which admits an efficient calculation.

**Definition 17**: _(Bhatia, 2013) For \(\mathbf{A}\in\mathcal{R}^{n\times n}\), \(\mathbf{A}\) is called **diagonally dominant** when it satisfies_

\[\big{|}a_{ii}\big{|}>\sum_{j=1,j\neq i}^{n}\big{|}a_{ij}\big{|},\ \ \ i=1,2,\ldots,n.\]

**Definition 18**: _(Bhatia, 2013) If \(\mathbf{A}\in\mathcal{R}^{n\times n}\) is a diagonally dominant matrix, then \(\mathbf{A}\) is a nonsingular matrix, i.e., \(\mathbf{A}^{-1}\) exists._

Considering the properties of the LNE metric, we can easily make \(g(\mathbf{w})\) satisfy Definition 17 by adjusting the parameter \(\tau\). Furthermore, the existence of the inverse of the LNE metric can then be guaranteed on the basis of Definition 18. We thus place the stronger requirement that the LNE metric be diagonally dominant. According to Corollary 19, the weak approximation of the gradient in the LNE manifold can then be calculated, a nice feature that facilitates the fast computation of the inverse.
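Before stating Corollary 19, here is a minimal PyTorch sketch of the strong approximation (Equation (28)), where a small MLP maps the entries \((P,A)\) of \(g(\mathbf{w})\) to new entries \((\tilde{P},\tilde{A})\) that assemble the symmetric matrix \(\mathbf{G}\approx g^{-1}(\mathbf{w})\), as in Figure 4. The layer sizes, \(\tau\), optimizer and iteration count are illustrative assumptions, not prescribed settings.

```python
import torch

def lne_metric(w, tau=0.1):
    # LNE metric of Equation (16): g(w) = I - tanh(tau*w) tanh(tau*w)^T.
    rho = torch.tanh(tau * w).reshape(-1, 1)
    return torch.eye(w.numel()) - rho @ rho.T

n = 8
g = lne_metric(torch.randn(n))
tril = torch.tril_indices(n, n)
m = n * (n + 1) // 2
g_entries = g[tril[0], tril[1]]        # entries P (lower triangle) and A (diagonal)

# The MLP emits m = n(n+1)/2 free entries (P-tilde and A-tilde) of a symmetric G.
mlp = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, m))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for _ in range(500):
    G = torch.zeros(n, n)
    G[tril[0], tril[1]] = mlp(g_entries)           # fill lower triangle and diagonal
    G = G + G.T - torch.diag(torch.diag(G))        # symmetrize
    loss = torch.norm(torch.eye(n) - g @ G) ** 2   # Equation (28)
    opt.zero_grad(); loss.backward(); opt.step()
```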
**Corollary 19**: _Based on Definition 17 and Definition 18, the weak approximation of the gradient in the LNE manifold is defined as_ \[\tilde{\partial}_{\mathbf{w}}=\left[\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]^{-1}\partial_{\mathbf{w}}\approx\left[\delta+\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]\partial_{\mathbf{w}} \tag{29}\] _if the LNE metric is diagonally dominant._

**Proof** Considering the inverse of the LNE metric, due to the diagonally dominant property in Definition 17 and Definition 18, we can approximate \(\left[\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]^{-1}\) by ignoring the fourth-order small quantity \(\sum O(\rho_{a}\rho_{b}\rho_{c}\rho_{d})\), where \(\rho_{i}\) abbreviates \(\tanh(\tau w_{i})\), i.e., \[\left[\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]\left[\delta+\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}\right]\] \[=\begin{bmatrix}1-\rho_{1}\rho_{1}&-\rho_{1}\rho_{2}&\cdots\\ -\rho_{2}\rho_{1}&1-\rho_{2}\rho_{2}&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix}\begin{bmatrix}1+\rho_{1}\rho_{1}&\rho_{1}\rho_{2}&\cdots\\ \rho_{2}\rho_{1}&1+\rho_{2}\rho_{2}&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix}\] \[=\begin{bmatrix}1-\sum O(\rho_{a}\rho_{b}\rho_{c}\rho_{d})&\rho_{1}\rho_{2}-\rho_{1}\rho_{2}-\sum O(\rho_{a}\rho_{b}\rho_{c}\rho_{d})&\cdots\\ -\rho_{2}\rho_{1}+\rho_{2}\rho_{1}-\sum O(\rho_{a}\rho_{b}\rho_{c}\rho_{d})&1-\sum O(\rho_{a}\rho_{b}\rho_{c}\rho_{d})&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix}\approx\mathbf{I}.\] \(\blacksquare\)

### Training

```
Input:  A minibatch of inputs and targets \((\mathbf{x}=\mathbf{a_{0}},\mathbf{y})\), \(\mathbf{\xi}\) mapped to \((\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{l})\),
        \(\mathbf{\hat{\xi}}\) mapped to \(\left(\mathbf{\hat{W}}_{1},\mathbf{\hat{W}}_{2},\ldots,\mathbf{\hat{W}}_{l}\right)\), a nonlinear function \(f\),
        a constant factor \(\tau\) and a learning rate \(\eta\).
Output: The updated discretized parameters \(\mathbf{\hat{\xi}}\).
 1: {Forward propagation}
 2: for \(i=1;i\leq l;i++\) do
 3:   Discretize \(\mathbf{\hat{W}}_{i}=Q(\mathbf{W}_{i})\);
 4:   Compute \(\mathbf{s}_{i}=\mathbf{\hat{W}}_{i}\mathbf{\hat{a}}_{i-1}\);
 5:   Discretize \(\mathbf{\hat{a}}_{i}=Q\left(f\odot\mathbf{s}_{i}\right)\);
 6: end for
 7: {Loss derivative}
 8: Compute \(L=L(\mathbf{y},\mathbf{z})\);
 9: Compute \(\partial_{\mathbf{a}_{l}}L=\frac{\partial L(\mathbf{y},\mathbf{z})}{\partial\mathbf{z}}|_{\mathbf{z}=\hat{\mathbf{a}}_{l}}\);
10: {Backward propagation}
11: for \(i=l;i\geq 1;i--\) do
12:   Compute \(\partial_{\mathbf{s}_{i}}L=\partial_{\mathbf{a}_{i}}L\odot f^{\prime}(\mathbf{s}_{i})\);
13:   Compute \(\partial_{\hat{\mathbf{W}}_{i}}L=(\nabla_{\mathbf{s}_{i}}L)\,\mathbf{\hat{a}}_{i-1}^{\top}\);
14:   Compute \(\tilde{\partial}_{\mathbf{W}_{i}}L=g^{-1}(\mathbf{W}_{i})\partial_{\hat{\mathbf{W}}_{i}}L\) based on Equation (27);
15:   Compute \(\partial_{\hat{\mathbf{a}}_{i-1}}L=\hat{\mathbf{W}}_{i}^{\top}\left(\partial_{\mathbf{s}_{i}}L\right)\);
16: end for
17: {The parameters update}
18: for \(i=l;i\geq 1;i--\) do
19:   Update \(\mathbf{W}_{i}\leftarrow\mathbf{W}_{i}-\eta\cdot\tilde{\partial}_{\mathbf{W}_{i}}L\);
20: end for
21: Update \(\mathbf{\hat{\xi}}=\left[\operatorname{vec}\left(\mathbf{\hat{W}}_{1}\right)^{\top},\operatorname{vec}\left(\mathbf{\hat{W}}_{2}\right)^{\top},\ldots,\operatorname{vec}\left(\mathbf{\hat{W}}_{l}\right)^{\top}\right]^{\top}\);
```

**Algorithm 1** An algorithm for the training of DNNs in the LNE manifold. We represent the gradient in the LNE manifold as \(\tilde{\partial}\).
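As a sketch of how Line 14 of Algorithm 1 can be computed with the weak approximation of Corollary 19, note that the approximated metric inverse is the identity plus a rank-one term, so the product can be formed without ever materializing the matrix. The helper below is hypothetical, with an illustrative \(\tau\); it is not part of any released code.

```python
import torch

def lne_gradient(w, grad_qw, tau=0.1):
    """Line 14 with the weak approximation (Corollary 19):
    dL/dw ~= [I + tanh(tau*w) tanh(tau*w)^T] @ dL/dQ(w)."""
    rho = torch.tanh(tau * w.flatten())
    v = grad_qw.flatten()
    # (I + rho rho^T) v = v + rho * (rho . v): no matrix inverse is needed.
    return (v + rho * torch.dot(rho, v)).view_as(w)

# Usage in the update of Line 19: W_i <- W_i - eta * lne_gradient(W_i, dL_dQW_i)
```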
For brevity, we ignore the normalization operation (Ioffe and Szegedy, 2015; Ba et al., 2016). Consequently, based on the previous work (Courbariaux et al., 2016), we give the practical algorithm of DNNs in the LNE manifold. As shown in Algorithm 1, this algorithm is similar to the general algorithm of DNNs, except for Line 14. Recall that, in Figure 1, general DNNs use STE to simply copy the gradient, i.e., \(\tilde{\partial}_{\mathbf{W}_{i}}L=\partial_{\hat{\mathbf{W}}_{i}}L\), whereas we match the gradient by introducing the LNE metric. Furthermore, we can practically calculate this gradient in Line 14 via the strong approximation or the weak approximation described above.

## 6 Ricci Flow Discretized Neural Networks

In this section, we propose Ricci flow discretized neural networks (RF-DNNs). When the Ricci flow is introduced, it implies that the background of the DNNs discussed here is the LNE manifold. Our aim is to provide a practical solution for the metric perturbation and further for the problem of gradient mismatch. Therefore, we focus on how to calculate the discrete Ricci flow in practice rather than remaining at the level of theoretical analysis. To relate the Ricci flow to neural networks, we need to discretize the Ricci flow and choose a suitable coordinate system. For the left-hand side of the Ricci flow, we have established the connection between the LNE metric and neural networks in Section 3. Note that we use the form of the LNE metric in the calculation, and such metrics at this point carry perturbations. For the right-hand side of the Ricci flow, we need to calculate the Ricci curvature tensor with the coordinate system, and this coordinate system is the key to linking the Ricci curvature and neural networks. Specifically, we define a method for calculating the Ricci curvature in such a way that the selection of coordinate systems is related to the input transformations, which implies that the Ricci curvature in neural networks reflects the effect of different input transformations on the parameters.

### Ricci Curvature in Neural Networks

Now we consider the Ricci curvature tensor for the Riemannian metric \(g\). According to Appendix A, its coordinate form can be expressed as \[-2\operatorname{Ric}(g)=-2R^{i}_{ikj}=2R^{i}_{kij}=g^{ip}\left(\partial_{i}\partial_{k}g_{pj}-\partial_{i}\partial_{j}g_{pk}+\partial_{p}\partial_{j}g_{ik}+\partial_{p}\partial_{k}g_{ij}\right). \tag{30}\]

In order to relate the Ricci curvature to neural networks, we define a method for calculating the Ricci curvature in such a way that the selection of coordinate systems is related to the input transformations. When the Ricci curvature is equal to zero, different input transformations will not cause variations of the parameters. Inspired by the previous work (Kaul and Lall, 2019), we can treat the terms \(\partial_{i}\) and \(\partial_{p}\) as variations for the translation and rotation of each input, respectively. In general, since data augmentation does not involve rotation in real-world applications such as image classification tasks (He et al., 2016; Shorten and Khoshgoftaar, 2019), we consider the translation instead of the rotation by discarding the index \(p\), i.e., \(\partial_{p}(\partial_{j}g_{ik}+\partial_{k}g_{ij})=0\), for the fairness of the ablation studies. When considering only one of the translation and rotation, \(g^{ip}\) degenerates into \(\delta^{ip}\) (the identity matrix).
Subsequently, \(\partial_{i}\partial_{k}g\) and \(\partial_{i}\partial_{j}g\) can be treated as variations for the row and column translations of the input data w.r.t. the metric \(g\), respectively. Further, the Ricci curvature can be rewritten as \[-2\operatorname{Ric}(g)=\partial_{i}\partial_{k}g_{pj}-\partial_{i}\partial_{j}g_{pk}. \tag{31}\]

**Remark 20**: _The choice of \(i\) and \(p\) is arbitrary (including \(k\) and \(j\)), and can even be other coordinate representations. Here, we just give it a specific geometric meaning by considering the characteristic of the image classification task._

Figure 5: By feeding the original image into the neural network and performing a forward and backward pass on the linear layer to update the weights \(\mathbf{w}\), we can construct the metric structure \(g(\mathbf{w})\) based on Section 5.1. In addition, we first apply 4 different small translation transformations (\(k_{1}\), \(k_{2}\), \(j_{1}\), and \(j_{2}\)) to the original image, and then input them into the neural network. By sequentially performing a forward and backward pass, we can obtain four metric structures (\(g|_{k_{1}}\), \(g|_{k_{2}}\), \(g|_{j_{1}}\), and \(g|_{j_{2}}\)) under the corresponding translations. Combined with these metric structures, we can present the Ricci curvature structure \(\operatorname{Ric}(g)\).

As shown in Figure 5, with the help of Equation (31), we obtain the Ricci curvature with coordinate systems in terms of the difference equation: \[-2\operatorname{Ric}(g)=\frac{g|_{k_{1}}-g|_{k_{2}}}{k_{1}-k_{2}}-\frac{g|_{j_{1}}-g|_{j_{2}}}{j_{1}-j_{2}} \tag{32}\] where we approximate partial derivatives with difference equations (Kaul and Lall, 2019), i.e., \(\partial_{i}\partial_{k}g=(g|_{k_{1}}-g|_{k_{2}})/(k_{1}-k_{2})\) and \(\partial_{i}\partial_{j}g=(g|_{j_{1}}-g|_{j_{2}})/(j_{1}-j_{2})\), corresponding to the input translation dimensions \(k\) and \(j\), respectively. Note that \(g|_{k_{1}}\), \(g|_{k_{2}}\), \(g|_{j_{1}}\), and \(g|_{j_{2}}\) are four metric structures under the different small translation transformations \(k_{1}\), \(k_{2}\), \(j_{1}\), and \(j_{2}\), respectively. In general, \((k_{1}-k_{2})\) and \((j_{1}-j_{2})\) are translations of fewer than 4 pixels, which is consistent with data augmentation (He et al., 2016).

### Existence of Discrete Ricci Flow in Neural Networks

Recall that we previously considered the Ricci-DeTurck flow instead of the Ricci flow, as the solution of the Ricci flow does not always exist based on Section 2.3. If we can guarantee that the solution of the Ricci flow exists in neural networks, then we can use the Ricci flow to exponentially decay the metric perturbation based on Section 4.4. We have considered the right-hand side of the Ricci flow in Equation (6), i.e., the Ricci curvature tensor. Now we define the equivalent form of the left-hand side of the Ricci flow in terms of the difference equation: \[\frac{\partial}{\partial t}g(t):=g(t+1)-g(t) \tag{33}\] where \(t\in\{0,1,\cdots,T-1\}\) is a uniform partition of the interval \([0,T]\). In the training of neural networks, \(T\) is the total number of iterations. In the limit \(T\rightarrow\infty\), the above formula still holds.
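As an illustration, a minimal sketch of the curvature estimate in Equation (32), given the four metric structures produced by the translated inputs of Figure 5; the default pixel shifts are illustrative values consistent with the sub-4-pixel translations mentioned above.

```python
import torch

def discrete_ricci(g_k1, g_k2, g_j1, g_j2, k1=2.0, k2=0.0, j1=2.0, j2=0.0):
    """Equation (32): -2 Ric(g) = (g|k1 - g|k2)/(k1 - k2) - (g|j1 - g|j2)/(j1 - j2),
    where each g|. is the LNE metric rebuilt after one forward/backward pass on a
    translated input (row shifts k, column shifts j)."""
    neg_two_ric = (g_k1 - g_k2) / (k1 - k2) - (g_j1 - g_j2) / (j1 - j2)
    return neg_two_ric
```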
Combining Equation (32) and Equation (33) in neural networks, we present the expression of the discrete Ricci flow in terms of the difference equation: \[\begin{split} g(t+1)|_{k_{1}}-g(t)|_{k_{1}}&=\frac{g(t)|_{k_{1}}-g(t)|_{k_{2}}}{k_{1}-k_{2}}-\frac{g(t)|_{j_{1}}-g(t)|_{j_{2}}}{j_{1}-j_{2}}\\ g(0)|_{k_{1}}&=\delta-\tanh(\tau\mathbf{w})\tanh(\tau\mathbf{w})^{\top}.\end{split} \tag{34}\] To ensure the existence of the solution of the discrete Ricci flow, we add a regularization term to the loss function to constrain the discrete Ricci flow in DNNs. Following Equation (34), we present the regularization: \[N=\left\|g(t+1)|_{k_{1}}-g(t)|_{k_{1}}-\frac{g(t)|_{k_{1}}-g(t)|_{k_{2}}}{k_{1}-k_{2}}+\frac{g(t)|_{j_{1}}-g(t)|_{j_{2}}}{j_{1}-j_{2}}\right\|_{L^{2}}^{2}, \tag{35}\] where \(g(t)\) is \(\epsilon\)-close to the LNE metric \(g_{0}\) based on Definition 21. In other words, \(g(t)\) is an LNE metric with perturbations.

**Definition 21**: _(Sheridan and Rubinstein, 2006) Let \(g(t)\) be the metrics on the LNE manifold. For \(\epsilon>0\), \(\mathcal{B}_{L^{2}}(g_{0},\epsilon)\) is the \(\epsilon\)-ball with respect to the \(L^{2}\)-norm induced by \(g_{0}\) and centred at \(g_{0}\), where any metric \(g(t)\in\mathcal{B}_{L^{2}}(g_{0},\epsilon)\) is \(\epsilon\)-close to \(g_{0}\) if_

\[(1+\epsilon)^{-1}g_{0}\leq g(t)\leq(1+\epsilon)g_{0}\]

_in the sense of matrices._

By constraining the regularization \(N\) in DNNs, the solution of the discrete Ricci flow exists when \(N\to 0\). Simultaneously, the metric perturbation exponentially converges (\(g(t)\to g_{0}\)) as the discrete Ricci flow evolves.

### Algorithm Design

By applying the constraints of the discrete Ricci flow in the layer-by-layer LNE manifold, we can finally alleviate the problem of gradient mismatch. Since the background is the LNE manifold, we can construct the desired gradient on the basis of Equation (27). Note that, at this time, the metric is time-dependent under the Ricci flow, i.e., \(g_{\mathbf{w}}(t)\). With the indicator function \(\mathbb{I}\) representing the constraints of the discrete Ricci flow, we obtain the gradient under the discrete Ricci flow as follows: \[\frac{\tilde{\partial}L}{\tilde{\partial}\mathbf{w}}=g_{\mathbf{w}}^{-1}(t)\,\frac{\partial L}{\partial Q(\mathbf{w})}\cdot\mathbb{I},\ \ \text{where}\ \ \mathbb{I}:=\left\{\begin{array}{ll}1&\text{ if }\left|N\right|\leq\varepsilon\\ 0&\text{ otherwise}\end{array}\right. \tag{36}\] where \(\varepsilon\) is a small constant. Furthermore, due to the error in estimating the regularization with finite training time, we allow an \(\varepsilon\)-bounded error in the above formulation, which implies that all cases with \(|N|\) less than or equal to \(\varepsilon\) (rather than strictly equal to \(0\)) are regarded as satisfying the discrete Ricci flow. The overall process is shown in Algorithm 2. Compared with Algorithm 1, we add Line 7 and Line 15. In Line 7, we need to calculate the regularization to ensure that the solution of the discrete Ricci flow exists. On the other hand, in Line 15, we calculate the gradient in the LNE manifold under the discrete Ricci flow, unlike Algorithm 1, which just calculates the gradient in the LNE manifold with perturbations. When we apply the Ricci flow, it means that the LNE manifold at this time is dynamic and perturbation-resistant.

**Remark 22**: _In addition to using the discretized weights and activations, DNNs need to save the non-discretized weights and activations for the gradient update. Note that the gradient of a DNN is non-discretized._
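Putting Equations (35) and (36) together, the following minimal sketch computes the Ricci-flow regularization \(N\) and gates the LNE gradient with the indicator \(\mathbb{I}\); the tolerance value and the reuse of the weak approximation of Corollary 19 are assumptions for illustration.

```python
import torch

def ricci_regularizer(g_next_k1, g_k1, g_k2, g_j1, g_j2, k1, k2, j1, j2):
    # Equation (35): squared L2 deviation from the discrete Ricci flow (34).
    residual = (g_next_k1 - g_k1
                - (g_k1 - g_k2) / (k1 - k2)
                + (g_j1 - g_j2) / (j1 - j2))
    return (residual ** 2).sum()

def gated_gradient(w, grad_qw, N, eps=1e-3, tau=0.1):
    # Equation (36): update along the LNE manifold only while the discrete
    # Ricci flow holds up to an eps-bounded error (eps is an illustrative value).
    if abs(float(N)) > eps:              # indicator I = 0: constraint violated
        return torch.zeros_like(w)
    rho = torch.tanh(tau * w.flatten())
    v = grad_qw.flatten()                # dL/dQ(w)
    return (v + rho * torch.dot(rho, v)).view_as(w)  # weak approx. of g^{-1}(t) v
```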
### Complexity Analysis

Based on Algorithm 2, the forward time complexity is about \(\mathcal{O}(n^{2})\), where the time complexities of Line 4 and Line 5 are about \(\mathcal{O}(n^{2})\) and \(\mathcal{O}(n)\), respectively. In the backward pass, the time complexity of Line 13 is about \(\mathcal{O}(n)\). The time complexities of the gradients w.r.t. the weight (Line 15) and the activation (Line 17) are both about \(\mathcal{O}(n^{2})\). Therefore, the backward time complexity is about \(\mathcal{O}(2n^{2})\). For the training process of a neural network, the total complexity is \(\mathcal{O}(n^{2})\). Since the computation of the Ricci curvature involves four different translations of the input data w.r.t. the metric, its time complexity is about \(\mathcal{O}(n^{2})\). In this way, the updated weights are only used to calculate the constraints of the discrete Ricci flow, and the final weights can be obtained by another backward pass. The time complexity of Line 16 is \(\mathcal{O}(n^{2})\) when we use the weak approximation to calculate the gradient. Thus, the total complexity of our RF-DNN is still \(\mathcal{O}(n^{2})\). Hence, the complexity of the RF-DNN is consistent with that of a general neural network.

## 7 Experiments

In this section, we design ablation studies to compare our RF-DNN trained from scratch with other STE methods. Furthermore, given a pre-trained model, we evaluate the performance of the RF-DNN in comparison with several representative training-based methods on classification benchmark datasets. All the experiments are implemented in Python and conducted with PyTorch (Paszke et al., 2019) on a workstation with an Intel(R) Xeon(R) Silver 4214 CPU (2.20 GHz), a GeForce GTX 2080Ti GPU and 128GB RAM.

Footnote 9: In order to calculate the gradient conveniently, we use the weak approximation of the inverse of the LNE metric in all experiments.

### Experimental Settings

The two datasets used in our experiments are introduced as follows.

**CIFAR datasets:** The two CIFAR benchmarks (Krizhevsky et al., 2009) consist of natural color images with 32 \(\times\) 32 pixels, each with 50k training and 10k test images; we pick out 5k training images from the training set as a validation set. CIFAR-10 consists of images organized into 10 classes and CIFAR-100 into 100 classes. We adopt a standard data augmentation scheme (random corner cropping and random flipping) that is widely used for these two datasets, and we normalize the images with the channel means and standard deviations in preprocessing.

**ImageNet dataset:** The ImageNet benchmark (Russakovsky et al., 2015) consists of 1.2 million high-resolution natural images, where the validation set contains 50k images. These images are organized into 1000 object categories for training and are resized to 224 \(\times\) 224 pixels before being fed into the network. In the following experiments, we report single-crop evaluation results using top-1 and top-5 accuracies.

We now specify the discrete function, whose composition has a significant influence on the performance and computation of DNNs. Specifically, the discrete function can simplify the calculations, which also vary depending on the discrete values, e.g., fixed-point multiplication, the SHIFT operation (Elhoushi et al., 2019) and the XNOR operation (Rastegari et al., 2016).
We mark \(Q^{1}\) as the 1-bit discrete function: \[Q^{1}(\cdot)=\operatorname{sign}(\cdot)=\{-1,+1\}. \tag{37}\] The \(k\)-bit (\(k>1\)) discrete function is marked as \(Q^{k}\): \[Q^{k>1}(\cdot)=\frac{2}{2^{k}-1}\operatorname{round}\left[(2^{k}-1)\left(\frac{\cdot}{2\max\lvert\cdot\rvert}+\frac{1}{2}\right)\right]-1 \tag{38}\] where \(\operatorname{round}[\cdot]\) is the rounding function and \(\max\lvert\cdot\rvert\) means taking the absolute value of the input first and then finding its maximum. In this way, a DNN using the discrete function \(Q^{1}(\cdot)\) can be calculated with the XNOR operation, and one using the discrete function \(Q^{k>1}(\cdot)\) can be calculated with fixed-point multiplication.

### Ablation Studies with STE Methods

In order to illustrate the superiority of the RF-DNN against the problem of gradient mismatch, we compare our RF-DNN with three other methods by training from scratch. In Table 1, Table 2 and Table 3, we mark \(\{-1,+1\}\) in '**Forward**', which indicates that the weights are binarized using Equation (37), i.e., \(-1\) or \(+1\), in the forward pass of DNNs. In the backward pass, the methods (Dorefa (Zhou et al., 2016), MultiFCG (Chen et al., 2019) and FCGrad (Chen et al., 2019)) use different approximated gradients to update the weights. Here, we apply different ResNet models (He et al., 2016) for the ablation studies. Batch normalization with a batch size of 128 is used in the learning strategy, and Nesterov momentum of 0.9 (Dozat, 2016) is used in SGD optimization. For CIFAR, we set the total number of training epochs to 200 and use a weight decay of 0.0005, where the learning rate starts from 0.1 and is divided by 10 at epochs 80, 150, and 190. For ImageNet, we set the total number of training epochs to 100 and set the learning rate of each parameter group using a cosine annealing schedule with a weight decay of 0.0001. All experiments are conducted 5 times, and the statistics of the last 10/5 epochs' test accuracies are reported for a fair comparison. Hence, we evaluate the accuracy performance in terms of (mean \(\pm\) std). Note that we perform standard data augmentation and pre-processing on the CIFAR and ImageNet datasets. In Table 1, Table 2 and Table 3, we use the same discrete function \(Q^{1}(\cdot)\), parameter settings and optimizer in the forward pass for fairness; the only difference is the gradient in the backward propagation. The performance across various models and datasets shows that our RF-DNN achieves a significant improvement over the other STE methods. The average results over multiple runs are better than those of the other methods, which may benefit from the alleviation of the gradient mismatch, allowing the loss function of DNNs to descend more fully. In addition, the small variances show that our training method is relatively stable, which also confirms our point of view.

### Convergence and Stability Analysis

Since the standard deviations reflect the convergence and stability of training to a certain extent, we visualize Table 1 in Figure 6(a). Intuitively, compared to Dorefa and MultiFCG, our proposed RF-DNN better alleviates the perturbations caused by the gradient mismatch and achieves more stable performance. Furthermore, we also present the accuracy performance of the RF-DNN with different bit-width weight representations in Figure 6(b), and we see fairly consistent stability across different bit widths and backbone models.
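These bit-width representations come from the discrete functions of Equations (37) and (38); a minimal sketch of both follows (the sign convention at zero is an implementation detail left to torch.sign):

```python
import torch

def Q1(x):
    # Equation (37): 1-bit discretization with values in {-1, +1}.
    return torch.sign(x)

def Qk(x, k):
    # Equation (38): k-bit (k > 1) discretization onto a uniform grid in [-1, 1].
    levels = 2 ** k - 1
    x_unit = x / (2 * x.abs().max()) + 0.5            # map into [0, 1]
    return 2.0 / levels * torch.round(levels * x_unit) - 1.0

x = torch.randn(5)
print(Q1(x))        # entries in {-1, +1}
print(Qk(x, 2))     # entries in {-1, -1/3, 1/3, 1}
```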
As shown in Figure 7, the RF-DNN achieves higher accuracies than Dorefa on the CIFAR100 dataset, i.e., 1.25% higher on the training set with ResNet56, 1.85% higher on the test set with ResNet56, 1.97% higher on the training set with ResNet110, and 1.05% higher on the test set with ResNet110. In addition, it can be seen from the fluctuations of the test curves in Figure 7 that RF-DNN improves tremendously over Dorefa in training stability. On the training curves, our method significantly outperforms Dorefa; of course, this needs to be read together with the test curves to establish the superiority of RF-DNN. On the test curves, the accuracy of our method is always higher than that of Dorefa, and the corresponding volatility is also reduced. The experimental results verify that our theoretical framework is an effective solution against the gradient mismatch, further improving the training performance of DNNs. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Network** & **Forward** & **Backward** & **Test Acc (\%)** & **FP Acc (\%)** \\ \hline \multirow{3}{*}{ResNet20} & \multirow{3}{*}{\{\(-\)1,+1\}} & Dorefa & 88.28\(\pm\)0.81 & \multirow{3}{*}{91.50} \\ & & MultiFCG & 88.94\(\pm\)0.46 & \\ & & RF-DNN & **89.83\(\pm\)0.23** & \\ \hline \multirow{3}{*}{ResNet32} & \multirow{3}{*}{\{\(-\)1,+1\}} & Dorefa & 90.23\(\pm\)0.63 & \multirow{3}{*}{92.13} \\ & & MultiFCG & 89.63\(\pm\)0.38 & \\ & & RF-DNN & **90.75\(\pm\)0.19** & \\ \hline \multirow{3}{*}{ResNet44} & \multirow{3}{*}{\{\(-\)1,+1\}} & Dorefa & 90.71\(\pm\)0.58 & \multirow{3}{*}{93.56} \\ & & MultiFCG & 90.54\(\pm\)0.21 & \\ & & RF-DNN & **91.63\(\pm\)0.11** & \\ \hline \hline \end{tabular} \end{table} Table 1: The experimental results on CIFAR10 with ResNet20/32/44. The accuracy of the full-precision (FP) baseline is reported by (Chen et al., 2019). \begin{table} \begin{tabular}{c c c c c} \hline \hline **Network** & **Forward** & **Backward** & **Test Acc (\%)** & **FP Acc (\%)** \\ \hline \multirow{4}{*}{ResNet56} & \multirow{4}{*}{\{\(-\)1,+1\}} & Dorefa & 66.71\(\pm\)2.32 & \multirow{4}{*}{71.22} \\ & & MultiFCG & 66.58\(\pm\)0.37 & \\ & & FCGrad & 66.56\(\pm\)0.35 & \\ & & RF-DNN & **68.56\(\pm\)0.32** & \\ \hline \multirow{4}{*}{ResNet110} & \multirow{4}{*}{\{\(-\)1,+1\}} & Dorefa & 68.15\(\pm\)0.50 & \multirow{4}{*}{72.54} \\ & & MultiFCG & 68.27\(\pm\)0.14 & \\ & & FCGrad & 68.74\(\pm\)0.36 & \\ & & RF-DNN & **69.20\(\pm\)0.28** & \\ \hline \hline \end{tabular} \end{table} Table 2: The experimental results on CIFAR100 with ResNet56/110. The accuracy of the full-precision (FP) baseline is reported by (Chen et al., 2019). Figure 6: Accuracy performance (mean \(\pm\) std) for ResNet20/32/44 on CIFAR10. The line and bar represent the mean and standard deviation of the different random-seed results, respectively. (a) We compare our RF-DNN with Dorefa and MultiFCG via the 1-bit weight representation, which is also the visualization of Table 1. (b) We present our RF-DNN via different bit-width weight representations. Note that a higher mean and a lower deviation always imply better convergence and stability. Figure 7: Training and test curves of ResNet56/110 on CIFAR100, compared between Dorefa and RF-DNN. Intuitively, RF-DNN has a more stable training performance than Dorefa.
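The CIFAR training recipe above (SGD with Nesterov momentum 0.9, weight decay 0.0005, and a learning rate of 0.1 divided by 10 at epochs 80, 150, and 190) maps directly onto standard PyTorch utilities. A hedged sketch follows, with `model` standing in for any of the ResNet backbones; it reflects the reported hyper-parameters, not the authors' training script.

```python
import torch

model = torch.nn.Linear(8, 2)  # stand-in for a ResNet backbone

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,               # initial learning rate
    momentum=0.9,
    nesterov=True,        # Nesterov momentum (Dozat, 2016)
    weight_decay=5e-4,    # CIFAR weight decay
)

# Divide the learning rate by 10 at epochs 80, 150, and 190 over 200 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 150, 190], gamma=0.1
)

# For ImageNet, the paper instead uses cosine annealing over 100 epochs:
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
```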
### Comparisons with Training-based Methods In this experiment, based on a full-precision pre-trained model, we compare the RF-DNN with several state-of-the-art DNNs, e.g., DeepShift (Elhoushi et al., 2019), QN (Yang et al., 2019), ADMM (Leng et al., 2018), MetaQuant (Chen et al., 2019), INT8 (Zhu et al., 2020), SR+DR (Gysel et al., 2018), ELQ (Zhou et al., 2018), MD (Ajanthan et al., 2021) and RQ (Louizos et al., 2019), under the same bit width using Equation (37) or Equation (38). Note that **W** and **A** represent the bit widths of weights and activations, respectively, in Table 4. Consequently, the experimental results show that our RF-DNN is able to achieve better performance than other recent state-of-the-art training-based methods, which seems to benefit from our effective solution of the gradient mismatch. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Network** & **Forward** & **Backward** & **Test Top1/Top5 (\%)** & **FP Top1/Top5 (\%)** \\ \hline \multirow{4}{*}{ResNet18} & \multirow{4}{*}{\{\(-\)1,+1\}} & Dorefa & 58.34\(\pm\)2.07/81.47\(\pm\)1.56 & \multirow{4}{*}{69.76/89.08} \\ & & MultiFCG & 59.47\(\pm\)0.02/82.41\(\pm\)0.01 & \\ & & FCGrad & 59.83\(\pm\)0.36/82.67\(\pm\)0.23 & \\ & & RF-DNN & **60.83\(\pm\)0.41/83.54\(\pm\)0.18** & \\ \hline \hline \end{tabular} \end{table} Table 3: The experimental results on ImageNet with ResNet18. The accuracy of the full-precision (FP) baseline is reported by (Chen et al., 2019). ## 8 Conclusion and Future Work In traditional discretized neural networks (DNNs), the weights, as well as the activations, can only take low-precision discrete values. This makes DNNs' memory footprint very light compared to full-precision floating-point networks. However, it also makes their training difficult: to maintain discrete weights, the gradient w.r.t. the discrete weights is generally approximated with the Straight-Through Estimator (STE), which causes the weight update to differ from the gradient w.r.t. continuous weights (_gradient mismatch_). This paper proposes a novel analysis of the gradient mismatch phenomenon through the lens of duality theory. The mismatch is then viewed as a metric perturbation in a Riemannian manifold. In theory, on the basis of information geometry, we construct the LNE manifold for neural networks, thereby forming the background needed to effectively deal with a metric perturbation. By revealing the stability of LNE metrics with the \(L^{2}\)-norm perturbation under the Ricci-DeTurck flow, we subsequently introduce the Ricci flow Discretized Neural Network (RF-DNN) in practice, using the constraints of the discrete Ricci flow in the LNE manifold to alleviate the metric perturbation at an exponential convergence rate, giving rise to an appealing solution for STE in DNNs. Experimentally, our RF-DNN achieves improvements in both the stability and the performance of DNNs. In this paper, information geometry is a very important part of combining the geometric tool (Ricci flow) with neural networks.
Our future research will continue to explore the connection between neural networks and manifolds, and aims to introduce geometric ideas to solve practical problems in machine learning. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**W**} & \multirow{2}{*}{**A**} & \multicolumn{2}{c}{**Top-1**} & \multicolumn{2}{c}{**Top-5**} \\ \cline{4-7} & & & Accuracy & Gap & Accuracy & Gap \\ \hline **AlexNet** (Original) & 32 & 32 & 56.52\% & - & 79.07\% & - \\ \hline RF-DNN (ours) & 6 & 32 & **56.39\%** & \(\mathbf{-0.13\%}\) & **78.78\%** & \(\mathbf{-0.29\%}\) \\ DeepShift (Elhoushi et al., 2019) & 6 & 32 & 54.97\% & \(-1.55\%\) & 78.26\% & \(-0.81\%\) \\ \hline **ResNet18** (Original) & 32 & 32 & 69.76\% & - & 89.08\% & - \\ \hline RF-DNN (ours) & 1 & 32 & **67.05\%** & \(\mathbf{-2.71\%}\) & **88.09\%** & \(\mathbf{-0.99\%}\) \\ MD (Ajanthan et al., 2021) & 1 & 32 & 66.78\% & \(-2.98\%\) & 87.01\% & \(-2.07\%\) \\ ELQ (Zhou et al., 2018) & 1 & 32 & 66.21\% & \(-3.55\%\) & 86.43\% & \(-2.65\%\) \\ ADMM (Leng et al., 2018) & 1 & 32 & 64.80\% & \(-4.96\%\) & 86.20\% & \(-2.88\%\) \\ QN (Yang et al., 2019) & 1 & 32 & 66.50\% & \(-3.26\%\) & 87.30\% & \(-1.78\%\) \\ MetaQuant (Chen et al., 2019) & 1 & 32 & 63.44\% & \(-6.32\%\) & 84.77\% & \(-4.31\%\) \\ RF-DNN (ours) & 4 & 4 & **66.75\%** & \(\mathbf{-3.01\%}\) & **87.02\%** & \(\mathbf{-2.06\%}\) \\ RQ ST (Louizos et al., 2019) & 4 & 4 & 62.46\% & \(-7.30\%\) & 84.78\% & \(-4.30\%\) \\ \hline **ResNet50** (Original) & 32 & 32 & 76.13\% & - & 92.86\% & - \\ \hline RF-DNN (ours) & 8 & 8 & **76.07\%** & \(\mathbf{-0.06\%}\) & **92.87\%** & \(\mathbf{+0.01\%}\) \\ INT8 (Zhu et al., 2020) & 8 & 8 & 75.87\% & \(-0.26\%\) & - & - \\ \hline **MobileNet** (Original) & 32 & 32 & 70.61\% & - & 89.47\% & - \\ \hline RF-DNN (ours) & 5 & 5 & **61.32\%** & \(\mathbf{-9.29\%}\) & **84.08\%** & \(\mathbf{-5.39\%}\) \\ SR+DR (Gysel et al., 2018) & 5 & 5 & 59.39\% & \(-11.22\%\) & 82.35\% & \(-7.12\%\) \\ RQ ST (Louizos et al., 2019) & 5 & 5 & 56.85\% & \(-13.76\%\) & 80.35\% & \(-9.12\%\) \\ RF-DNN (ours) & 8 & 8 & **70.76\%** & \(\mathbf{+0.15\%}\) & **89.54\%** & \(\mathbf{+0.07\%}\) \\ RQ (Louizos et al., 2019) & 8 & 8 & 70.43\% & \(-0.18\%\) & 89.42\% & \(-0.05\%\) \\ \hline \hline \end{tabular} \end{table} Table 4: The classification accuracy results on ImageNet and comparison with other training-based methods, with AlexNet (Krizhevsky et al., 2012), ResNet18, ResNet50 and MobileNet (Howard et al., 2017). Note that the accuracy of the full-precision baseline is reported by (Elhoushi et al., 2019). ## Acknowledgments We thank all reviewers and the editor for their excellent contributions. ## Appendix A Differential Geometry 1.
Riemann curvature tensor (Rm) is a (1,3)-tensor defined for a 1-form \(\omega\): \[R^{l}_{ijk}\omega_{l}=\nabla_{i}\nabla_{j}\omega_{k}-\nabla_{j}\nabla_{i}\omega_ {k}\] where the covariant derivative of \(F\) satisfies \[\nabla_{p}F^{j_{1}\dots j_{l}}_{i_{1}\dots i_{k}}=\partial_{p}F^{j_{1}\dots j_ {l}}_{i_{1}\dots i_{k}}+\sum_{s=1}^{l}F^{j_{1}\dots q\dots j_{l}}_{i_{1}\dots i _{k}}\Gamma^{j_{s}}_{pq}-\sum_{s=1}^{k}F^{j_{1}\dots j_{l}}_{i_{1}\dots q\dots i _{k}}\Gamma^{q}_{pi}.\] In particular, coordinate form of the Riemann curvature tensor is: \[R^{l}_{ijk}=\partial_{i}\Gamma^{l}_{jk}-\partial_{j}\Gamma^{l}_{ik}+\Gamma^{p }_{jk}\Gamma^{l}_{ip}-\Gamma^{p}_{ik}\Gamma^{l}_{jp}.\] 2. Christoffel symbol in terms of an ordinary derivative operator is: \[\Gamma^{k}_{ij}=\frac{1}{2}g^{kl}(\partial_{i}g_{jl}+\partial_{j}g_{il}- \partial_{l}g_{ij}).\] 3. Ricci curvature tensor (Ric) is a (0,2)-tensor: \[R_{ij}=R^{p}_{pij}.\] 4. Scalar curvature is the trace of the Ricci curvature tensor: \[R=g^{ij}R_{ij}.\] 5. Lie derivative of \(F\) in the direction \(\frac{d\varphi(t)}{dt}\): \[\mathcal{L}_{\frac{d\varphi(t)}{dt}}F=\left(\frac{d}{dt}\varphi^{*}(t)F\right) _{t=0}\] where \(\varphi(t):\mathcal{M}\rightarrow\mathcal{M}\) for \(t\in(-\epsilon,\epsilon)\) is a time-dependent diffeomorphism of \(\mathcal{M}\) to \(\mathcal{M}\). ## Appendix B Notation For clarity of definitions in this paper, we list the important notations as shown in Table 5. ## Appendix C Proof for the Ricci Flow ### Proof for Lemma 5 **Lemma 23**: _The linearization of the Ricci curvature tensor is given by_ \[\mathcal{D}[\mathrm{Ric}](h)_{ij}=-\frac{1}{2}g^{pq}(\nabla_{p}\nabla_{q}h_{ ij}+\nabla_{i}\nabla_{j}h_{pq}-\nabla_{q}\nabla_{i}h_{jp}-\nabla_{q}\nabla_{j}h_{ip}).\] Based on Appendix A, we have \[\nabla_{q}\nabla_{i}h_{jp}=\nabla_{i}\nabla_{q}h_{jp}-R^{r}_{qij}h_{rp}-R^{r} _{qip}h_{jm}.\] Combining with Lemma 23, we can obtain the deformation equation because of \(\nabla g=0\), \[\mathcal{D}[-2\text{Ric}](h)_{ij}= g^{pq}\nabla_{p}\nabla_{q}h_{ij}+\nabla_{i}\left(\frac{1}{2} \nabla_{j}h_{pq}-\nabla_{q}h_{jp}\right)+\nabla_{j}\left(\frac{1}{2}\nabla_{i}h _{pq}-\nabla_{q}h_{ip}\right)+O(h_{ij})\] \[= g^{pq}\nabla_{p}\nabla_{q}h_{ij}+\nabla_{i}V_{j}+\nabla_{j}V_{i} +O(h_{ij}).\] \begin{table} \begin{tabular}{|p{113.8pt} p{113.8pt} p{113.8pt}|p{113.8pt}|} \hline \(\boldsymbol{W}_{i}\): & weight matrix for the \(i\)-th layer & \(\boldsymbol{\hat{W}}_{i}\): & discretized weight matrix for the \(i\)-th layer \\ \hline \(\boldsymbol{w}\): & vectorized weights in each layer & \(\boldsymbol{\hat{w}}\): & discretized vectorized weights in each layer \\ \hline \(\boldsymbol{a}_{i}\): & activation vector for the \(i\)-th layer & \(\boldsymbol{\hat{a}}_{i}\): & discretized activation vector for the \(i\)-th layer \\ \hline \(\boldsymbol{\xi}\): & parameter vector & \(\boldsymbol{\bar{\xi}}\): & discretized parameter vector \\ \hline \(Q^{1}\): & 1-bit discrete function & \(Q^{k>1}\): & k-bit discrete function (over 1-bit) \\ \hline \(\delta\): & Euclidean metric (identity matrix) & \(\Phi\): & convex function \\ \hline \(g_{0}\): & LNE metric under Ricci flow & \(\bar{g}_{0}\): & LNE metric under Ricci-DeTurck flow \\ \hline \(g\) or \(g(t)\): & the metrics under Ricci flow & \(\bar{g}\) or \(\bar{g}(t)\): & the metrics under Ricci-DeTurck flow \\ \hline \(g(0)\): & initial metric under Ricci flow & \(\bar{g}(0)\): & initial metric under Ricci-DeTurck flow \\ \hline \(d(0)\): & the initial perturbation & \(d(t)\): & the time-evolving 
perturbation \\ \hline \(D\): & divergence & \(L\) or \(\bar{L}\): & loss function \\ \hline \(L_{g_{0}}\): & Lichnerowicz operator & \(L^{2}\) or \(L^{\infty}\): & norm \\ \hline \(\partial\): & partial derivative & \(\nabla\): & covariant derivative \\ \hline \(\mathcal{L}\): & Lie derivative & \(\Delta_{g_{0}}\): & the Laplacian \\ \hline Rm: & Riemann curvature tensor & \(f\): & nonlinear function \\ \hline Ric: & Ricci curvature tensor & \(\mathcal{D}[\text{Ric}]\): & the linearization of the Ricci curvature tensor \\ \hline \(\varphi^{*}\): & pullback & \(\phi_{*}\): & pushforward \\ \hline \(B(x,r)\): & the ball with a radius \(r\) and a point \(x\in\mathcal{M}\) & \(\mathcal{B}_{L^{2}}(\bar{g}_{0},\epsilon)\): & the \(\epsilon\)-ball with respect to the \(L^{2}\)-norm induced by \(\bar{g}_{0}\) and centred at \(\bar{g}_{0}\) \\ \hline \end{tabular} \end{table} Table 5: Definitions of notations ### Description of the DeTurck Trick Based on the chain rule for the Lie derivative in Appendix A, we can calculate \[\frac{\partial}{\partial t}g(t) =\frac{\partial\left(\varphi^{*}(t)\bar{g}(t)\right)}{\partial t}\] \[=\left(\frac{\partial\left(\varphi^{*}(t+\tau)\bar{g}(t+\tau) \right)}{\partial\tau}\right)_{\tau=0}\] \[=\left(\varphi^{*}(t)\frac{\partial\bar{g}(t+\tau)}{\partial \tau}\right)_{\tau=0}+\left(\frac{\partial\left(\varphi^{*}(t+\tau)\bar{g}(t) \right)}{\partial\tau}\right)_{\tau=0}\] \[=\varphi^{*}(t)\frac{\partial}{\partial t}\bar{g}(t)+\varphi^{*} (t)\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t)\] where \(\frac{\partial\varphi(t)}{\partial t}\) is equal to \(V(t)\)(Sheridan and Rubinstein, 2006). With the help of Equation (6), we have the following expression for the pullback metric \(g(t)\) \[\frac{\partial}{\partial t}g(t)=\varphi^{*}(t)\frac{\partial}{\partial t} \bar{g}(t)+\varphi^{*}(t)\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}} \bar{g}(t)=-2\operatorname{Ric}(\varphi^{*}(t)\bar{g}(t))=-2\varphi^{*}(t) \operatorname{Ric}(\bar{g}(t)). \tag{39}\] The diffeomorphism invariance of the Ricci curvature tensor is used in the last step. The above equation is equivalent to \[\frac{\partial}{\partial t}\bar{g}(t)=-2\operatorname{Ric}(\bar{g}(t))- \mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t).\] Based on Definition 24, we further yield \[\frac{\partial}{\partial t}\bar{g}(t)=-2\operatorname{Ric}(\bar{g}(t))-\nabla _{i}V_{j}-\nabla_{j}V_{i}.\] **Definition 24**: _(Sheridan and Rubinstein, 2006) On a Riemannian manifold \((\mathcal{M},g)\), we have_ \[(\mathcal{L}_{X}g)_{ij}=\nabla_{i}X_{j}+\nabla_{j}X_{i},\] _where \(\nabla\) denotes the Levi-Civita connection of the metric \(g\), for any vector field \(X\)._ ### Curvature Explosion at Singularity In general, we present the behavior of Ricci flow in finite time and show that the evolution of the curvature is close to divergence. The core demonstration is followed with Theorem 28. **Theorem 25**: _(Sheridan and Rubinstein, 2006) Given a smooth Riemannian metric \(g_{0}\) on a closed manifold \(\mathcal{M}\), there exists a maximal time interval \([0,T)\) such that a solution \(g(t)\) of the Ricci flow, with \(g(0)=g_{0}\), exists and is smooth on \([0,T)\), and this solution is unique._ **Theorem 26**: _Let \(\mathcal{M}\) be a closed manifold and \(g(t)\) a smooth time-dependent metric on \(\mathcal{M}\), defined for \(t\in[0,T)\). 
If there exists a constant \(C<\infty\) for all \(x\in\mathcal{M}\) such that_ \[\int_{0}^{T}\left|\frac{\partial}{\partial t}g_{x}(t)\right|_{g(t)}dt\leq C, \tag{40}\] _then the metrics \(g(t)\) converge uniformly as \(t\) approaches \(T\) to a continuous metric \(g(T)\) that is uniformly equivalent to \(g(0)\) and satisfies_ \[e^{-C}g_{x}(0)\leq g_{x}(T)\leq e^{C}g_{x}(0). \tag{41}\] **Proof** Considering any \(x\in\mathcal{M}\), \(t_{0}\in[0,T)\), \(V\in T_{x}\mathcal{M}\), we have \[\left|\log\left(\frac{g_{x}(t_{0})(V,V)}{g_{x}(0)(V,V)}\right)\right| =\left|\int_{0}^{t_{0}}\frac{\partial}{\partial t}\left[\log g_{x} (t)(V,V)\right]dt\right|\] \[=\left|\int_{0}^{t_{0}}\frac{\frac{\partial}{\partial t}g_{x}(t)( V,V)}{g_{x}(t)(V,V)}dt\right|\] \[\leq\int_{0}^{t_{0}}\left|\frac{\partial}{\partial t}g_{x}(t) \left(\frac{V}{|V|_{g(t)}},\frac{V}{|V|_{g(t)}}\right)\right|dt\] \[\leq\int_{0}^{t_{0}}\left|\frac{\partial}{\partial t}g_{x}(t) \right|_{g(t)}dt\] \[\leq C.\] By exponentiating both sides of the above inequality, we have \[e^{-C}g_{x}(0)(V,V)\leq g_{x}(t_{0})(V,V)\leq e^{C}g_{x}(0)(V,V).\] This inequality can be rewritten as \[e^{-C}g_{x}(0)\leq g_{x}(t_{0})(V,V)\leq e^{C}g_{x}(0)(V,V)\] because it holds for any \(V\). Thus, the metrics \(g(t)\) are uniformly equivalent to \(g(0)\). Consequently, we have the well-defined integral: \[g_{x}(T)-g_{x}(0)=\int_{0}^{T}\frac{\partial}{\partial t}g_{x}(t)dt.\] We can show that this integral is well-defined from two perspectives. Firstly, as long as the metrics are smooth, the integral exists. Secondly, the integral is absolutely integrable. Based on the norm inequality induced by \(g(0)\), we can obtain \[\left|g_{x}(T)-g_{x}(t)\right|_{g(0)}\leq\int_{t}^{T}\left|\frac{\partial}{ \partial t}g_{x}(t)\right|_{g(0)}dt.\] For each \(x\in\mathcal{M}\), the above integral will approach zero as \(t\) approaches \(T\). Since \(\mathcal{M}\) is compact, the metrics \(g(t)\) converge uniformly to a continuous metric \(g(T)\) which is uniformly equivalent to \(g(0)\) on \(\mathcal{M}\). Moreover, we can show that \[e^{-C}g_{x}(0)\leq g_{x}(T)\leq e^{C}g_{x}(0).\] **Corollary 27**: _Let \((\mathcal{M},g(t))\) be a solution of the Ricci flow on a closed manifold. If \(|\operatorname{Rm}|_{g(t)}\) is bounded on a finite time \([0,T)\), then \(g(t)\) converges uniformly as \(t\) approaches \(T\) to a continuous metric \(g(T)\) which is uniformly equivalent to \(g(0)\)._ **Proof** The bound on \(|\operatorname{Rm}|_{g(t)}\) implies one on \(|\operatorname{Ric}|_{g(t)}\). Based on Equation (6), we can extend the bound on \(|\frac{\partial}{\partial t}g(t)|_{g(t)}\). Therefore, we obtain an integral of a bounded quantity over a finite interval is also bounded, by Theorem 26. **Theorem 28**: _If \(g_{0}\) is a smooth metric on a compact manifold \(\mathcal{M}\), the Ricci flow with \(g(0)=g_{0}\) has a unique solution \(g(t)\) on a maximal time interval \(t\in[0,T)\). If \(T<\infty\), then_ \[\lim_{t\to T}\left(\sup_{x\in\mathcal{M}}|\operatorname{Rm}_{x}(t)| \right)=\infty. \tag{42}\] **Proof** For a contradiction, we assume that \(|\operatorname{Rm}_{x}(t)|\) is bounded by a constant. It follows from Corollary 27 that the metrics \(g(t)\) converges smoothly to a smooth metric \(g(T)\). Based on Theorem 25, it is possible to find a solution to the Ricci flow on \(t\in[0,T)\), as the smooth metric \(g(T)\) is uniformly equivalent to the initial metric \(g(0)\). 
Hence, we can extend the solution of the Ricci flow beyond the time point \(t=T\), which contradicts the choice of \(T\) as the maximal time for the existence of the Ricci flow on \([0,T)\). In other words, \(|\operatorname{Rm}_{x}(t)|\) is unbounded. As the singular time \(T\) is approached, the Riemann curvature \(|\operatorname{Rm}|_{g(t)}\) no longer converges and tends to blow up. ## Appendix D Proof for All Time Convergence in LNE Manifolds **Definition 29**: _(Deruelle and Kroncke, 2021) A complete LNE \(n\)-manifold \((\mathcal{M}^{n},g_{0})\) is said to be linearly stable if the \(L^{2}\) spectrum of the Lichnerowicz operator \(L_{g_{0}}:=\Delta_{g_{0}}+2\operatorname{Rm}(g_{0})*\) is in \((-\infty,0]\) where \(\Delta_{g_{0}}\) is the Laplacian, when \(L_{g_{0}}\) acting on \(d_{ij}\) satisfies_ \[L_{g_{0}}(d) =\Delta_{g_{0}}d+2\operatorname{Rm}(g_{0})*d \tag{43}\] \[=\Delta_{g_{0}}d+2\operatorname{Rm}(g_{0})_{iklj}d_{mn}g_{0}^{km}g_{0}^{ln}.\] **Definition 30**: _(Deruelle and Kroncke, 2021) An \(n\)-manifold \((\mathcal{M}^{n},g_{0})\) is said to be integrable if a neighbourhood of \(g_{0}\) has a smooth structure._ We rewrite the Ricci-DeTurck flow (21) as an evolution of the difference \(d(t):=\bar{g}(t)-\bar{g}_{0}\), such that \[\frac{\partial}{\partial t}d(t) =\frac{\partial}{\partial t}\bar{g}(t)=-2\operatorname{Ric}(\bar{g}(t))+2\operatorname{Ric}(\bar{g}_{0})+\mathcal{L}_{\frac{\partial\varphi^{\prime}(t)}{\partial t}}\bar{g}_{0}-\mathcal{L}_{\frac{\partial\varphi(t)}{\partial t}}\bar{g}(t) \tag{44}\] \[=\Delta d(t)+\operatorname{Rm}*d(t)+F_{\bar{g}^{-1}}*\nabla^{\bar{g}_{0}}d(t)*\nabla^{\bar{g}_{0}}d(t)+\nabla^{\bar{g}_{0}}\left(G_{\Gamma(\bar{g}_{0})}*d(t)*\nabla^{\bar{g}_{0}}d(t)\right),\] where the tensors \(F\) and \(G\) depend on \(\bar{g}^{-1}\) and \(\Gamma(\bar{g}_{0})\). Note that \(\bar{g}_{0}\) is the LNE metric which satisfies the above formula. In the following, we denote \(\|\cdot\|_{L^{2}}\) or \(\|\cdot\|_{L^{\infty}}\) as the \(L^{2}\)-norm or \(L^{\infty}\)-norm w.r.t. the LNE metric \(\bar{g}_{0}\), and mark generic constants as \(C\) or \(C_{1}\). **Lemma 31**: _Let \(\bar{g}(t)\) be a Ricci-DeTurck flow on a maximal time interval \(t\in(0,T)\) in an \(L^{2}\)-neighbourhood of \(\bar{g}_{0}\). We have the following estimate:_ \[\left\|\frac{\partial}{\partial t}d_{0}(t)\right\|_{L^{2}}\leq C\left\|\nabla^{\bar{g}_{0}(t)}\left(d(t)-d_{0}(t)\right)\right\|_{L^{2}}^{2}. \tag{45}\] **Proof** By the Hardy inequality (Minerbe, 2009), the proof proceeds exactly as detailed in (Deruelle and Kroncke, 2021). \(\blacksquare\) **Theorem 32**: _Let \((\mathcal{M}^{n},\bar{g}_{0})\) be the LNE \(n\)-manifold which is linearly stable and integrable. Then, there exists a constant \(\alpha_{\bar{g}_{0}}\) satisfying_ \[\left(\Delta d(t)+\mathrm{Rm}(\bar{g}_{0})\ast d(t),d(t)\right)_{L^{2}}\leq-\alpha_{\bar{g}_{0}}\left\|\nabla^{\bar{g}_{0}}d(t)\right\|_{L^{2}}^{2} \tag{46}\] _for all \(\bar{g}(t)\in\tilde{\mathcal{F}}\) whose definition is given in Equation (24)._ **Proof** Similar proofs can be found in (Devyyer, 2014), with some minor modifications. Due to the linear stability requirement of LNE manifolds in Definition 29 and Definition 30, \(-L_{\bar{g}_{0}}\) is non-negative.
Then there exists a positive constant \(\alpha_{\bar{g}_{0}}\) satisfying \[\alpha_{\bar{g}_{0}}\left(-\Delta d(t),d(t)\right)_{L^{2}}\leq\left(-\Delta d (t)-\mathrm{Rm}(\bar{g}_{0})\ast d(t),d(t)\right)_{L^{2}}.\] By Taylor expansion, we repeatedly use elliptic regularity and Sobolev embedding (Pacini, 2010) to obtain the estimate. \(\blacksquare\) **Corollary 33**: _Let \((\mathcal{M}^{n},\bar{g}_{0})\) be the LNE \(n\)-manifold which is integrable. For a Ricci-DeTurck flow \(\bar{g}(t)\) on a maximal time interval \(t\in[0,T]\), if it satisfies \(\|\bar{g}(t)-\bar{g}_{0}\|_{L^{\infty}}<\epsilon\) where \(\epsilon>0\), then there exists a constant \(C<\infty\) for \(t\in[0,T]\) such that the evolution inequality satisfies_ \[\|d(t)-d_{0}(t)\|_{L^{2}}^{2}\geq C\int_{0}^{T}\left\|\nabla^{\bar{g}_{0}(t)} \left(d(t)-d_{0}(t)\right)\right\|_{L^{2}}^{2}\mathrm{d}t. \tag{47}\] **Proof** Based on Equation (44), we know \[\frac{\partial}{\partial t}(d(t)-d_{0})= \Delta(d(t)-d_{0})+\mathrm{Rm}\ast(d(t)-d_{0})\] \[+F_{\bar{g}^{-1}}\ast\nabla^{\bar{g}_{0}}(d(t)-d_{0})\ast\nabla^{ \bar{g}_{0}}(d(t)-d_{0})\] \[+\nabla^{\bar{g}_{0}}\left(G_{\Gamma(\bar{g}_{0})}\ast(d(t)-d_{0} )\ast\nabla^{\bar{g}_{0}}(d(t)-d_{0})\right).\] Followed by Lemma 31 and Theorem 32, we further obtain \[\frac{\partial}{\partial t}\|d(t)-d_{0}\|_{L^{2}}^{2}= 2\left(\Delta(d(t)-d_{0})+\operatorname{Rm}\ast(d(t)-d_{0}),d(t)-d _{0}\right)_{L^{2}}\] \[+\left(F_{\bar{g}^{-1}}\ast\nabla^{\bar{g}_{0}}(d(t)-d_{0})\ast \nabla^{\bar{g}_{0}}(d(t)-d_{0}),d(t)-d_{0}\right)_{L^{2}}\] \[+\left(\nabla^{\bar{g}_{0}}\left(G_{\Gamma(\bar{g}_{0})}\ast(d(t) -d_{0})\ast\nabla^{\bar{g}_{0}}(d(t)-d_{0})\right),d(t)-d_{0}\right)_{L^{2}}\] \[+\left(d(t)-d_{0},\frac{\partial}{\partial t}d_{0}(t)\right)_{L^{ 2}}+\int_{\mathcal{M}}\left(d(t)-d_{0}\right)\ast(d(t)-d_{0})\ast\frac{ \partial}{\partial t}d_{0}(t)\mathrm{d}\mu\] \[\leq -2\alpha_{\bar{g}_{0}}\left\|\nabla^{\bar{g}_{0}}\left(d(t)-d_{0 }\right)\right\|_{L^{2}}^{2}\] \[+C\left\|(d(t)-d_{0})\right\|_{L^{\infty}}\left\|\nabla^{\bar{g} _{0}}\left(d(t)-d_{0}\right)\right\|_{L^{2}}^{2}\] \[+\left\|\frac{\partial}{\partial t}d_{0}(t)\right\|_{L^{2}}\left\| d(t)-d_{0}\right\|_{L^{2}}\] \[\leq \left(-2\alpha_{\bar{g}_{0}}+C\cdot\epsilon\right)\left\|\nabla^{ \bar{g}_{0}}\left(d(t)-d_{0}\right)\right\|_{L^{2}}^{2}.\] Let \(\epsilon\) be a small enough constant that \(-2\alpha_{\bar{g}_{0}}+C\cdot\epsilon<0\) holds, we can find \[\frac{\partial}{\partial t}\|d(t)-d_{0}\|_{L^{2}}^{2}\leq-C\left\|\nabla^{ \bar{g}_{0}}\left(d(t)-d_{0}\right)\right\|_{L^{2}}^{2}\] holds. ## Appendix E Proof for the Information Geometry ### Proof for Theorem 10 The LNE divergence can be defined between two nearby points \(\boldsymbol{\xi}\) and \(\boldsymbol{\xi}^{\prime}\), where the first derivative of the LNE divergence w.r.t. \(\boldsymbol{\xi}^{\prime}\) is: \[\partial_{\boldsymbol{\xi}^{\prime}}D_{LNE}[\boldsymbol{\xi}^{ \prime}:\boldsymbol{\xi}]\] \[=\sum_{i}\left[\partial_{\boldsymbol{\xi}^{\prime}}\frac{1}{\tau^ {2}}\log\cosh(\tau\xi_{i}^{\prime})-\partial_{\boldsymbol{\xi}^{\prime}}\frac {1}{\tau^{2}}\log\cosh(\tau\xi_{i})-\frac{1}{\tau}\partial_{\boldsymbol{\xi}^ {\prime}}(\xi_{i}^{\prime}-\xi_{i})\tanh(\tau\xi_{i})\right]\] \[=\sum_{i}\partial_{\boldsymbol{\xi}^{\prime}}\frac{1}{\tau^{2}} \log\cosh(\tau\xi_{i}^{\prime})-\frac{1}{\tau}\tanh(\tau\boldsymbol{\xi}).\] The second derivative of the LNE divergence w.r.t. 
\(\boldsymbol{\xi}^{\prime}\) is: \[\partial_{\boldsymbol{\xi}^{\prime}}^{2}D_{LNE}[\boldsymbol{\xi}^{\prime}: \boldsymbol{\xi}]=\sum_{i}\partial_{\boldsymbol{\xi}^{\prime}}^{2}\frac{1}{ \tau^{2}}\log\cosh(\tau\xi_{i}^{\prime}).\] We deduce the Taylor expansion of the LNE divergence at \(\mathbf{\xi}^{\prime}=\mathbf{\xi}\): \[D_{LNE}[\mathbf{\xi}^{\prime}:\mathbf{\xi}] \approx D_{LNE}[\mathbf{\xi}:\mathbf{\xi}]+\left(\sum_{i}\partial_{\mathbf{\xi} ^{\prime}}\frac{1}{\tau^{2}}\log\cosh(\tau\xi_{i}^{\prime})-\frac{1}{\tau} \tanh(\tau\mathbf{\xi})\right)^{\top}\bigg{|}_{\mathbf{\xi}^{\prime}=\mathbf{\xi}}d\mathbf{\xi}\] \[+\frac{1}{2}d\mathbf{\xi}^{\top}\left(\sum_{i}\partial_{\mathbf{\xi}^{ \prime}}^{2}\frac{1}{\tau^{2}}\log\cosh(\tau\xi_{i}^{\prime})\right)\bigg{|}_{ \mathbf{\xi}^{\prime}=\mathbf{\xi}}d\mathbf{\xi}\] \[=0+0+\frac{1}{2\tau^{2}}d\mathbf{\xi}^{\top}\partial\left[\frac{ \partial\cosh(\tau\mathbf{\xi})}{\cosh(\tau\mathbf{\xi})}\right]d\mathbf{\xi}\] \[=\frac{1}{2\tau^{2}}d\mathbf{\xi}^{\top}\frac{\partial^{2}\cosh(\tau \mathbf{\xi})\cosh(\tau\mathbf{\xi})-\partial\cosh(\tau\mathbf{\xi})\partial\cosh(\tau\mathbf{ \xi})^{\top}}{\cosh^{2}(\tau\mathbf{\xi})}d\mathbf{\xi}\] \[=\frac{1}{2\tau^{2}}d\mathbf{\xi}^{\top}\left(\frac{\partial^{2}\cosh (\tau\mathbf{\xi})}{\cosh(\tau\mathbf{\xi})}-\tau^{2}\left[\frac{\sinh(\tau\mathbf{\xi})}{ \cosh(\tau\mathbf{\xi})}\right]\left[\frac{\sinh(\tau\mathbf{\xi})}{\cosh(\tau\mathbf{\xi})} \right]^{\top}\right)d\mathbf{\xi}\] \[=\frac{1}{2}\sum_{i,j}\left\{\delta_{ij}-\left[\tanh(\tau\mathbf{\xi} )\tanh(\tau\mathbf{\xi})^{\top}\right]_{ij}d\xi_{i}d\xi_{j}\right\}.\] ### Proof for Lemma 11 We would like to know in which direction minimizes the loss function with the constraints of the LNE divergence, so that we do the minimization: \[d\mathbf{\xi}^{*}=\operatorname*{arg\,min}_{d\mathbf{\xi}\text{ s.t. }D_{LNE}[\mathbf{\xi}:\mathbf{\xi} +d\mathbf{\xi}]=c}L(\mathbf{\xi}+d\mathbf{\xi})\] where \(c\) is the constant. The loss function descends along the manifold with constant speed, regardless the curvature. Furthermore, we can write the minimization in Lagrangian form. Combined with Theorem 10, the LNE divergence can be approximated by its second order Taylor expansion. Approximating \(L(\mathbf{\xi}+d\mathbf{\xi})\) with it first order Taylor expansion, we get: \[d\mathbf{\xi}^{*} =\operatorname*{arg\,min}_{d\mathbf{\xi}}\,L(\mathbf{\xi}+d\mathbf{\xi})+ \lambda\left(D_{LNE}[\mathbf{\xi}:\mathbf{\xi}+d\mathbf{\xi}]-c\right)\] \[\approx\operatorname*{arg\,min}_{d\mathbf{\xi}}\,L(\mathbf{\xi})+ \partial_{\mathbf{\xi}}L(\mathbf{\xi})^{\top}d\mathbf{\xi}+\frac{\lambda}{2}d\mathbf{\xi}^{\top }g(\mathbf{\xi})d\mathbf{\xi}-c\lambda.\] To solve this minimization, we set its derivative w.r.t. \(d\mathbf{\xi}\) to zero: \[0 =\frac{\partial}{\partial d\mathbf{\xi}}L(\mathbf{\xi})+\partial_{\mathbf{\xi} }L(\mathbf{\xi})^{\top}d\mathbf{\xi}+\frac{\lambda}{2}d\mathbf{\xi}^{\top}\left[\delta- \tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]d\mathbf{\xi}-c\lambda\] \[=\partial_{\mathbf{\xi}}L(\mathbf{\xi})+\lambda\left[\delta-\tanh(\tau\bm {\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]d\mathbf{\xi}\] \[d\mathbf{\xi} =-\frac{1}{\lambda}\left[\delta-\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{ \xi})^{\top}\right]^{-1}\partial_{\mathbf{\xi}}L(\mathbf{\xi})\] where a constant factor \(1/\lambda\) can be absorbed into learning rate. 
Therefore, we get the optimal descent direction, i.e., the opposite direction of the gradient, which takes into account the local curvature defined by \(\left[\delta-\tanh(\tau\mathbf{\xi})\tanh(\tau\mathbf{\xi})^{\top}\right]^{-1}\).
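To make this closing update rule concrete, here is a small numerical sketch of one curvature-aware descent step under the LNE metric. It is our own illustration of the final formula of Lemma 11, with the learning rate playing the role of the absorbed factor \(1/\lambda\) and a toy quadratic loss supplying the gradient.

```python
import torch

def lne_descent_step(xi, grad, tau=1.0, lr=0.1):
    """One step of d_xi = -lr * [I - tanh(tau*xi) tanh(tau*xi)^T]^{-1} grad,
    i.e., gradient descent preconditioned by the inverse LNE metric."""
    t = torch.tanh(tau * xi)
    metric = torch.eye(xi.numel()) - torch.outer(t, t)  # delta - tanh tanh^T
    d_xi = -lr * torch.linalg.solve(metric, grad)       # solve instead of inverting
    return xi + d_xi

# Usage with a toy loss L(xi) = 0.5 * ||xi||^2, whose gradient is xi itself.
xi = torch.tensor([0.5, -0.3, 0.8])
xi_next = lne_descent_step(xi, grad=xi.clone())
```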
2306.13474
Efficient Online Processing with Deep Neural Networks
The capabilities and adoption of deep neural networks (DNNs) grow at an exhilarating pace: Vision models accurately classify human actions in videos and identify cancerous tissue in medical scans as precisely as human experts; large language models answer wide-ranging questions, generate code, and write prose, becoming the topic of everyday dinner-table conversations. Even though their uses are exhilarating, the continually increasing model sizes and computational complexities have a dark side. The economic cost and negative environmental externalities of training and serving models are in evident disharmony with financial viability and climate action goals. Instead of pursuing yet another increase in predictive performance, this dissertation is dedicated to the improvement of neural network efficiency. Specifically, a core contribution addresses the efficiency aspects during online inference. Here, the concept of Continual Inference Networks (CINs) is proposed and explored across four publications. CINs extend prior state-of-the-art methods developed for offline processing of spatio-temporal data and reuse their pre-trained weights, improving their online processing efficiency by an order of magnitude. These advances are attained through a bottom-up computational reorganization and judicious architectural modifications. The benefit to online inference is demonstrated by reformulating several widely used network architectures into CINs, including 3D CNNs, ST-GCNs, and Transformer Encoders. An orthogonal contribution tackles the concurrent adaptation and computational acceleration of a large source model into multiple lightweight derived models. Drawing on fusible adapter networks and structured pruning, Structured Pruning Adapters achieve superior predictive accuracy under aggressive pruning using significantly fewer learned weights compared to fine-tuning with pruning.
Lukas Hedegaard
2023-06-23T12:29:44Z
http://arxiv.org/abs/2306.13474v1
# Efficient Online Processing with Deep Neural Networks
2305.03323
Nonparametric model for the equations of state of neutron star from deep neural network
It is of great interest to understand the equation of state (EOS) of the neutron star (NS), whose core includes highly dense matter. However, there are large uncertainties in the theoretical predictions for the EOS of NS. It is useful to develop a new framework, which is flexible enough to consider the systematic error in theoretical predictions and to use them as a best guess at the same time. We employ a deep neural network to perform a non-parametric fit of the EOS of NS using currently available data. In this framework, the Gaussian process is applied to represent the EOSs and the training set data required to close physical solutions. Our model is constructed under the assumption that the true EOS of NS is a perturbation of the relativistic mean-field model prediction. We fit the EOSs of NS using two different example datasets, which can satisfy the latest constraints from the massive neutron stars, NICER, and the gravitational wave of the binary neutron stars. Given our assumptions, we find that a maximum neutron star mass is $2.38^{+0.15}_{-0.13} M_\odot$ or $2.41^{+0.15}_{-0.14} M_\odot$ at $95\%$ confidence level from two different example datasets. It implies that the $1.4 M_\odot$ radius is $12.31^{+0.29}_{-0.31}$ km or $12.30^{+0.35}_{-0.37}$ km. These results are consistent with results from previous studies using similar priors. This demonstrates the recovery of the EOS of NS using a nonparametric model.
Wenjie Zhou, Jinniu Hu, Ying Zhang, Hong Shen
2023-05-05T07:01:14Z
http://arxiv.org/abs/2305.03323v1
# Nonparametric model for the equations of state of neutron star from deep neural network ###### Abstract It is of great interest to understand the equation of state (EOS) of the neutron star (NS), whose core includes highly dense matter. However, there are large uncertainties in the theoretical predictions for the EOS of NS. It is useful to develop a new framework, which is flexible enough to consider the systematic error in theoretical predictions and to use them as a best guess at the same time. We employ a deep neural network to perform a non-parametric fit of the EOS of NS using currently available data. In this framework, the Gaussian process is applied to represent the EOSs and the training set data required to close physical solutions. Our model is constructed under the assumption that the true EOS of NS is a perturbation of the relativistic mean-field model prediction. We fit the EOSs of NS using two different example datasets, which can satisfy the latest constraints from the massive neutron stars, NICER, and the gravitational wave of the binary neutron stars. Given our assumptions, we find that a maximum neutron star mass is \(2.38^{+0.15}_{-0.13}M_{\odot}\) or \(2.41^{+0.15}_{-0.14}M_{\odot}\) at 95% confidence level from two different example datasets. It implies that the \(1.4M_{\odot}\) radius is \(12.31^{+0.29}_{-0.31}\) km or \(12.30^{+0.35}_{-0.37}\) km. These results are consistent with results from previous studies using similar priors. This demonstrates the recovery of the EOS of NS using a nonparametric model. Neutron Star, Deep neural network, Gaussian process regression ## 1 Introduction Neutron stars, remnants of very massive stars at the end of their lifecycle, are one of the most compact objects in the universe, attracting a lot of attention within the fields of astrophysics and nuclear physics (Oertel et al., 2017). Rapid developments in space observation technologies and gravitation-wave detection have proven advantageous to the measurement of neutron star properties, such as mass, radius, and tidal deformability. Three massive neutron stars with masses of around \(2M_{\odot}\), PSR J1614-2230 (Demorest et al., 2010; Fonseca et al., 2016; Arzoumanian et al., 2018), PSR J0348+0432 (Antoniadis et al., 2013), and PSR J0740+6620 (Cromartie et al., 2020) have been discovered in the past decade. The gravitational wave from a binary neutron star merger, the GW170817 event, was first detected by the LIGO and Virgo collaborations in 2017, providing a constraint on the tidal deformability of a neutron star at \(1.4M_{\odot}\) (Abbott et al., 2017, 2018, 2019). Furthermore, simultaneous measurements of the mass and radius of PSR J0030+0451 and PSR J0740+6620 were recently analyzed by the Neutron Star Interior Composition Explorer (NICER) (Riley et al., 2019; Miller et al., 2019; Riley et al., 2021; Miller et al., 2021). These studies have improved our knowledge of neutron stars, providing insight into their interior structure and components. A neutron star can be divided into the atmosphere, outer crust, inner crust, outer core, and inner core regions. Its properties are strongly dependent on the equation of state (EOS) of dense nuclear matter (Lattimer & Prakash, 2000; Glendenning, 2001; Weber, 2005; Lattimer & Prakash, 2007; Baym et al., 2018). In the core region, the density approaches 5-10 times the nuclear saturation density.
Therefore, in this high-density region, the EOS plays an essential role in investigations, yet it cannot be well determined by current terrestrial methodologies. Conventionally, the EOS of neutron stars can be extrapolated by the nuclear many-body approaches, such as the density functional theory (Ring, 1996; Bender et al., 2003; Meng et al., 2006; Stone & Reinhard, 2007; Niksic et al., 2011; Dutra et al., 2012, 2014) and _ab initio_ method (Akmal et al., 1998; Van Dalen et al., 2004; Sammarruca, 2010; Sammarruca et al., 2012; Wei et al., 2019; Wang et al., 2020), which can describe the ground-state properties of finite nuclei and nuclear saturation properties very well. However, there are large uncertainties, when these methods are extended to calculate the high-density EOS. They generate many kinds of neutron star mass-radius relations. Furthermore, the isospin dependence of EOS, i.e., the symmetry energy effect, is strongly correlated to the radii of low-mass neutron stars (Li et al., 2008; Bao et al., 2014; Bao and Shen, 2015; Sun, 2016; Ji et al., 2019; Li et al., 2019; Hu et al., 2020). With present observations of neutron stars, a smaller slope of symmetry energy, \(L\) is preferred. Meanwhile, several exotic hadronic degrees of freedom and/or hadron-quark transitions may appear in the core region of a neutron star because of the phase diagram of strong interaction (Yang and Shen, 2008; Xu et al., 2010; Chen et al., 2013; Orsaria et al., 2014; Wu and Shen, 2017; Ju et al., 2021; Huang et al., 2022). Hence, it is very difficult to generate a unified EOS in a self-consistent theoretical framework. Recently, the data-driven methodologies, such as Bayesian inference (Ozel et al., 2010; Raithel et al., 2017; Steiner et al., 2010; Alvarez-Castillo et al., 2016; Miao et al., 2021), deep neural network (DNN) (Fujimoto et al., 2018, 2020, 2021; Farrell et al., 2022; Ferreira et al., 2022), nonparametric EOS representation (Landry and Essick, 2019; Essick et al., 2020, 2020), support vector machines (Murarka et al., 2022; Ferreira and Providencia, 2021), and so on, have been introduced to generate the possible EOSs using the latest observables of neutron stars. In Bayesian inference, the EOS is parameterized and the corresponding parameters are obtained with a marginal likelihood estimation on the posterior probability in terms of model parameters (Ozel et al., 2010). Fujimoto et al. proposed a scheme that can map the finite observation data of neutron stars onto the EOS with a feed-forward DNN. They present the EOS as a polytropic function with different speeds of sound at distinct density segments (Fujimoto et al., 2018). To avoid the limitations of a parametric EOS, Landry _et al._ developed a nonparametric method to generate the EOS from the observables of gravitation waves by combining the Gaussian process and Bayesian inference methods, where the EOS of the neutron star is represented by the Gaussian process with finite points (Landry and Essick, 2019). The matching between the EOS and neutron star observations was carried out by the Bayesian inference. Han et al. also reconstructed the EOS of a neutron star using another Bayesian nonparametric inference method where the EOS was produced by the neural network with a sigmoid type as the activation function (Han et al., 2021). 
In this work, a new machine learning methodology is proposed to reconstruct a nonparametric model for the EOSs of neutron stars based on the scheme proposed by Fujimoto et al., where the complete EOS is generated by the Gaussian process regression method with finite data points about the pressure-energy relation. A DNN is trained with the constraints of neutron star mass-radius relations, the masses of the heavy neutron stars, and the measurements of NICER. In Section 2, the framework of the Gaussian process regression method and the construction of the DNN are given in detail. The nonparametric EOS model of neutron stars generated by the DNN is shown in Section 3. A summary is presented in Section 4. ## 2 Gaussian Process Regression and Neural Network ### Gaussian Process Regression The Gaussian process (GP) (Huang et al., 2022; Williams and Rasmussen, 2006) is a random process, i.e., a collection of random variables indexed by a set, such that every finite subset of them follows a multivariate normal distribution. If the set of random variables \(\{f(x):x\in\chi\}\) is taken from the GP with the mean function \(m(x)\) and the covariance function \(k(x_{1},x_{2})\), the corresponding random variables \(f(x_{i})\) satisfy the multivariate Gaussian distribution for any finite set, \([x_{1},\cdots,x_{m}]\in\chi\), \[\left[\begin{array}{c}f(x_{1})\\ \vdots\\ f(x_{m})\end{array}\right]\sim\ \ \mathcal{N}\left(\left[\begin{array}{c}m(x_{1})\\ \vdots\\ m(x_{m})\end{array}\right],\left[\begin{array}{ccc}k(x_{1},x_{1})&\cdots&k(x_{1},x_{m})\\ \vdots&\ddots&\vdots\\ k(x_{m},x_{1})&\cdots&k(x_{m},x_{m})\end{array}\right]\right), \tag{1}\] which can be simply expressed as \[f(\cdot)\sim GP(m(\cdot),k(\cdot,\cdot)). \tag{2}\] All linear combinations of random variables in a GP obey the normal distribution, and for each finite-dimensional subset of the continuous index set, the probability density function is the Gaussian measure of the corresponding random variables. Therefore, the GP can be regarded as the extension of the multivariate Gaussian distribution to infinitely many dimensions. Hence, the GP can be applied to solve a normal regression problem, \[y^{(i)}=f(x^{(i)})+\epsilon^{(i)}, \tag{3}\] where \(X\) is defined as the training set and its components \((x^{(1)},...,x^{(m)})\) are independently and identically distributed with unknown distribution. \(\epsilon^{(i)}\) is an independent noise variable, which is also given by a normal distribution with variance \(\sigma^{2}\), \(N(0,\sigma^{2})\). This scheme is called the Gaussian process regression (GPR) method. Usually, it is assumed that \(f\) follows the GP with a mean value of zero for notation simplicity, \[f(\cdot)\sim GP(0,k(\cdot,\cdot)). \tag{4}\] The test set \(X^{*}=(x^{(1*)},...,x^{(m*)})\) is independently and identically distributed in the same way as \(X\), marked as \(X\to X^{*}\). Therefore, the posterior distribution \(p(y^{*}|X,\;X^{*})\) predicted by GPR is a Gaussian distribution over the results, which differs from general linear regression. According to the properties of the GP, a joint distribution of the training and test sets is obtained, \[\left[\begin{array}{c}\vec{f}\\ \vec{f}^{*}\end{array}\right]\Bigg{|}\,X,X^{*}\sim\mathcal{N}\left(\vec{0},\left[\begin{array}{cc}K(X,X)&K(X,X^{*})\\ K(X^{*},X)&K(X^{*},X^{*})\end{array}\right]\right) \tag{5}\] where the matrix elements \(K(X^{A},X^{B})_{i,j}=k(x_{i}^{A},x_{j}^{B})\). In the GP, the covariance function \(k_{ij}\) is also called the kernel function.
The standard choice is the squared-exponential kernel, \[k_{se}(x_{1},x_{2})=\sigma^{2}\exp\left(-\frac{||x_{1}-x_{2}||^{2}}{2l^{2}}\right). \tag{6}\] Meanwhile, their noises obey similar distributions, \[\left[\begin{array}{c}\vec{\epsilon}\\ \vec{\epsilon^{*}}\end{array}\right]\sim\mathcal{N}\left(\vec{0},\left[\begin{array}{cc}\sigma_{wn}^{2}I&\vec{0}\\ \vec{0}^{T}&\sigma_{wn}^{2}I\end{array}\right]\right). \tag{7}\] Here, \(\sigma_{wn}^{2}\) is the hyper-parameter corresponding to white noise, which is different from the signal variance parameter \(\sigma\) in \(k_{se}\). The summation of two independent multivariate Gaussian variables is still a multivariate Gaussian variable, \[\left[\begin{array}{c}\vec{y}\\ \vec{y^{*}}\end{array}\right]\Bigg{|}\,X,X^{*}=\left[\begin{array}{c}\vec{f}\\ \vec{f^{*}}\end{array}\right]+\left[\begin{array}{c}\vec{\epsilon}\\ \vec{\epsilon^{*}}\end{array}\right]\sim\mathcal{N}\left(\vec{0},\left[\begin{array}{cc}K(X,X)+\sigma_{wn}^{2}I&K(X,X^{*})\\ K(X^{*},X)&K(X^{*},X^{*})+\sigma_{wn}^{2}I\end{array}\right]\right). \tag{8}\] Based on the properties of the multivariate Gaussian distribution, the conditional distribution over the unknown \(y^{*}\) is, \[y^{*}|y,\;X,\;X^{*}\sim\mathcal{N}\left(\mu^{*},\Sigma^{*}\right), \tag{9}\] where \[\begin{split}\mu^{*}&=K(X^{*},X)(K(X,X)+\sigma_{wn}^{2}I)^{-1}\vec{y},\\ \Sigma^{*}&=K(X^{*},X^{*})-K(X^{*},X)(K(X,X)+\sigma_{wn}^{2}I)^{-1}K(X,X^{*}).\end{split} \tag{10}\] \(\mu^{*}\) and \(\Sigma^{*}\) are the mean and covariance functions of the probability distribution of our prediction results, respectively. Therefore, given the hyper-parameters \(\sigma\) and \(l\) in the kernel function, a probability distribution describing the whole test set can be obtained by the GPR method. In principle, the mean function should be selected as the "actual data curve". However, it is strongly dependent on the hyper-parameters \(\sigma\) and \(l\), which are determined by maximizing the marginal log-likelihood, defined as \[\log p(\mathbf{y}|\sigma,l)=\log\mathcal{N}(0,K_{yy}(\sigma,l))=-\frac{1}{2}\mathbf{y}^{T}K_{yy}^{-1}\mathbf{y}-\frac{1}{2}\log|K_{yy}|-\frac{N}{2}\log(2\pi), \tag{11}\] where \(K_{yy}=K(X^{*},X^{*})\). Therefore, with a small number of data points, a relatively reasonable EOS curve and its confidence range can be predicted in the framework of the GPR method. The direct matching between the EOS of a neutron star, i.e., the pressure-energy relation, and the observables of a neutron star may generate nonphysical solutions, such as the speed of sound of neutron star matter being less than zero or larger than the speed of light, \(c_{s}<0\) or \(c_{s}>c\), or the energy density becoming less than zero in some extreme conditions. Recently, a new intermediate variable \(\phi\) was proposed to construct the EOS of a neutron star (Lindblom, 2010; Landry & Essick, 2019). \(\phi\) is defined as, \[\phi=\log\left(c^{2}\frac{d\epsilon}{dp}-1\right). \tag{12}\] It avoids the aforementioned weird behaviors: when \(\phi\in\mathbb{R}\), the speed of sound obeys \(0\leq c_{s}^{2}=dp/d\epsilon\leq c^{2}\), which automatically satisfies the physical requirements, and when \(p>0\), \(\epsilon>0\) is kept. Due to the large magnitude of the pressure, \(\phi\) is regarded as a function of \(\log p\) so that it is easier to determine the hyper-parameters. Therefore, Eq.
(12) will be expressed as, \[\phi=\log\left(\partial\log\epsilon\,\frac{e^{\log\epsilon}}{p}\,c^{2}-1\right), \tag{13}\] where \(\partial\log\epsilon=\left.\frac{\partial\log\epsilon}{\partial\log p}\right|_{p=p_{i}}\). In the training set, \(n\) data points (\(\phi_{i},\log p_{i}\)) are randomly chosen. Once the optimal hyper-parameters are obtained by GPR, the continuum \(\phi-\log p\) curve can be generated. The corresponding EOS of the neutron star, \(\epsilon(p)\), is provided by numerically integrating \[\frac{\partial\epsilon}{\partial p}=\frac{1+e^{\phi}}{c^{2}}. \tag{14}\] ### DNN method In the available investigations of the structure of neutron stars, the EOS of neutron star matter is first calculated by either a nuclear many-body method or a parameterized function under the conditions of \(\beta\)-equilibrium and charge neutrality. The EOS is then input to the Tolman-Oppenheimer-Volkoff (TOV) equation (Tolman, 1939; Oppenheimer and Volkoff, 1939), which describes a spherically symmetric and isotropic star in a static gravitational field with general relativity, \[\begin{split}\frac{dp}{dr}=&-\frac{G\epsilon(r)m(r)}{c^{2}r^{2}}\left[1+\frac{p(r)}{\epsilon(r)}\right]\\ &\times\left[1+\frac{4\pi r^{3}p(r)}{m(r)c^{2}}\right]\left[1-\frac{2Gm(r)}{c^{2}r}\right]^{-1}\\ \frac{dm}{dr}=&\frac{4\pi r^{2}\epsilon(r)}{c^{2}},\end{split} \tag{15}\] where \(r\) is the radial coordinate, representing the distance to the center of the star. The functions \(p(r)\) and \(\epsilon(r)\) are the pressure and energy density (i.e., mass density), respectively. We can easily integrate these differential equations starting at \(r=0\), with the initial condition \(p(r=0)=p_{c}\). When the integration reaches the surface of the neutron star, i.e., the radius \(R\) where \(p(r=R)=0\), then \(M=m(R)\) corresponds to the total mass of the neutron star. Therefore, a continuum mass-radius (\(M\)-\(R\)) relation of a neutron star can be generated by the TOV equation. A functional mapping between the EOS space and the \(M\)-\(R\) space is constructed through the above framework, in a process called "TOV mapping". In principle, such a mapping is invertible; thus, there should be a relevant inverse mapping (Lindblom, 1992), where the EOS can be uniquely reconstructed from the observed \(M\)-\(R\) relationship of the neutron star. However, in actuality, the complete \(M\)-\(R\) curve cannot be directly obtained from the observed data due to the discontinuities and uncertainties inherent in neutron star observations (Fujimoto et al., 2021). Therefore, a more likely EOS can only be inferred from the neutron star observations with uncertainties. The DNN is a powerful machine learning method to connect the EOS with observed data, following the idea of Fujimoto et al. (Fujimoto et al., 2021). The neural network (NN) is a representation of the fitting parameters of a function, and deep learning, i.e., the machine learning method using a DNN, is a process of optimizing the parameters contained in the function represented by an NN. Deep learning can be divided into supervised learning and unsupervised learning. The supervised learning that we adopt needs specific inputs and outputs before it can complete the fitting process with the training data (i.e., regression). Compared with general fitting methods, the advantage of deep learning lies in the generalization properties of NNs: it does not need to rely on any prior knowledge about the proper form of the fitting function.
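As a worked illustration of how Equation (15) maps an EOS to an \(M\)-\(R\) point, a minimal TOV integrator sketch is given below. The CGS unit choices, the toy polytropic EOS standing in for the GPR-reconstructed one, and the surface-detection tolerance are our own simplifying assumptions, not choices made in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-8, 2.998e10                 # CGS units (assumption)

def eps_of_p(p):
    """Toy polytropic EOS; stands in for the GPR-reconstructed eps(p)."""
    return 5.0e14 * (p / 1.0e34) ** 0.6 * c**2    # erg/cm^3, illustrative only

def tov_rhs(r, y):
    """Right-hand side of Eq. (15): y = (p, m)."""
    p, m = y
    eps = eps_of_p(p)
    dpdr = (-G * eps * m / (c**2 * r**2)
            * (1 + p / eps)
            * (1 + 4 * np.pi * r**3 * p / (m * c**2))
            / (1 - 2 * G * m / (c**2 * r)))
    dmdr = 4 * np.pi * r**2 * eps / c**2
    return [dpdr, dmdr]

def solve_star(p_c):
    """Integrate outward from the center until p drops to ~0,
    returning the radius R (km) and mass M (solar masses)."""
    surface = lambda r, y: y[0] - 1e-10 * p_c     # stop near zero pressure
    surface.terminal = True
    r0 = 1.0                                      # small seed radius in cm
    m0 = 4 / 3 * np.pi * r0**3 * eps_of_p(p_c) / c**2
    sol = solve_ivp(tov_rhs, (r0, 5e6), [p_c, m0], events=surface, rtol=1e-8)
    return sol.t[-1] / 1e5, sol.y[1, -1] / 1.989e33
```

Scanning over central pressures \(p_{c}\) then traces out the continuum \(M\)-\(R\) curve described in the text.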
Due to a large number of neurons (and neuron layers) and fitting parameters, an NN with a sufficient number of neurons can generate any continuous function (Cybenko, 1989; Hornik, 1991). The model function of a feed-forward NN can be expressed as, \[\mathbf{y}=f\left(\mathbf{x}|\{W^{(1)},b^{(1)},\cdots,W^{(l)},b^{(l)},\cdots,W^{(L)},b^{(L)}\}\right), \tag{16}\] where \(W^{(l)}\) and \(b^{(l)}\) denote the weight matrix and bias vector of the \(l\)-th layer, respectively. These parameters are optimized by minimizing a loss function over the training data set \(\mathcal{D}\), \[\mathcal{L}(W^{(l)})=\frac{1}{|\mathcal{D}|}\sum_{n=1}^{|\mathcal{D}|}\ell\left(\boldsymbol{y}_{n},f\left(\boldsymbol{x}_{n}|\left\{W^{(l)},b^{(l)}\right\}_{l}\right)\right), \tag{17}\] where \(\ell(\boldsymbol{y},\boldsymbol{y}^{\prime})\) measures the deviation between the training label and the NN output. In practice, the parameters are updated by the stochastic gradient descent (SGD) method, where the derivative of the loss function is estimated over a randomly chosen mini-batch \(\mathcal{B}\subset\mathcal{D}\). The gradient at each step is computed from the sample points
The EOSs in the priors and the output layer are presented by several discretized points in \(\phi\)-function to satisfy the constraint of the speed of sound and are smoothly connected by GP. On the other hand, the EOSs in the work of Fujimoto et al. were parameterized as a polytrope function dependent on the speeds of the sound of neutron star matter. Furthermore, the fitted EOSs in the work of Landry and Essick were produced by Bayesian inference with a set of nuclear-theoretic models. ## 3 The Numerical Details and Results To prepare the training data set, the EOSs from relativistic mean-field (RMF) models were used to obtain the generation interval of GPR fitting data points. Nine RMF parameterizations were selected: BigApple, DD2, DDLZ1, DDME1, DDME2, DDMEX, NL3, PKDD, and TW99 (Fattoyev et al., 2020; Typel et al., 2010; Wei et al., 2020; Niksic et al., 2002; Lalazissis et al., 2005; Taninah et al., 2020; Lalazissis et al., 1997; Long et al., 2004). All of these RMF parameter sets can provide neutron stars, whose maximum masses are larger than \(2.0M_{\odot}\)(Huang et al., 2020). The EOS from the NL3 set generated a maximum mass of neutrons star around \(2.78M_{\odot}\). The \(\epsilon\)-\(p\) relation in the EOS was transferred into the \(\phi\)-\(\ln p\) function, where \(\ln p\) is the natural logarithm of pressure. After calculating the means and variances of the \(\phi\)-\(\ln p\) relations from the above nine EOSs, it was found that their mean value is very close to the EOS from the DDME1 set (Niksic et al., 2002). To investigate the stability of initial values in the present framework, two schemes were adopted to generate the fitting interval with the GPR method: 1. _Scheme 1_ - After obtaining the mean and variance of \(\phi\)-\(\ln p\) functions from nine RMF parameter sets, the 95% confidence interval of the variance was selected as the generation range of \(\phi_{i}\). As shown in panel (a) of Fig. 2, this interval encloses all EOSs from the RMF model. 2. _Scheme 2_ - The \(\phi\)-\(\ln p\) function provided by DDME1 set was regarded as the standard, and \(\phi\pm 0.3\phi\) are chosen as the upper and lower bounds of the generation range of \(\phi_{i}\). Such an interval is consistent with the one obtained by scheme 1, to a large extent. In Fig. 3, the corresponding \(\epsilon\)-\(p\) relations of scheme 1 and 2 are compared to the model-informed and model-agnostic priors in the Bayesian inference method by Landry and Essick (Landry and Essick, 2019). The \(\epsilon\)-\(p\) relations from scheme 1 and scheme 2 in the present work are almost identical, which are also consistent with the model-informed prior. Since all of them are more strictly constrained by the theoretical EOSs. On the contrary, Figure 1: The NN flow chart of present framework. the model-agnostic prior has a loose boundary. It may consider more range of plausible EOSs. To produce an EOS of neutron stars (including the high-density region) with the GPR method and aforementioned schemes, seven pressure points \(\ln p_{i}\) (\(i=1,~{}2,~{}\cdots,~{}7\)) were selected, with the same interval, in the range \(\ln p\in[1,7]\). \(\phi_{i}\) was randomly generated in the training interval at each \(\ln p_{i}\) point as an initial data set (\(\phi_{i},~{}\ln p_{i}\)). The EOS below nuclear saturation density was chosen as the one from the SLy4 set. 
A smooth and continuous \(\phi(\ln p)\) function is fitted by the GPR method, where the hyper-parameters \(l\) and \(\sigma\) are obtained by maximizing the marginal log-likelihood, as shown in Eq. (11). Furthermore, the starting point, \(\phi_{1}=\phi(\ln p=1)\), was fixed at the magnitude from the DDME1 parameter set. The \(M\)-\(R\) relation of a neutron star can be calculated using the EOS from the GPR method by solving the TOV equation. In the present framework, the training data set of the DNN should assemble points on the \(M\)-\(R\) curve that correspond to the observables. The method proposed by Fujimoto et al. (Fujimoto et al., 2021) is used in this work to generate the training data. Firstly, the EOSs whose maximum neutron star masses are less than \(2.2M_{\odot}\), as well as the \(M\)-\(R\) relations that did not satisfy the radius constraints of PSR J0740+6620 and PSR J0030+0451 (Miller et al., 2019, 2021), were excluded from the training data. Then, 14 points in the mass region \([M_{\odot},M_{max}]\) on the \(M\)-\(R\) curve were randomly chosen as "the original data points" \((M_{i},R_{i})\) to simulate the real observations of the 14 available neutron stars. To account for the errors in the observations, the variances of the Gaussian distributions of the mass and radius, \(\sigma_{M_{i}}\) and \(\sigma_{R_{i}}\), were randomly taken from uniform distributions in the ranges \([0,M_{\odot}]\) and \([0,5\mathrm{km}]\). The deviations of mass and radius \((\Delta M_{i},\Delta R_{i})\) were drawn from Gaussian distributions with the variances \(\sigma_{M_{i}}\) and \(\sigma_{R_{i}}\). Finally, the "real data point" \((M_{i}+\Delta M_{i},R_{i}+\Delta R_{i})\) was obtained. The set \((M_{i}+\Delta M_{i},~R_{i}+\Delta R_{i},~\sigma_{M_{i}},~\sigma_{R_{i}})\) can be compared to the observational data of neutron stars. A group of \(i=14\) data points \((M_{i},~R_{i})\) was selected from the \(M\)-\(R\) curve generated by each EOS, and \(j=100\) groups of different variances \((\sigma_{M_{ij}},\sigma_{R_{ij}})\) were randomly sampled for each \(M_{i}\)-\(R_{i}\) data point. Then, \(k=100\) groups of deviations, \(\Delta M_{ijk}\) and \(\Delta R_{ijk}\), were provided by each variance set \((\sigma_{M_{ij}},\sigma_{R_{ij}})\). In this way, \(100\times 100\) sets of data were prepared for each EOS, with 14 data points sampled in each set. The above process was repeated 500 times to include as wide a range as possible, resulting in \(500\times 100\times 100=5,000,000\) sets, where one set includes 14 data points.

Figure 2: The generation range of \(\phi\)-\(\ln p\). We randomly select points within this range and then use the GPR method to generate the EOS. In panel (a), the nine EOSs are treated to obtain the mean \(\mu\) and variance \(\sigma\), whose 95% confidence interval is taken to obtain the fitting range. In panel (b), the generation range is based on the DDME1 curve, with a fluctuation of 0.3.

Figure 3: The corresponding \(\epsilon\)-\(p\) relations of schemes 1 and 2 in Figure 2 and the model-informed and model-agnostic priors in the Bayesian inference method by Landry and Essick (Landry and Essick, 2019).

For the architecture of the NN, the Python library Keras (Chollet et al., 2015) was employed, with TensorFlow (Abadi et al., 2016) as the backend. The number of NN layers, their corresponding neurons, and the activation functions are shown in Table 1. The hyperbolic tangent function of the output layer makes the results fall within \((-1,1)\), speeding up the training.
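A minimal Keras sketch of this architecture (summarized in Table 1) is given below; the helper name is ours, and the commented training call anticipates the settings described next:

```python
import tensorflow as tf
from tensorflow import keras

def build_eos_dnn():
    """The DNN of Table 1: 56 inputs -> 6 outputs.

    The 56 input neurons correspond to the 14 neutron stars, each described by
    (M + dM, R + dR, sigma_M, sigma_R); the 6 outputs are the free phi(ln p_i)
    points (phi_1 is fixed to the DDME1 value, hence 6 rather than 7).
    """
    model = keras.Sequential([
        keras.layers.Dense(60, activation="relu", input_shape=(56,)),
        keras.layers.Dense(40, activation="relu"),
        keras.layers.Dense(40, activation="relu"),
        keras.layers.Dense(6, activation="tanh"),  # outputs confined to (-1, 1)
    ])
    # Keras's built-in "msle" loss is the log(1 + y) variant of Eq. (19); the
    # exact Eq. (19) form would require strictly positive (rescaled) targets.
    model.compile(optimizer=keras.optimizers.Adam(), loss="msle")
    return model

# Illustrative training call with the settings reported in the text:
# model.fit(x_train, y_train, batch_size=1000, epochs=100,
#           validation_data=(x_val, y_val))
```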
The msle is chosen as the loss function, given in Eq. (19). The optimization method was Adam (Kingma and Ba, 2014), with a batch size of \(1000\). The NN parameters were initialized with the default Glorot uniform distribution (Glorot and Bengio, 2010). The DNN models trained on the full set of \(5,000,000\) data were compared with those trained on a random sample of \(1,000,000\) data from the training set; they gave similar results, but the latter greatly improved the training efficiency. In addition, for all models, the evolution of the loss function with training epoch was almost identical. The loss functions estimated for the validation data and training data are shown as an example in Fig. 4. When the epoch \(>10\), the validation loss is consistent with the training loss, whereas when the epoch \(>100\), the validation loss is stable. Therefore, each DNN model was trained with \(1,000,000\) data. The validation set was taken as \(10,000\) sets from the remaining \(4,000,000\) sets to check the convergence. Once the epoch reached \(100\), the model was considered trained. Due to the differences in initial input and training data, there was some uncertainty in the output results of the DNN. Therefore, the process was repeated \(100\) times to generate \(100\) independent DNN models. The uncertainties in the training results were estimated from the \(100\) fitted EOSs. In Fig. 5, \(200\) \(\phi\)-\(\ln p\) relations, from scheme 1 in panel (a) and scheme 2 in panel (b), are reconstructed by the trained DNN models. Each curve is smoothly connected with seven output points by the GPR method, as shown in the insets. It was found that most of these curves have similar pressure-dependence behaviors. Their differences increase in the high-density region due to the observation discrepancies associated with the \(14\) neutron stars. The \(\phi\)-\(\ln p\) relations must be converted to the \(\epsilon\)-\(p\) function by integrating Eq. (14) to obtain the EOS of the neutron star. In Fig. 6, the neutron star EOSs with the \(68\%\) and \(95\%\) confidence levels from the DNN with scheme 1 in panel (a) and scheme 2 in panel (b) are shown and compared to the joint constraints from the GW170817 and GW190814 events (Abbott et al., 2020) and the EOS from DDME1. In the insets, the original \(200\) EOSs from the DNN training are plotted. To analyze the uncertainties of the EOSs, it was assumed that the pressures at each energy density from the machine learning model satisfy a Gaussian distribution. Therefore, the mean EOS was obtained as the dashed curve, with the dark blue shading representing the \(68\%\) confidence level and the light blue shading the \(95\%\) one. In the low-density region, our estimations are consistent with the joint constraints on the EOS from the GW170817 and GW190814 events. With increasing density, the present EOSs are softer than the joint constraints, since the maximum masses of the \(14\) neutron stars are just around \(2M_{\odot}\). Furthermore, the fitted EOS differs slightly from the EOS of DDME1 in scheme 2, despite this being regarded as the mean value of the training data.
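The construction of these confidence bands from the ensemble of fitted EOSs can be sketched as follows, under the stated Gaussian assumption (the function name and array layout are ours):

```python
import numpy as np

def eos_confidence_bands(pressures):
    """Mean EOS and 68%/95% bands from the ensemble of fitted EOSs.

    `pressures` has shape (n_models, n_grid): the pressure predicted by each
    DNN model at each energy-density grid point. Since the pressures at fixed
    energy density are treated as Gaussian, the bands are taken at 1 and 2
    standard deviations around the mean.
    """
    mu = pressures.mean(axis=0)
    sd = pressures.std(axis=0)
    return mu, (mu - sd, mu + sd), (mu - 2.0 * sd, mu + 2.0 * sd)
```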
In the intermediate region of energy density, the EOS generated by DDME1 is harder than the fitted one, since the radius of the neutron star from DDME1 is a little larger when compared with the observations of the 14 neutron stars, as shown later.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Layer & Number of neurons & Activation function \\ \hline \(1\)(Input) & \(56\) & N/A \\ \hline \(2\) & \(60\) & ReLU \\ \hline \(3\) & \(40\) & ReLU \\ \hline \(4\) & \(40\) & ReLU \\ \hline \(5\)(Output) & \(6\) & tanh \\ \hline \hline \end{tabular} \end{table} Table 1: The setup of the present DNN. The number of input and output neurons can be modified according to different network conditions. Here, the number of neurons at the output layer is \(6\), because \(\phi(\ln p=1)\) has been fixed at the value obtained from the DDME1 set.

Figure 4: The loss as a function of epoch with the training data and validation data.

These results demonstrate that the EOS of the present framework is independent of the initial input of the training set. Here, it must be emphasized that the inconsistencies between the EOSs fitted by the LIGO-Virgo-KAGRA (LVK) collaborations from the GW170817 and GW190814 events and those of the present work are generated by the different theoretical frameworks and priors. In the LVK analysis, the EOSs in the priors were given by the spectral representation and are determined by the adiabatic index \(\Gamma\), as shown in Refs. (Read et al., 2009) and (Lindblom, 2010). The EOS parameters of the prior ranges in LVK were chosen from 34 neutron-star-matter EOSs, including PAL6, APR1-4, WFF1-3, MS1-2, and so on (Read et al., 2009). The maximum masses of the neutron stars from these EOSs are in the range of \(1.47\sim 2.78M_{\odot}\), and the radii at \(1.4M_{\odot}\) are \(9.36\sim 15.47\) km. Correspondingly, the prior EOS space in the present framework is taken from the 9 RMF parameter sets, which can only generate maximum neutron star masses of \(2.0\sim 2.4M_{\odot}\). Therefore, harder EOSs were fitted by LVK in the high-density region.

Once the EOS of the neutron star was determined, its \(M\)-\(R\) relation was obtained by solving the TOV equation. The \(M\)-\(R\) relations from our deduced EOSs are plotted in Fig. 7, with 68% (dark blue) and 95% (light blue) confidence levels. The corresponding \(M\)-\(R\) distributions of the 14 observed neutron stars are given as contour plots. The masses of the massive neutron stars, PSR J0348+0432, PSR J0740+6620, and PSR J1614-2230; the secondary compact object of the GW190814 event; and the radii of PSR J0030+0451 and PSR J0740+6620 from NICER are given and compared. The fitted EOSs from schemes 1 and 2 nicely reproduce the neutron star observations and are able to generate massive neutron stars. Their radii are consistent with the results of the 14 observed neutron stars and the mass-radius simultaneous measurements from NICER. Furthermore, the \(M\)-\(R\) relation from the DDME1 set is shown as a solid line, which was chosen as the mean value to generate the training data set in scheme 2. Its radius in the intermediate mass region is a little larger when compared with the 14 observed neutron stars.

Figure 5: The 200 \(\phi\)-\(\ln p\) relations from the DNN models with schemes 1 and 2.

Figure 6: The EOSs from the nonparametric machine learning method with schemes 1 and 2, compared to the joint constraints from the GW170817 and GW190814 events and to the EOS from the DDME1 set.
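Since every \(M\)-\(R\) curve above comes from solving the TOV equation for a fitted EOS, a bare-bones integrator is sketched here for reference; it works in geometric units (\(G=c=1\)), and the EOS callable, integration bounds, and tolerances are illustrative assumptions rather than the actual numerical setup of this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eps_of_p):
    """TOV structure equations in geometric units (G = c = 1)."""
    p, m = y
    eps = eps_of_p(p)  # EOS: energy density as a function of pressure
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def mass_radius(p_central, eps_of_p, p_surface=1e-12):
    """Integrate from the centre outward until the pressure vanishes."""
    surface = lambda r, y, *args: y[0] - p_surface
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, 50.0), [p_central, 0.0],
                    args=(eps_of_p,), events=surface, rtol=1e-8, atol=1e-12)
    return sol.y[1, -1], sol.t[-1]  # gravitational mass M and radius R
```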
The output EOSs of the DNN from scheme 1 provide smaller radii, which coincide with the distribution of the observables. This shows that the final results of the present framework are independent of the generating scheme of the training data. In a binary neutron star merger, one neutron star will be deformed by the external gravitational field of the other star. The magnitude of the deformation is denoted as the tidal deformability, which depends on the EOS of the neutron star and can be extracted from the gravitational wave signal provided by the binary neutron star. In the GW170817 event, the dimensionless tidal deformability at \(1.4M_{\odot}\) was inferred as \(\Lambda_{1.4}=190^{+390}_{-120}\) (Abbott et al., 2018). In Fig. 8, the dimensionless tidal deformabilities as functions of neutron star masses from schemes 1 and 2, with 68% and 95% confidence levels, are plotted and compared to the constraint from the GW170817 event and the results from the DDME1 set. The \(\Lambda\) decreases with the neutron star mass since it is proportional to \(R^{5}/M^{5}\) of the neutron star. Therefore, the \(\Lambda\) from DDME1 is relatively larger. The \(\Lambda_{1.4}\) from the reported machine learning framework completely satisfies the measurements from the gravitational wave detection. Table 2 lists the properties of neutron stars fitted by the DNN with nonparametric training data: namely, the maximum masses of neutron stars, the corresponding radii, the radii at \(1.4M_{\odot}\) and \(2.08M_{\odot}\), and the dimensionless tidal deformability at \(1.4M_{\odot}\), with 68% and 95% confidence levels in schemes 1 and 2. These variables were compared to the results from the DDME1 parameter set. Both of these two schemes can generate a massive neutron star with a mass close to \(2.55M_{\odot}\). The radius of the \(1.4M_{\odot}\) neutron star is around 12.30 km, which is consistent with the value extracted from GW170817 of \(R_{1.4}=11.9\pm 1.4\) km (Abbott et al., 2019). The radius of the \(2.08M_{\odot}\) neutron star is now fitted at around 12.0 km. The radius and mass of PSR J0740+6620 were analyzed as \(12.39^{+1.30}_{-0.98}\) km and \(2.072^{+0.067}_{-0.066}M_{\odot}\) from NICER, by Riley et al. (Riley et al., 2021). The results from the two schemes are similar, with differences of less than 2%. It can be found that the present fits of the neutron star properties are comparable with those generated by the model-informed prior in the work of Landry and Essick (Landry and Essick, 2019), while they are much more constrained than the ones from the model-agnostic prior. This is because our training data are prepared only to reproduce the theoretical EOSs, while the model-agnostic prior considers the possibility that the EOS might be quite different from current theoretical fits. Finally, the \(M\)-\(R\) relations from the two schemes used to generate the training set are compared in Fig. 9. Their behaviors are quite similar. The only difference is that the radii of the neutron stars and the uncertainties from scheme 2 are a little larger than those of scheme 1 because of the influence of the DDME1 set.

Figure 7: The mass-radius relation of neutron stars from the nonparametric machine learning method, the observation distributions from the 14 neutron stars, the masses of massive neutron stars, and the radius constraints from NICER.

Figure 8: The \(\Lambda\)-\(M\) relation generated by the fitted EOSs, compared to that from DDME1 and the values extracted from the GW170817 event.
This demonstrates that the fitted EOSs in the present framework are strongly independent of the choice of initial training data values used in the GPR method.

## 4 Summaries and perspectives

A nonparametric methodology has been proposed to infer the EOSs of neutron star matter from recent observations of neutron stars. A DNN was designed to map the mass-radius observables to the energy-pressure relation of dense matter. The GPR method was applied to construct the EOSs, and this method is completely independent of any explicit functional form. To generate the training data set, two schemes for the example data were adopted to provide the initial EOSs. The mean values and variances of the EOSs from nine successful relativistic mean-field model parameter sets were considered in the first scheme, whereas in the second, the mean value was chosen from the DDME1 set and the deviation was fixed at 0.3. A 5-million training data set was constructed by including the uncertainties in the mass and radius of neutron stars. Furthermore, the constraints from the massive neutron stars and the mass-radius simultaneous measurements were also taken into account in the training set.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & C. L. & \(M_{max}[M_{\odot}]\) & \(R_{max}\) [km] & \(R_{1.4}\) [km] & \(R_{2.08}\) [km] & \(\Lambda_{1.4}\) \\ \hline DDME1 & & 2.45 & 11.83 & 12.99 & 12.98 & 692 \\ \hline \multirow{2}{*}{scheme 1} & 68\% & \(2.38^{+0.07}_{-0.07}\) & \(11.07^{+0.16}_{-0.17}\) & \(12.31^{+0.15}_{-0.16}\) & \(11.95^{+0.23}_{-0.23}\) & \(459^{+37}_{-46}\) \\ \cline{2-7} & 95\% & \(2.38^{+0.15}_{-0.13}\) & \(11.07^{+0.34}_{-0.32}\) & \(12.31^{+0.29}_{-0.31}\) & \(11.95^{+0.44}_{-0.47}\) & \(459^{+82}_{-81}\) \\ \hline \multirow{2}{*}{scheme 2} & 68\% & \(2.41^{+0.08}_{-0.07}\) & \(11.15^{+0.21}_{-0.20}\) & \(12.30^{+0.17}_{-0.19}\) & \(12.03^{+0.27}_{-0.27}\) & \(448^{+55}_{-43}\) \\ \cline{2-7} & 95\% & \(2.41^{+0.15}_{-0.14}\) & \(11.15^{+0.41}_{-0.39}\) & \(12.30^{+0.35}_{-0.37}\) & \(12.03^{+0.53}_{-0.54}\) & \(448^{+110}_{-86}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The maximum masses of neutron stars, the corresponding radii, the radii at \(1.4M_{\odot}\) and \(2.08M_{\odot}\), and the dimensionless tidal deformability at \(1.4M_{\odot}\) from the nonparametric EOS models, with 68% and 95% confidence levels in schemes 1 and 2, compared to those from DDME1.

Figure 9: The \(M\)-\(R\) relation comparison between the two schemes with 95% confidence intervals and the constraints from the massive neutron stars and NICER.

One hundred independent NN models were generated with different training data sets, producing one hundred EOSs of the neutron star. These were analyzed with the standard statistical method, and EOSs with the 68% and 95% confidence levels were obtained. They were softer when compared with the joint constraints from the GW170817 and GW190814 events. The mass-radius relations from our fitted EOSs fully satisfy the various present astronomical observations of neutron stars. The dimensionless tidal deformability at \(1.4M_{\odot}\) was also consistent with the data extracted from GW170817. Finally, concerning the creation of the training data, the results from both schemes were almost identical. This shows that the present fitted EOSs are strongly independent of the initial choice of the training data set. Our nonparametric NN framework can be naturally extended to other supervised learning fields to avoid the limitations of specific function forms.
In the future, the original data on the gravitational wave from the binary neutron star will be included in the input layer to simulate the observations more realistically. The hadron-quark phase transition was excluded in the present training data set, and this too will be considered in future work. ## 5 Acknowledgments This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 11775119 and 12175109), and the Natural Science Foundation of Tianjin (Grant No: 19JCYBJC30800). We are grateful to the referee for his constructive comments and suggestions.
2302.02924
Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks
Among Bayesian methods, Monte-Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks. Its popularity recently led to seminal works that proposed activating the dropout layers only during inference for evaluating uncertainty. This approach, which we call dropout injection, provides clear benefits over its traditional counterpart (which we call embedded dropout) since it allows one to obtain a post hoc uncertainty measure for any existing network previously trained without dropout, avoiding an additional, time-consuming training process. Unfortunately, no previous work compared injected and embedded dropout; therefore, we provide the first thorough investigation, focusing on regression problems. The main contribution of our work is to provide guidelines on the effective use of injected dropout so that it can be a practical alternative to the current use of embedded dropout. In particular, we show that its effectiveness strongly relies on a suitable scaling of the corresponding uncertainty measure, and we discuss the trade-off between negative log-likelihood and calibration error as a function of the scale factor. Experimental results on UCI data sets and crowd counting benchmarks support our claim that dropout injection can effectively behave as a competitive post hoc uncertainty quantification technique.
Emanuele Ledda, Giorgio Fumera, Fabio Roli
2023-02-06T16:56:53Z
http://arxiv.org/abs/2302.02924v1
# Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks ###### Abstract Among Bayesian methods, Monte-Carlo dropout provides principled tools for evaluating the _epistemic_ uncertainty of neural networks. Its popularity recently led to seminal works that proposed activating the dropout layers only during inference for evaluating uncertainty. This approach, which we call dropout _injection_, provides clear benefits over its traditional counterpart (which we call _embedded_ dropout) since it allows one to obtain a post hoc uncertainty measure for any existing network previously trained without dropout, avoiding an additional, time-consuming training process. Unfortunately, no previous work compared injected and embedded dropout; therefore, we provide the first thorough investigation, focusing on regression problems. The main contribution of our work is to provide guidelines on the effective use of injected dropout so that it can be a practical alternative to the current use of embedded dropout. In particular, we show that its effectiveness strongly relies on a suitable scaling of the corresponding uncertainty measure, and we discuss the trade-off between negative log-likelihood and calibration error as a function of the scale factor. Experimental results on UCI data sets and crowd counting benchmarks support our claim that dropout injection can effectively behave as a competitive post hoc uncertainty quantification technique. keywords: Uncertainty Quantification, Epistemic Uncertainty, Monte Carlo Dropout, Trustworthy AI, Crowd Counting + Footnote †: journal: Computer Science ## 1 Introduction Systems based on Artificial Intelligence are nowadays widespread in industry, workplaces and everyday life. They often operate in critical contexts where errors can cause considerable harm to humans (e.g., self-driving cars), where transparency and accountability are required (e.g., medical and legal domains), and where end users need to be aware of the uncertainty/reliability of AI predictions (e.g., intelligent video surveillance). In particular, given that AI moved out of research laboratories to real-world scenarios, the need for trustworthiness guarantees led to the growth of an entire research field devoted to quantifying _uncertainty_ of machine learning models [1; 2]. In this field, it is common to distinguish between two possible sources of uncertainty: one caused by intrinsic data noise (_aleatoric_ uncertainty, or data uncertainty) and one caused by the lack of knowledge on the "correct" prediction model (_epistemic_ uncertainty, or model uncertainty) [3]. In particular, most of the work on deep neural networks uses their Bayesian extension for quantifying _uncertainty_, modeling the network weights with a probability distribution [4; 5]. The simple and effective Monte Carlo Dropout technique [6] is one of the leading and most widely used techniques for evaluating _epistemic_ uncertainty according to the Bayesian extension of neural networks. It is based on _dropout_, a stochastic regularization technique designed for neural networks, consisting of randomly dropping out some neural units and their associated connections during training, with a predefined probability [7]. The main idea behind Monte Carlo Dropout is to exploit the weight randomization induced by the dropout layers by keeping them active during inference; this allows one to approximate the distribution of the neural network weights with respect to training data. 
However, whereas dropout was originally designed as a stochastic regularization technique to be used only during training, when it is used for uncertainty quantification its purpose is to estimate the weight distribution, and it is no longer conceived as a regularizer. This suggests that activating dropout layers during training may not be necessary if dropout is used only for uncertainty quantification. Accordingly, a recent work by Loquercio et al. [8] proposed to activate dropout layers for uncertainty quantification only during inference, instead of using them also during training. The latter approach, which we call dropout _injection_, appears very interesting, as it would allow the _integration_ of a measure of epistemic uncertainty into _any_ network that was previously trained with no dropout, without the computational burden of a further training procedure required by the original Monte Carlo Dropout, which we call _embedded_ dropout, nor the need to optimize the related hyper-parameters. Loquercio et al. [8] provided empirical evidence that injected dropout, combined with assumed density filtering [9] for aleatoric uncertainty evaluation, can be an effective tool for evaluating the _predictive_ uncertainty of a predictor. However, no previous work, including the one by Loquercio et al. [8], provided a thorough analysis of injected dropout as an epistemic uncertainty evaluation technique, nor a comparison with classical embedded dropout, in order to provide guidelines for its effective use.

Based on the above motivations, in this work we carry out a deep investigation of injected dropout, focusing on regression problems. First, our analysis shows that its effectiveness critically relies on a suitable scaling of the corresponding uncertainty measure, differently from embedded dropout. To this aim, we extend the original formulation by Loquercio et al. [8] to introduce and compute a suitable scaling factor [10]. We also address the resulting issue of achieving a trade-off between different aspects related to the quality of the corresponding uncertainty measure, i.e., negative log-likelihood and calibration error, as a function of the scale factor. We then carry out experiments on eight UCI data sets as well as five benchmarks for crowd counting, which is a challenging computer vision task, to thoroughly evaluate the behavior of injected dropout in terms of prediction accuracy and the quality of the corresponding uncertainty measure, providing experimental evidence of its effectiveness over embedded dropout. Our results shed light on the practical operation of injected dropout and show that it can effectively evaluate epistemic uncertainty at test time without a time-consuming training process with dropout layers.

Figure 1: A schematic diagram of the three main steps that we propose to effectively implement injected dropout: 1) select a neural network trained for a regression problem without dropout (network output: \(\hat{y}\in\mathcal{R}\)); 2) add dropout layers to the network; 3) compute the dropout rate \(\Phi\) and the scaling factor \(C\) of the uncertainty measure by solving the optimization problem in (9) for obtaining the scaled measure \(\xi^{2}\).

In the following, we present background concepts on Bayesian methods for uncertainty evaluation and Monte Carlo Dropout in Sect. 2, a theoretical analysis of injected dropout and the proposed method for injected dropout in Sect. 3, and experimental results in Sect. 4. Discussion and conclusions are given in Sect.
5, where future work on this topic is also discussed. ## 2 Background Concepts We first summarise the main concepts of uncertainty quantification in neural networks based on Bayesian methods, then we focus on Monte Carlo dropout for epistemic uncertainty quantification. ### Uncertainty Evaluation through Bayesian Inference To obtain a network capable of providing predictions with an associated level of uncertainty, one possible approach is a Bayesian extension. In particular, for a regression problem, a traditional neural network implements a predictor \(\hat{y}=f(x;w)\in\mathbb{R}\), where \(w\) denotes the connection weights. Bayesian neural networks output a full distribution instead of a point prediction \(\hat{y}\), which is usually modeled with a Gaussian distribution \(\mathcal{N}(\hat{\mu},\hat{\sigma}^{2})\), albeit it is not the only possible choice (e.g., Gaussian Mixture distributions or Generalized Gaussian distributions have also been proposed [11; 12]). In this case, the mean \(\hat{\mu}\) is interpreted as the point prediction \(\hat{y}\), whereas the variance \(\hat{\sigma}^{2}\) represents the desired uncertainty measure; ideally, one wants \(\hat{\mu}\) and \(\hat{\sigma}^{2}\) to approximate well the _true_ target distribution \(\mathcal{N}(\mu,\sigma^{2})\). Under a Bayesian framework, the full probability distribution of the target variable \(y\) for an input query \(x\) is also conditioned on the training set \(\mathcal{D}\): \[p(y|x,\mathcal{D})=\int_{w}p(y|x,w)p(w|\mathcal{D})\mathrm{d}w. \tag{1}\] This is equivalent to taking the expectation under a posterior on the weights, \(p(w|\mathcal{D})\), that provides the desired predictive distribution. However, since obtaining \(p(w|\mathcal{D})\) is intractable for most neural networks, an approximation \(q(w)\) is usually considered. By substituting the true distribution with its approximation, the posterior can be computed as: \[p(y|x,\mathcal{D})\approx\int_{w}p(y|x,w)q(w)\mathrm{d}w. \tag{2}\] A suitable approximation \(q(w)\) is sought among a set of possible candidates by minimizing its Kullback-Leibler divergence (KL-divergence) \(KL[p(w|\mathcal{D})||q(w)]\) with the real distribution. From variational inference, it is known that the KL-divergence can be minimised by minimizing the negative log-likelihood (NLL), that for regression problems is approximated on a given data set of \(N\) samples as: \[\frac{1}{N}\sum_{n=1}^{N}\frac{1}{2}\frac{(y_{n}-\hat{y}_{n})^{2}}{\hat{\sigma }_{n}^{2}}+\frac{1}{2}\log(\hat{\sigma}_{n}^{2}). \tag{3}\] ### Monte Carlo Dropout as a Bayesian Approximation To model an approximated distribution \(q(w)\), one may consider incorporating a stochastic component inside a network. One possible way to introduce randomness on the network weights to estimate a full probability distribution over them is to use Stochastic Regularization Techniques [2], originally proposed for network regularisation. In particular, it has been shown both theoretically [6] and practically [6; 13; 3] that dropout can be used to this aim. The standard dropout envisages the deactivation of each neuron belonging to a specific network layer with a probability, named _dropout rate_, that follows the Bernoulli distribution \(\mathcal{B}(\Phi)\), where the parameter \(\Phi\) corresponds to the dropout rate. The dropout layers are kept active during training and are deactivated at inference time, which produces a regularisation effect. 
Unlike standard dropout, Monte Carlo dropout keeps the dropout layers active at inference time; this allows one to derive a full probability distribution \(q(w|\Phi)\) approximating \(p(w|\mathcal{D})\). The optimal dropout rate \(\Phi\) is the one that minimizes the divergence between the real and the approximated distributions; in practice, a suitable \(\Phi\) is found by minimizing an approximation of NLL computed, e.g., on a validation set of \(N\) samples: \[\Phi=\arg\min_{\varphi}\frac{1}{N}\sum_{n=1}^{N}\frac{1}{2}\frac{(y_{n}-\hat{y} _{n}(\varphi))^{2}}{\hat{\sigma}_{n}^{2}(\varphi)}+\frac{1}{2}\log(\hat{ \sigma}_{n}^{2}(\varphi)). \tag{4}\] Once such a value of \(\Phi\) and the trained network are obtained, one can use the approximate distribution \(q(w|\Phi)\) for computing a prediction and associated predictive uncertainty, modeled as the mean and the variance of the posterior distribution: \[p(y|x,\mathcal{D})\approx\frac{1}{T}\sum_{t=1}^{T}p(y|x,w_{t})\sim\mathcal{N} (\hat{\mu};\hat{y},\hat{\sigma}^{2})\, \tag{5}\] where \(T\) denotes the number of Monte Carlo samples obtained by querying the network multiple times with the same sample \(x\). Two widely used techniques for solving the optimization problem (4) are grid search [6] and concrete dropout [13]. In embedded dropout, after solving (4) using a training and a validation set, one can use the obtained dropout rate in the inference stage. To improve performance, a different dropout rate can be used for each network layer [13]. On the contrary, in the injected dropout approach [8] dropout layers are added to an already-trained network, and the dropout rate \(\Phi\) is sought by minimizing the NLL (4) of the trained network, e.g., on a validation set. ## 3 The proposed method for Monte Carlo Dropout Injection A notable difference between embedded and injected dropout is that the former also performs a regularization, whereas the latter does not. This can lead to differences in the prediction error of the corresponding networks, as well as in their uncertainty measures, which we shall investigate in this section. In particular, we analyze, theoretically and empirically, how prediction error and quality of the uncertainty measure behave as a function of the dropout rate \(\Phi\), for both embedded and injected dropout (Sect. 3.1). The results of this analysis will point out an issue of injected dropout, that we propose to mitigate by suitably _scaling_ the uncertainty measure; to this aim, we propose an extension of the original formulation in Eq. (4), using the \(\sigma\)-scaling technique [10] taking into account how the scaling factor affects the calibration of the uncertainty measure(Sect. 3.2). For simplicity, we shall consider a fixed dropout rate across all network layers, but all our results apply to the most general case when different dropout rates are allowed. ### Analysis of Monte Carlo Dropout Injection In Sect. 2.2 we mentioned that the dropout rate \(\Phi\) is usually chosen by minimizing the NLL, both in embedded and injected dropout. Now, let us consider the weights vector \(w\) of a neural network: one can obtain \(w\) by using a standard optimization process that may or may not involve the activation of dropout layers during _training_ for regularizing the network weights (i.e., embedded or injected dropout, respectively). 
Dropout layers are then activated at _inference_ time, both when performing embedded and injected dropout, with dropout rate \(\varphi\), in order to obtain a so-called _Monte-Carlo iteration_, i.e. a prediction taken using the network with stochastic weights deactivation. This is equivalent to applying a binary mask \(m(\varphi)=diag(z_{1}(\varphi)\ldots z_{L}(\varphi))\) - where \(z_{\ell}(\varphi)\sim\mathcal{B}(\varphi)\) and \(L\) denotes the size of \(w\) - to the network weights. Applying \(T\) stochastic iterations is equivalent to applying a series of \(T\) masks, each one denoted by \(m_{t}(\varphi)\), to obtain a series of instances of the network weights, \(w_{1}(\varphi)\ldots w_{T}(\varphi)\) with \(w_{t}(\varphi)=w\cdot m_{t}(\varphi)\), where \(\cdot\) denotes the matrix product. It is easy to see that \(w_{t}(\varphi)\) always depends upon \(\varphi\) through the binary mask \(m_{t}(\varphi)\), both for embedded and injected dropout. We can now rewrite Eq. (5) for modelling the prediction \(\hat{y}\) and the corresponding uncertainty \(\hat{\sigma}^{2}\) of a given instance \(x\) when using a neural network with dropout rate \(\varphi\): \[\frac{1}{T}\sum_{t=1}^{T}p(y|x,w_{t}(\varphi))=\frac{1}{T}\sum_{t=1}^{T}f(x;w _{t}(\varphi))\sim\mathcal{N}(\hat{\mu};\hat{y},\hat{\sigma}^{2})\, \tag{6}\] where \(f(x;w_{t}(\varphi))\) denotes the prediction of the network parametrized with \(w_{t}(\varphi)\) on the instance \(x\). With abuse of notation, let us denote with \(x\), \(y\), \(\hat{y}\) and \(\hat{\sigma}^{2}\) the _vectors_ of the instances, their ground truths, and the corresponding predictions and uncertainty measures, respectively, of a data set \(\mathcal{D}\) of size \(N\). It is easy to see that, as the dropout rate changes, the prediction vector \(\hat{y}=[\hat{y}_{1}\ldots\hat{y}_{N}]\) can change, and as a consequence also the quadratic error vector \(\epsilon^{2}=(y-\hat{y})^{2}\) and the uncertainty vector \(\hat{\sigma}^{2}=[\hat{\sigma}_{1}^{2}\ldots\hat{\sigma}_{N}^{2}]\) will change. In particular, choosing the dropout rate \(\Phi\) by solving the optimization problem (4) amounts to pursuing two goals: (i) minimizing the quadratic error \(\epsilon_{n}^{2}=(y_{n}-\hat{y}_{n})^{2}\); (ii) minimizing the absolute difference between the prediction's quadratic error and the variance, \(|\epsilon_{n}^{2}-\hat{\sigma}_{n}^{2}|\). It is easy to show that the "best" trade-off between them according to Eq. (4), i.e., the minimum NLL, is achieved when the following condition holds (a simple proof is reported in A): \[\hat{\sigma}_{n}^{2}=(y_{n}-\hat{y}_{n})^{2}\quad\forall n\in\mathcal{D}. \tag{7}\] A significant difference can now be noticed in the behavior of \(\epsilon^{2}\) and \(\hat{\sigma}^{2}\), as a function of the dropout rate \(\varphi\), between embedded and injected dropout. As we already pointed out, when injecting dropout the choice of \(\varphi\) does not affect the network weights \(w\); in embedded dropout, instead, changing \(\varphi\) results in explicitly modifying the vector \(w\). Under the assumption of not having an under-parametrization, the influence of \(\varphi\) will lead to a regularization effect on \(w\) when using embedded dropout, but not when using injected dropout: since a network with dropout regularization is usually more robust to dropping network weights, it can be expected that a lower quadratic error is attained when using embedded dropout. 
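To fix ideas, the Monte-Carlo iterations of Eq. (6) and the empirical NLL of Eq. (4) can be sketched as follows; this is a minimal Keras-style sketch (the helper names are ours), where `model` is assumed to be a trained regressor into which dropout layers have been injected, and `training=True` keeps them active at inference:

```python
import numpy as np

def mc_dropout_predict(model, x, t=50):
    """T stochastic forward passes with dropout kept active (Eq. (6)).

    Returns the point prediction (mean over the T samples) and the
    epistemic uncertainty measure (variance over the T samples).
    """
    preds = np.stack([model(x, training=True).numpy() for _ in range(t)])
    return preds.mean(axis=0), preds.var(axis=0)

def nll(y, y_hat, sigma2, eps=1e-12):
    """Empirical NLL of Eq. (4), used to grid-search the dropout rate."""
    s2 = np.maximum(sigma2, eps)
    return np.mean(0.5 * (y - y_hat) ** 2 / s2 + 0.5 * np.log(s2))
```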
Moreover, when using injected dropout we argue that, as \(\varphi\) increases, prediction quality quickly deteriorates, i.e., the quadratic error increases. This behavior penalizes injected dropout, limiting to a tiny range \([0,\varphi_{\text{max}}]\), for some \(\varphi_{\text{max}}\), the values of dropout rate \(\varphi\) that lead to acceptably small values of the quadratic error \(\epsilon^{2}\). However, for very small values of the dropout rate, the variability induced on the network predictions, and thus \(\hat{\sigma}^{2}\), is so small that the resulting NLL is very high, which in turn is an indicator of poor quality of the uncertainty measure. Therefore, the range of "useful" values of \(\varphi\) further reduces to \([\varphi_{\text{min}},\varphi_{\text{max}}]\), for some value \(\varphi_{\text{min}}\). We shall provide experimental evidence of this behavior in Sect. 4.2. In the next section, we propose to address the above issue of injected dropout through a suitable rescaling of its uncertainty measure.

### The Proposed Injection Method

The solution we propose to mitigate the limitation of injected dropout pointed out in Sect. 3.1 is summarized in Fig. 1 and consists in _rescaling_ the uncertainty measure \(\hat{\sigma}^{2}\).

**Rescaling the uncertainty measure** - Let us consider a rescaled uncertainty measure \(\xi^{2}(\Phi)=C\cdot\hat{\sigma}^{2}(\Phi)\), where \(C\in(0,+\infty)\). A suitable value of \(C\) can be sought by optimizing the NLL as a function of \(C\), besides \(\Phi\), through the following modification of the optimization problem (4): \[\Phi,C=\arg\min_{\varphi,c}\frac{1}{N}\sum_{n=1}^{N}\frac{1}{2}\frac{(y_{n}-\hat{y}_{n}(\varphi))^{2}}{c\cdot\hat{\sigma}_{n}^{2}(\varphi)}+\frac{1}{2}\log(c\cdot\hat{\sigma}_{n}^{2}(\varphi)). \tag{8}\] We argue that such a scaled measure can lead to a much better NLL value than the original formulation (4). To solve the optimization problem (8) it is possible to exploit the results by Laves et al. [10], which show that, given a trained regressor providing a prediction \(\hat{y}\) and a corresponding uncertainty measure \(\hat{\sigma}^{2}\), one can further reduce the NLL with respect to the solution of (3) by rescaling \(\hat{\sigma}^{2}\) as \(C\cdot\hat{\sigma}^{2}\), and that the optimal value of the scale factor \(C\), for any _given_ quadratic error and uncertainty measure vectors \(\epsilon_{n}^{2}\) and \(\hat{\sigma}_{n}^{2}\), is analytically given by: \[C=\frac{1}{N}\sum_{n=1}^{N}\frac{\epsilon_{n}^{2}}{\hat{\sigma}_{n}^{2}}. \tag{9}\] Accordingly, the optimal scale factor can be interpreted as an intrinsic indicator of how close the magnitude of the uncertainty vector is to the magnitude of the quadratic error: if \(C>1\), the uncertainty measure \(\hat{\sigma}_{n}\) is smaller, on average (over the \(N\) samples), than the quadratic error \(\epsilon_{n}\); on the contrary, if \(C<1\), then \(\hat{\sigma}_{n}\) is on average larger than \(\epsilon_{n}\).

Figure 2: These plots (taken from experiments in Sect. 4.2) point out the difference between the scale-agnostic optimization process of Eq. (4) (left) and the scale-aware one of Eq. (8) (right). The latter can potentially lead to different dropout rates with a lower NLL than the former, thanks to a rescaling of the uncertainty measure.

Finally, since the scale factor is analytically known for a given uncertainty measure, Eq.
(9) can be exploited to rewrite \(C\) as a function of \(\Phi\) in the optimization problem (8), which can therefore be simplified as: \[\Phi=\arg\min_{\varphi}\frac{1}{N}\sum_{n=1}^{N}\frac{1}{2}\frac{(y_{n}-\hat{ y}_{n}(\varphi))^{2}}{C(\varphi)\cdot\hat{\sigma}_{n}^{2}(\varphi)}+\frac{1}{ 2}\log(C(\varphi)\cdot\hat{\sigma}_{n}^{2}(\varphi)). \tag{10}\] Accordingly, both the dropout rate and the scale factor can be found by optimizing the NLL only with respect to the dropout rate, similarly to the original formulation (4). When introducing a scale factor, the values of the unscaled uncertainty measure \(\hat{\sigma}_{n}^{2}\) do not need anymore to be close to the corresponding quadratic error \(\epsilon_{n}^{2}\) to keep the NLL small, and are therefore not penalized as in the original formulation (4). We point out that, although rescaling can also be applied to embedded dropout, it is much more advantageous for injected dropout to mitigate the issue described in Sect. 3.1, which instead does not affect embedded dropout to a significant extent. In particular, such mitigation is expected to be more beneficial for large data sets, complex tasks, and networks with a large number of weights: we shall provide experimental evidence of this fact in Sect. 4.2. As shown in Fig. 2, the proposed formulation (10) potentially allows one to explore regions of the NLL surface (considered as a function of both the dropout rate and the scale factor), that would otherwise be inaccessible by the original formulation (4), leading to a better NLL. **Calibration of the uncertainty measure as a function of the scale factor** - So far, we have focused on NLL minimization, since this allows one to minimize the distance between the actual prior distribution of the weights, \(p(w)\), and the approximated one, \(q(w)\). However, the quality of an uncertainty measure also depends on its _calibration_, and minimizing NLL does not guarantee to obtain a well-calibrated measure. Given the relevance of the scaling factor to injected dropout, it is interesting to investigate how it affects the trade-off between minimizing NLL and obtaining a well-calibrated measure. To this aim, we first summarise the notion of calibration of an uncertainty measure for regression problems. As mentioned in Sect. 2, the distribution \(p(y|x,\mathcal{D})\) is assumed to be Normal, \(\mathcal{N}(\mu,\sigma^{2})\), as well as its approximation in Eq. (5), \(\mathcal{N}(\hat{\mu},\hat{\sigma}^{2})\); to make the latter as close as possible to the former, one minimizes the NLL on a validation set. Considering \(\mu\) and \(\sigma^{2}\) as the point prediction and the corresponding uncertainty for a given instance \((x,y)\), one can construct a confidence interval with probability \(\alpha\) for a generic sample \((x,y)\): \[CI_{\alpha}=[\mu-z_{\alpha}\cdot\sigma^{2},\mu+z_{\alpha}\cdot\sigma^{2}]\, \tag{11}\] which is characterized by the following: \[p(y\in CI_{\alpha})=\alpha,\quad\forall\alpha\in[0,1]. \tag{12}\] One would like that Eq. (12) holds for any instance \((x,y)\) also with respect to a confidence interval \(\hat{CI}_{\alpha}\) computed from the estimated distribution: \[\hat{CI}_{\alpha}=[\hat{\mu}-z_{\alpha}\cdot\hat{\sigma}^{2},\hat{\mu}+z_{ \alpha}\cdot\hat{\sigma}^{2}]. \tag{13}\] If so, the uncertainty measure \(\hat{\sigma}^{2}\) is said to be perfectly calibrated. This is, however, not guaranteed in practice. 
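Before evaluating calibration empirically, it is worth making the scale-aware search concrete. In the minimal sketch below, `evaluate(phi)` is a hypothetical helper that returns the per-sample squared errors and unscaled MC-dropout variances on the validation set at dropout rate \(\varphi\):

```python
import numpy as np

def optimal_scale(err2, sigma2):
    """Closed-form scale factor of Eq. (9): C = mean(eps_n^2 / sigma_n^2)."""
    return np.mean(err2 / sigma2)

def scaled_nll_search(dropout_rates, evaluate):
    """Grid search of Eq. (10) on a validation set."""
    best = None
    for phi in dropout_rates:
        err2, sigma2 = evaluate(phi)
        c = optimal_scale(err2, sigma2)
        nll = np.mean(0.5 * err2 / (c * sigma2) + 0.5 * np.log(c * sigma2))
        if best is None or nll < best[0]:
            best = (nll, phi, c)
    return best  # (minimum NLL, dropout rate Phi, scale factor C)
```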
To empirically evaluate the calibration of an uncertainty measure, the probability \(p(y\in CI_{\alpha})\) can be estimated on a data set (referred to as validation [14] or calibration set [15]), as the fraction of instances belonging to the confidence interval \(\hat{CI}_{\alpha}\), for several values of \(\alpha\). We call this fraction the _observed frequency_, and denote it by \(\pi(\alpha)\); ideally, \(\pi(\alpha)\) should be equal to \(\alpha\), which we call the _expected frequency_. Calibration quality can therefore be evaluated through a _calibration curve_, i.e., a plot of the observed frequency as a function of the expected one, as in the example shown in Fig. 3. Note that a perfect calibration corresponds to a calibration curve equal to the bisector of the X-Y axes. When the observed frequency \(\pi(\alpha)\) is below the bisector, the uncertainty measure is _overconfident_ for the corresponding values of \(\alpha\) (because the expected frequency is greater than the observed one); on the contrary, where \(\pi(\alpha)\) is above the bisector, the uncertainty measure is _underconfident_.1 Footnote 1: Note that the calibration curve can cross the bisector for zero, one or more points, besides the ones corresponding to \(\alpha=0\) and \(\alpha=1\). Calibration quality can now be quantitatively evaluated as a scalar measure, in terms of "how far" the calibration curve is from the ideal one. In this work, we employ the area subtended by the calibration curve with respect to the bisector, known in the literature by the name of Miscalibration Area (MA) [16], which corresponds to the shaded regions in Fig. 3: \[\mathrm{MA}=\int_{0}^{1}|\pi(\alpha)-\alpha|\mathrm{d}\alpha. \tag{14}\] In practice, the MA can be empirically estimated from a validation set as the average difference in absolute value between the expected and the observed frequency, over a set of \(M\) values of \(\alpha\): \[\mathrm{MA}=\frac{1}{M}\sum_{m=1}^{M}|\pi(\alpha_{m})-\alpha_{m}|. \tag{15}\] As mentioned above, the scale factor \(C\) obtained by solving problem (10) does not guarantee that the corresponding MA is null, i.e., perfect calibration. To analyze the trade-off between MA and NLL that can be obtained as a function of the scale factor, we devised an iterative procedure that, starting from the obtained scale factor \(C\), modifies it to reduce MA as much as possible, by increasing or reducing the confidence of each prediction of the calibration set, depending on whether the uncertainty measure is over- or under-confident on that prediction, respectively. Our procedure (reported as Algorithm 1) aims at reducing to zero the average difference between the expected and the observed frequency (which we refer to as \(Balance\)) as a proxy for minimizing the MA. In such a setting, the value of \(Balance\) depends upon the chosen scale factor \(c\); we denote as \(Balance_{c}\) such dependence, which is also reflected on the observed frequency \(\pi_{c}(\alpha)\). 
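The quantities just introduced, together with the \(Balance\) statistic, translate directly into code; in this minimal sketch (the helper names are ours), the interval half-width scales with the variance itself, following Eq. (13):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def observed_frequency(y, mu, var, alphas):
    """Fraction of points inside each estimated CI_alpha of Eq. (13)."""
    z = norm.ppf(0.5 + alphas / 2.0)  # half-width factor z_alpha
    inside = np.abs(y[None, :] - mu[None, :]) <= z[:, None] * var[None, :]
    return inside.mean(axis=1)

def miscalibration_area(y, mu, var, m=100):
    """Empirical MA of Eq. (15): average |pi(alpha) - alpha| over M levels."""
    alphas = np.linspace(0.0, 1.0, m)
    return np.mean(np.abs(observed_frequency(y, mu, var, alphas) - alphas))

def balance(c, y, mu, var, m=100):
    """Signed average difference between observed and expected frequencies."""
    alphas = np.linspace(0.0, 1.0, m)
    return np.mean(observed_frequency(y, mu, c * var, alphas) - alphas)

# Relaxed factor C_r with Balance(C_r) ~ 0, via the bisection described next:
# c_r = brentq(balance, c_low, c_high, args=(y, mu, var))
```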
Since \(Balance_{c}\) is a monotonically non-decreasing function of \(c\), one can empirically construct a suitable interval \([C_{\mathrm{low}},C_{\mathrm{high}}]\) around the scaling factor \(C\) obtained by solving problem (10), such that \(Balance_{C_{\mathrm{low}}}\) is negative, and \(Balance_{C_{\mathrm{high}}}\) is positive; then, our algorithm uses the bisection method to find a "relaxed" scaling factor \(C_{\mathrm{r}}\) such that \(Balance_{C_{\mathrm{r}}}=0\) (in practice, the objective is defined as \(|Balance_{C_{\mathrm{r}}}|<\tau\), where \(\tau\) is a given tolerance threshold). By analyzing the "path" of the scale factor between \(C\) and \(C_{\mathrm{r}}\), i.e., the corresponding values of NLL and MA, it is possible to explore the trade-off between these two measures. Although acting on the scale factor does not guarantee to reduce MA to arbitrarily small values (i.e., to approach perfect calibration), empirical results reported in Sect. 4 show that small enough MA values can nevertheless be obtained. Interestingly, our results also show that the corresponding increase of NLL can be very limited: therefore, although standard post-hoc techniques can be applied (e.g., Kuleshov calibration [15]) to enhance the calibration quality, the above result suggests that, when using injected dropout, a good trade-off between minimizing NLL and obtaining a well-calibrated uncertainty measure can also be achieved by first computing the dropout rate and the corresponding scale factor through problem (10), and then suitably adjusting the latter.

## 4 Experiments

We carried out two groups of experiments. We first compared the behavior of embedded and injected dropout on eight data sets related to regression problems taken from the UCI repository, using a multi-layer feed-forward network as the regression model. In the second group of experiments, we evaluated injected dropout on five benchmark data sets for crowd counting [17], which is a challenging computer vision application, using a state-of-the-art CNN architecture for this task. In both experiments, we implemented injected dropout by setting the dropout rate and scaling factor according to Eq. (10), using a validation set; in the experiments on crowd counting we also considered the relaxed scaling factor obtained as described in Sect. 3.2 on the same validation set. We evaluated both embedded and injected dropout, as a function of the dropout rate, in terms of the three metrics considered in Sect. 3: root mean squared error (RMSE), NLL, and MA. In particular, we evaluated NLL and MA using both the unscaled values \(\hat{\sigma}^{2}(\varphi)\) and the scaled ones \(\xi^{2}(\varphi)\); for reference, we also computed the "ideal" uncertainty measure of Eq. (7), defined as the error vector \(\epsilon^{2}\) computed on the _validation set_.

### Experiments on UCI Data Sets

We selected eight data sets related to several multivariate regression problems that have been used in the literature to evaluate uncertainty measures: Protein-Tertiary-Structure, Boston-Housing, Concrete, Energy, Kin8nm, Power-Plant, Wine-Quality-Red, and Yacht. We adopted the same set-up proposed by Hernandez-Lobato et al. [18], described in the following. We also open-sourced the code for the UCI experiment reproducibility at the following link [https://github.com/EmanueleLedda97/Dropout_Injection](https://github.com/EmanueleLedda97/Dropout_Injection).
Figure 3: Example of a calibration curve, which plots the observed frequency of instances belonging to the confidence interval \(CI_{\alpha}\) with probability \(\alpha\) as a function of the respective expected frequency. In this example, under-confidence and over-confidence regions are present (see the text for their meaning).

**Experimental set-up -** We first distinguished between the challenging Protein-Tertiary-Structure data set and the remaining simpler ones. We used a one-hidden-layer network with 50 hidden units for the latter and 100 for the former. We randomly split each data set into a training set made up of 90% of the original instances and a testing set containing the remaining 10%. This procedure was repeated five times for Protein-Tertiary-Structure and 20 times for the other data sets. In each split, we used 20% of the training partition as a validation set, which is crucial not only for proper hyper-parameter tuning but also - and primarily - for scaling the uncertainty measure. Differently from the original set-up [18], we used mini-batch stochastic gradient descent; we empirically tuned the batch size, the learning rate, and the number of epochs on the validation set. We considered 15 possible dropout rates within the interval \([0.001,0.5]\). We chose these rather extreme values to properly investigate the issue mentioned in Sect. 3.2, where we hypothesized a significant deterioration of the NLL of the unscaled measure for such values of dropout rate.

**Experimental results -** Results are reported in Fig. 4. We can first notice that the RMSE of embedded dropout tends to deteriorate as the dropout rate increases. This behavior suggests that the network's capacity is very low and that the regularization effect of dropout leads to under-fitting. This over-penalization makes injected dropout more accurate on average, in terms of average RMSE and standard deviation, although a different behavior can be observed on different data sets. Another phenomenon we can observe is that, for the smallest dropout rates, the RMSE is low, but the unscaled NLL is almost always severely affected. This trend reveals that - as we hypothesized in Sect. 3.2 - unscaled uncertainty measures suffer from the correlation between the dropout rate and the corresponding network perturbation's magnitude. When the dropout rate is extremely low, the final perturbation (and therefore the uncertainty measure) is so small that it completely disagrees with the quadratic prediction error, worsening the NLL. Fig. 5 shows that the scaled measures exhibit significantly different behavior instead. In this case, it is evident that even for extremely low dropout rates, the NLL does not deteriorate as much as the unscaled one, in line with our hypothesis in Sect. 3.2. Although the observed trends are always in favor of injected dropout, it may be possible that this behavior was due to the use of a multi-layer network with a hidden layer of 50 or 100 units, and - as we said before - there is clear evidence that using dropout at training time in this set-up leads to under-fitting the considered data sets. This deterioration is also visible by looking at the ideal NLL: the solid red line in Fig. 5 tends to be higher than the solid blue line, which means that the ideal uncertainty measure (which is equal to the prediction error, for each sample) is already penalized. Therefore, the observed phenomena are entirely in line with our theoretical considerations in Sect. 3.2.
Looking at the calibration error in Fig. 6, we can see that dropout injection can provide well-calibrated uncertainty measurements. Analyzing the trends of the calibration error in detail, we can notice the following behaviors: first of all, the unscaled measures are clearly more sensitive to changes in dropout rate compared to their scaled counterparts. However, although the scaled measures seem much more stable, there are cases where the unscaled measure of embedded dropout performs comparably to or even outperforms the scaled one; this phenomenon occurs more sporadically - and much less clearly - for injected dropout. This is a crucial issue if we consider that, for the corresponding dropout rates, a trade-off can be observed between RMSE, NLL, and MA; this suggests that embedded dropout inherently achieves a trade-off between these metrics. A further investigation of this issue is an interesting direction for future work.

Figure 4: RMSE and NLL as a function of dropout rate on UCI data sets, for injected (blue) and embedded (red) dropout. Two NLL plots are reported for each data set, related to different ranges of dropout rate: \([0.001,0.5]\) (top plot, same range as the RMSE plot) and \([0.05,0.5]\) (bottom plot), to better visualize the Y-axis scale.

Figure 5: NLL as a function of dropout rate on UCI data sets, for the scaled (dotted line) and ideal (solid line) uncertainty measures, both for injected (blue) and embedded (red) dropout.

Figure 6: Calibration error as a function of dropout rate on UCI data sets, using the unscaled (dotted line) and scaled (solid line) uncertainty measures, both for injected (blue) and embedded (red) dropout.

### Experiments on Crowd Counting

The experiments done on the UCI data sets provide useful insights about the behavior of injected vs. embedded dropout and evidence that the former can also be an effective approach to epistemic uncertainty quantification. In the next experiments, we focused on a much more challenging computer vision problem, i.e., crowd counting. State-of-the-art CNN models solve it by first estimating the crowd density map, which is an image-to-image regression problem.

**Experimental set-up -** We used three benchmark crowd counting data sets, UCSD [19], Mall [20], and PETS [21]. For the latter, we used three different partitions corresponding to different crowd scenes, named PETSview1, PETSview2, and PETSview3 [22]. We used the Multi-Column Neural Network architecture [23], a well-known state-of-the-art CNN for crowd counting. We implemented dropout injection right before each convolutional layer, except for the first one, following the suggestion of previous works on Monte Carlo dropout [13]. Unlike the previous experiments, we found an exponential increase in the RMSE using the same range of dropout rates. Therefore, we followed the suggestion by Loquercio et al. [8] to use very small dropout rates, which leads to a quasi-deterministic network. In particular, we considered ten dropout rates in the interval \([0.001,0.01]\).

**Experimental results -** Fig. 7 reports the RMSE, NLL (scaled and unscaled), and MA as a function of dropout rate. Since there is a significant difference in the magnitude of the NLL for the unscaled measures and the scaled ones, we reported them in two different plots. We can observe an increase in RMSE in three of five data sets, while for two of them (PETSview1 and PETSview3), the RMSE decreased.
This apparently different behavior is due to the very small dropout rates used in these experiments: we indeed observed that, for higher dropout rates, the RMSE started to increase also on PETSview1 and PETSview3. Regarding the NLL, a large difference between the scaled and the unscaled measures can be observed in this experiment as well: similarly to the UCI data sets, the smallest dropout rates lead to a poor unscaled measure, whereas the scaled measure attained considerably better results. It is also possible to see how the best dropout rate differs between the scaled and unscaled measures: for example, on UCSD, the minimum NLL is around \(\Phi=0.04\) for the scaled measure whereas, for the unscaled measure, it is around \(\Phi=0.01\). Similar considerations can be made for Mall. We can also notice that the scaled measure is much more stable than the unscaled one, even at the smallest dropout rates: this can be advantageous when using injected dropout, as it allows one to make minimal changes to the network while still obtaining suitable measures of uncertainty. Although re-scaling performs well in terms of the NLL, the same does not hold for the calibration error: in this case, suboptimal performance can be observed, with - in most cases - a miscalibration area greater than 0.3, which is an indicator of poor calibration. Therefore, in this case, a suitable trade-off between NLL and calibration error should be sought, e.g., by tuning the scaling factor obtained from Eq. (10) through the relaxation procedure presented in Sect. 3.2. Using a relaxed version of the scaling factor allows one to attain a trade-off between RMSE, NLL, and MA, instead of minimizing only the NLL, by manually tuning the scaling factor; Fig. 8 shows the values of the three analyzed metrics as a function of the dropout rate when using both \(C\) (cold-colored gradient) and \(C_r\) (warm-colored gradient). It is easy to notice that, when using \(C_r\), the NLL is not far from the values obtained using \(C\): this suggests that the relaxed measure behaves as a reasonable compromise between NLL and MA. Finally, the calibration curves in Fig. 9 clearly show that the proposed relaxation procedure effectively reduces the calibration error. In all cases, the relaxed curve is much closer to the axes' bisector, supporting our hypothesis in Sect. 3.2.

Figure 7: RMSE, NLL, and calibration error of the injected dropout as a function of dropout rate for the crowd counting data sets. Two plots are reported for NLL, corresponding to the unscaled (top plot) and scaled (bottom plot) uncertainty measure. The plots of scaled NLL and calibration error also report a comparison with the ideal scale (the former) and with the unscaled measure (the latter).

**Practical utility of uncertainty quantification by dropout injection in video surveillance applications** - We first show how the operator of a video surveillance system can benefit from updating its crowd density estimation software by integrating an uncertainty estimate through dropout injection (i.e., without requiring further training), and then how uncertainty evaluation could be automatically exploited to improve the accuracy of the estimated density maps and of the corresponding people count.
Crowd counting by density estimation uses density maps as ground truth: the standard method for obtaining such maps starts by annotating each image location where a person's head is present; the density map is then obtained by adding a kernel (e.g., a Gaussian) with unit area centered on each head location. The people count therefore corresponds to the sum of the density map over all pixels. In this context, a crowd counting predictor outputs an estimated density map reconstructing the original one. The image-to-image nature of this task, similar to what happens in Monocular Depth Estimation and Semantic Segmentation, implies that the uncertainty estimate also consists of a 2D array, associating a level of uncertainty (i.e., a variance) to each pixel-wise predicted density. In turn, the pixel-wise uncertainty on the predicted density allows the construction of a confidence interval on the people count, as seen in the examples in Fig. 10, similarly to [17], but in our case in a post hoc fashion. Having a notion of the confidence level associated with system predictions makes video surveillance operators aware of their reliability, allowing them to make more informed decisions and increasing their trust in the prediction system compared to a point estimator.

Figure 8: Trade-off between RMSE, NLL, and MA of the injected dropout, as a function of dropout rate, on the crowd counting data sets, for the optimal scaling (warm color gradient) and the relaxed one (cold color gradient).

Figure 9: Calibration curves of injected dropout on the crowd counting data sets, attained using the unscaled measure (blue), the optimal scaling factor \(C\) (red), and the relaxation \(C_r\) (orange). For reference, the perfect calibration line is also shown (dotted line).

An in-depth analysis of our experimental results on crowd counting benchmarks reveals that uncertainty evaluation could also be helpful to detect some inaccuracies in the estimated density maps that correspond to "false positive" pedestrian localizations, i.e., image regions where the predicted density is not zero (thus contributing to the pedestrian count) despite the absence of pedestrians. As seen from the examples in Fig. 10, we observed that false positive localizations are often characterized by isolated regions with relatively low estimated density and high uncertainty. This suggests that they could be detected and automatically corrected (by setting the corresponding values in the density map to zero and disregarding their contribution to the people count), thus improving the accuracy of both the density map and the people count. As an example, a simple policy for the automatic rejection of such regions may consist in rejecting isolated regions with a high pixel-level uncertainty/prediction ratio, but other solutions are conceivable. Regardless of the policy, it is intriguing to see that such automatic rejection mechanisms are theoretically possible and practically applicable by integrating dropout injection into the toolbox of video surveillance software. A further investigation of this issue is an interesting topic for future work.
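To make the two uses of pixel-wise uncertainty described above concrete, here is a minimal NumPy sketch of (i) a confidence interval on the people count derived from a predicted density map and its pixel-wise variance, and (ii) the simple rejection policy mentioned above. The independence assumption across pixels, the connected-component notion of "isolated region", and the threshold value are our own simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage


def count_confidence_interval(density: np.ndarray, variance: np.ndarray,
                              z: float = 1.96):
    """People count (sum of the density map) with a Gaussian confidence
    interval, assuming - simplistically - independent pixel-wise
    predictive distributions."""
    count = density.sum()
    std = np.sqrt(variance.sum())
    return count, (count - z * std, count + z * std)


def reject_false_positives(density: np.ndarray, variance: np.ndarray,
                           ratio_thr: float = 2.0) -> np.ndarray:
    """Zero out isolated regions whose mean uncertainty/prediction ratio
    exceeds ratio_thr (a hypothetical instance of the policy above)."""
    cleaned = density.copy()
    labels, n_regions = ndimage.label(density > 0)
    for i in range(1, n_regions + 1):
        mask = labels == i
        ratio = np.sqrt(variance[mask]).mean() / (density[mask].mean() + 1e-12)
        if ratio > ratio_thr:
            cleaned[mask] = 0.0
    return cleaned
```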
## 5 Conclusions

This work provides the first investigation of injected dropout for epistemic uncertainty evaluation, in order to assess if and how this method can be a practical alternative to the current use of embedded dropout, which needs time-consuming uncertainty-aware training (i.e., it should be designed at training time).

Figure 10: Two examples of "false positive" detections enabled by uncertainty estimation on crowd-counting images (top: UCSD, bottom: PETSview2). From left to right: the original frame, ground truth, predicted density map, and uncertainty map. Each density map is normalized: darker pixels correspond to lower values (different gradient palettes are used for the ground-truth/predicted maps and for the uncertainty maps). "False positive" regions, corresponding to low predicted density and high uncertainty, are highlighted with red boxes.

Our analysis focused on regression problems and encompassed two different dimensions of the behavior of injected dropout, namely prediction accuracy, measured in terms of RMSE, and quality of the uncertainty measure, evaluated in terms of NLL and MA (i.e., calibration). We analyzed, theoretically and empirically, how the prediction error and the quality of the uncertainty measure behave as a function of the dropout rate, and proposed a novel injection method grounded on the results of our analysis. Experimental results on several benchmark data sets, including a challenging computer vision task, provided clear evidence that dropout injection can be an effective alternative to embedded dropout, also considering that it does not require a computationally costly re-training procedure for already-trained networks. Our results also confirm that the effectiveness of injected dropout critically depends on a suitable re-scaling of its uncertainty measure. We point out two limitations of our analysis that can be addressed in future work: (i) we considered only regression problems; an obvious extension is to analyze dropout injection for classification problems; (ii) we considered the same dropout rate across different network layers; however, our analysis can easily be extended to the case of different dropout rates across layers, which is known to be potentially beneficial to Monte Carlo dropout [13]. Finally, another interesting issue for future work is to investigate the effectiveness of injected dropout for detecting out-of-distribution samples, which is potentially useful under domain shifts.

## Acknowledgments

This work was partially supported by the projects: "Law Enforcement agencies human factor methods and Toolkit for the Security and protection of CROWDs in mass gatherings" (LETSCROWD), EU Horizon 2020 programme, grant agreement No. 740466; "IMaging MAnagement Guidelines and Informatics Network for law enforcement Agencies" (IMMAGINA), European Space Agency, ARTES Integrated Applications Promotion Programme, contract No. 4000133110/20/NL/AF; "Science and engineering Of Security of Artificial Intelligence" (S.O.S. AI) included in the Spoke 3 - Attacks and Defences of the Research and Innovation Program PE00000014, "SEcurity and RIghts in the CyberSpace (SERICS)", under the National Recovery and Resilience Plan, Mission 4 "Education and Research" - Component 2 "From Research to Enterprise" - Investment 1.3, funded by the European Union - NextGenerationEU. Emanuele Ledda is affiliated with the Italian National PhD in Artificial Intelligence, Sapienza University of Rome.
He also acknowledges the cooperation with and the support from the Pattern Recognition and Applications Lab of the University of Cagliari.
2305.13656
Link Prediction without Graph Neural Networks
Link prediction, which consists of predicting edges based on graph features, is a fundamental task in many graph applications. As for several related problems, Graph Neural Networks (GNNs), which are based on an attribute-centric message-passing paradigm, have become the predominant framework for link prediction. GNNs have consistently outperformed traditional topology-based heuristics, but what contributes to their performance? Are there simpler approaches that achieve comparable or better results? To answer these questions, we first identify important limitations in how GNN-based link prediction methods handle the intrinsic class imbalance of the problem -- due to the graph sparsity -- in their training and evaluation. Moreover, we propose Gelato, a novel topology-centric framework that applies a topological heuristic to a graph enhanced by attribute information via graph learning. Our model is trained end-to-end with an N-pair loss on an unbiased training set to address class imbalance. Experiments show that Gelato is 145% more accurate, trains 11 times faster, infers 6,000 times faster, and has less than half of the trainable parameters compared to state-of-the-art GNNs for link prediction.
Zexi Huang, Mert Kosan, Arlei Silva, Ambuj Singh
2023-05-23T03:59:21Z
http://arxiv.org/abs/2305.13656v1
# Link Prediction without Graph Neural Networks

###### Abstract.

Link prediction, which consists of predicting edges based on graph features, is a fundamental task in many graph applications. As for several related problems, Graph Neural Networks (GNNs), which are based on an attribute-centric message-passing paradigm, have become the predominant framework for link prediction. GNNs have consistently outperformed traditional topology-based heuristics, but what contributes to their performance? Are there simpler approaches that achieve comparable or better results? To answer these questions, we first identify important limitations in how GNN-based link prediction methods handle the intrinsic class imbalance of the problem--due to the graph sparsity--in their training and evaluation. Moreover, we propose _Gelato_, a novel topology-centric framework that applies a topological heuristic to a graph enhanced by attribute information via graph learning. Our model is trained end-to-end with an N-pair loss on an unbiased training set to address class imbalance. Experiments show that Gelato is 145% more accurate, trains 11 times faster, infers 6,000 times faster, and has less than half of the trainable parameters compared to state-of-the-art GNNs for link prediction.

Link Prediction; Graph Neural Networks; Graph Learning; Topological Heuristics

## 1. Introduction

Machine learning on graphs supports various structured-data applications including social network analysis [38; 60; 73], recommender systems [30; 76; 50], natural language processing [62; 85; 81], and physics modeling [15; 29; 64]. Among the graph-related tasks, one could argue that link prediction [43; 46] is the most fundamental one. This is because link prediction not only has many concrete applications [37; 41; 58] but can also be considered an (implicit or explicit) step of the graph-based machine learning pipeline [45; 47; 78]--as the observed graph is usually noisy and/or incomplete. In recent years, Graph Neural Networks (GNNs) [24; 35; 74] have emerged as the predominant paradigm for machine learning on graphs. Similar to their great success in node classification [36; 79; 95] and graph classification [51; 86; 90], GNNs have been shown to achieve state-of-the-art link prediction performance [58; 89; 88]. Compared to classical approaches that rely on expert-designed heuristics to extract topological information (e.g., Common Neighbors [52], Adamic-Adar [1], Preferential Attachment [4]), GNNs have the potential to discover new heuristics via supervised learning and the natural advantage of incorporating node attributes. However, there is little understanding of what factors contribute to the success of GNNs in link prediction, and whether simpler alternatives can achieve comparable performance--as recently found for node classification [26]. GNN-based methods approach link prediction as a binary classification problem. Yet different from other classification problems, link prediction deals with extremely class-imbalanced data due to the sparsity of real-world graphs. We argue that class imbalance should be accounted for in both training and evaluation of link prediction. In addition, GNNs combine topological and attribute information by learning topology-smoothened attributes (embeddings) via message-passing [39].
This attribute-centric mechanism has been proven effective for tasks _on_ the topology such as node classification [44], but link prediction is a task _for_ the topology, which naturally motivates topology-centric paradigms (see Figure 1). The goal of this paper is to address the key issues raised above. We first show that the evaluation of GNN-based link prediction pictures an overly optimistic view of model performance compared to the (more realistic) imbalanced setting. Class imbalance also prevents the generalization of these models due to bias in their training. Instead, we propose the use of the N-pair loss with an unbiased set of training edges to account for class imbalance. Moreover, we present _Gelato_, a novel framework that combines topological and attribute information for link prediction. As a simpler alternative to GNNs, our model applies topology-centric graph learning to incorporate node attributes directly into the graph structure, which is given as input to a topological heuristic, Autocovariance, for link prediction. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art GNN-based methods in both accuracy and scalability. To summarize, our contributions are:

* We scrutinize the training and evaluation of supervised link prediction methods and identify their limitations in handling class imbalance.
* We propose a simple, effective, and efficient framework to combine topological and attribute information for link prediction without using GNNs.
* We introduce an N-pair link prediction loss combined with an unbiased set of training edges that we show to be more effective at addressing class imbalance.

## 2. Limitations in Supervised Link Prediction Evaluation and Training

Supervised link prediction is often formulated as a binary classification problem, where the positive (or negative) class includes node pairs connected (or not connected) by a link. A key difference between link prediction and typical classification problems (e.g., node classification) is that the two classes in link prediction are _extremely_ imbalanced, since most real-world graphs of interest are sparse (see Table 1). However, we find that class imbalance is not properly addressed in both evaluation and training of existing supervised link prediction approaches, as discussed below.

**Link prediction evaluation.** Area Under the Receiver Operating Characteristic Curve (AUC) and Average Precision (AP) are the two most popular evaluation metrics for supervised link prediction [8, 11, 13, 34, 55, 89, 91, 97]. We first argue that, as in other imbalanced classification problems [16, 63], AUC is not an effective evaluation metric for link prediction as it is biased towards the majority class (non-edges). On the other hand, AP and other rank-based metrics such as Hits@\(k\)--used in Open Graph Benchmark (OGB) [25]--are effective for imbalanced classification _if evaluated on a test set that follows the original class distribution_. Yet, existing link prediction methods [8, 34, 55, 89, 97] compute AP on a test set that contains all positive test pairs and only an equal number of random negative pairs. Similarly, OGB computes Hits@\(k\) against a very small subset of random negative pairs. We term these approaches _biased testing_ as they highly overestimate the ratio of positive pairs in the graph.
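To see how strongly the choice of negatives affects the reported numbers, here is a small self-contained sketch (ours, not the authors' evaluation code) that scores a synthetic sparse graph with a deliberately weak ranker and compares AP under biased testing (an equal number of sampled negatives) against unbiased testing (all negatives), in the spirit of the worked example given shortly.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_pos, n_neg = 1_000, 1_000_000          # sparse graph: few edges, many non-edges

# A deliberately bad scorer: 1% of the negatives outscore every true edge.
pos_scores = rng.uniform(0.4, 0.6, n_pos)
neg_scores = np.concatenate([
    rng.uniform(0.7, 1.0, n_neg // 100),  # confident false positives
    rng.uniform(0.0, 0.3, n_neg - n_neg // 100),
])

def ap(pos: np.ndarray, neg: np.ndarray) -> float:
    labels = np.concatenate([np.ones_like(pos), np.zeros_like(neg)])
    scores = np.concatenate([pos, neg])
    return average_precision_score(labels, scores)

biased_neg = rng.choice(neg_scores, size=n_pos, replace=False)  # equal negatives
print(f"biased AP:   {ap(pos_scores, biased_neg):.2f}")  # high: model looks strong
print(f"unbiased AP: {ap(pos_scores, neg_scores):.2f}")  # collapses on all negatives
```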
Evaluation metrics based on these biased test sets provide an overly optimistic measurement of the actual performance in _unbiased testing_, where every negative pair is included in the test set. In fact, in real applications where test positive edges are not known a priori, it is impossible to construct those biased test sets to begin with. Below, we also present an illustrative example of the misleading performance evaluation based on _biased testing_. _Example:_ Consider a graph with 10k nodes, 100k edges, and 99.9M disconnected (or negative) pairs. A (bad) model that ranks 1M false positives higher than the true edges achieves 0.99 AUC and 0.95 in AP under _biased testing_ with an equal number of negative samples. (Detailed computation in Appendix A.) The above discussion motivates a more representative evaluation setting for supervised link prediction. Specifically, we argue for the use of rank-based evaluation metrics--AP, Precision@\(k\) [43], and Hits@\(k\) [5]--with _unbiased testing_, where positive edges are ranked against all negative pairs. These metrics have been widely applied in related problems, such as unsupervised link prediction [27, 43, 53, 92], knowledge graph completion [5, 72, 82], and information retrieval [66], where class imbalance is also significant. In our experiments, we will illustrate how these evaluation metrics combined with _unbiased testing_ provide a drastically different and more informative performance evaluation compared to existing approaches.

**Link prediction training.** Following the formulation of supervised link prediction as binary classification, most existing models adopt the binary cross entropy loss to optimize their parameters [11, 13, 34, 88, 89, 91, 97]. To deal with class imbalance, these approaches downsample the negative pairs to match the number of positive pairs in the training set (_biased training_). We highlight two drawbacks of _biased training_: (1) it induces the model to overestimate the probability of positive pairs, and (2) it discards potentially useful evidence from most negative pairs. Notice that the first drawback is often hidden by _biased testing_. Instead, this paper proposes the use of _unbiased training_, where the ratio of negative pairs in the training set is the same as in the input graph. To train our model in this highly imbalanced setting, we apply the N-pair loss for link prediction instead of the cross entropy loss (Section 3.3).

## 3. Method

**Notation and problem.** Consider an attributed graph \(G=(V,E,X)\), where \(V\) is the set of \(n\) nodes, \(E\) is the set of \(m\) edges (links), and \(X=(x_{1},...,x_{n})^{T}\in\mathbb{R}^{n\times r}\) collects \(r\)-dimensional node attributes. The topological (structural) information of the graph is represented by its adjacency matrix \(A\in\mathbb{R}^{n\times n}\), with \(A_{uv}>0\) if an edge of weight \(A_{uv}\) connects nodes \(u\) and \(v\), and \(A_{uv}=0\) otherwise. The (weighted) degree of node \(u\) is given as \(d_{u}=\sum_{v}A_{uv}\) and the corresponding degree vector (matrix) is denoted as \(d\in\mathbb{R}^{n}\) (\(D\in\mathbb{R}^{n\times n}\)). The volume of the graph is \(\operatorname{vol}(G)=\sum_{u}d_{u}\).

Figure 1. GNN incorporates topology into attributes via message-passing, which is effective for tasks _on_ the topology. Link prediction, however, is a task _for_ the topology, which motivates the design of Gelato--a novel framework that leverages graph learning to incorporate attributes into topology.
Our goal is to infer missing links in \(G\) based on its topological and attribute information, \(A\) and \(X\).

**Model overview.** Figure 2 provides an overview of our link prediction model. It starts with a topology-centric graph learning phase that incorporates node attribute information directly into the graph structure via a Multi-layer Perceptron (MLP). We then apply a topological heuristic, Autocovariance (AC), to the attribute-enhanced graph to obtain a pairwise score matrix. Node pairs with the highest scores are predicted as (positive) links. The scores for training pairs are collected to compute an N-pair loss. Finally, the loss is used to train the MLP parameters in an end-to-end manner. We named our model Gelato (Graph enhancement for link prediction with autocovariance). Gelato represents a paradigm shift in supervised link prediction by combining a graph encoding of attributes with a topological heuristic instead of relying on increasingly popular GNN-based embeddings.

### Graph learning

The goal of graph learning is to generate an enhanced graph that incorporates node attribute information into the topology. This can be considered the "dual" operation of message-passing in GNNs, which incorporates topological information into attributes (embeddings). We argue that graph learning is the more suitable scheme to combine attributes and topology for link prediction, since link prediction is a task for the topology itself (as opposed to other applications such as node classification). Specifically, our first step of graph learning is to augment the original edges with a set of node pairs based on their (untrained) attribute similarity (i.e., adding an \(\epsilon\)-neighborhood graph): \[\widetilde{E}=E\cup\{(u,v)\mid s(x_{u},x_{v})>\epsilon_{\eta}\} \tag{1}\] where \(s(\cdot)\) can be any similarity function (we use cosine in our experiments) and \(\epsilon_{\eta}\) is a threshold that determines the number of added pairs as a ratio \(\eta\) of the original number of edges \(m\). A simple MLP then maps the pairwise node attributes into a trained edge weight for every edge in \(\widetilde{E}\): \[w_{uv}=\operatorname{MLP}([x_{u};x_{v}];\theta) \tag{2}\] where \([x_{u};x_{v}]\) denotes the concatenation of \(x_{u}\) and \(x_{v}\) and \(\theta\) contains the trainable parameters. For undirected graphs, we instead use the following permutation-invariant operator (Garno et al., 2016): \[w_{uv}=\operatorname{MLP}([x_{u}+x_{v};|x_{u}-x_{v}|];\theta) \tag{3}\] The final edge weights of the enhanced graph are a weighted combination of the topological weights, the untrained weights, and the trained weights: \[\widetilde{A}_{uv}=\alpha A_{uv}+(1-\alpha)(\beta w_{uv}+(1-\beta)s(x_{u},x_{v})) \tag{4}\] where \(\alpha\) and \(\beta\) are hyperparameters. The enhanced adjacency matrix \(\widetilde{A}\) is then fed into a topological heuristic for link prediction, introduced in the next section. Note that the MLP is not trained directly to predict the links, but instead trained end-to-end to enhance the input graph given to the topological heuristic. Also note that the MLP can easily be replaced by a more powerful model such as a GNN, but the goal of this paper is to demonstrate the general effectiveness of our framework, and we will show that even a simple MLP leads to significant improvement over the base heuristic.
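As a concrete rendering of this section - and of the Autocovariance heuristic of the next one, whose formal definition was lost in extraction - the following is a minimal PyTorch-style sketch. The class and function names, the tensor layout, and the Autocovariance formula (taken from the random-walk embedding literature and assumed to match Gelato's) are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeWeightMLP(nn.Module):
    """Maps the symmetrized pair representation [x_u + x_v ; |x_u - x_v|]
    of Eq. (3) to a scalar edge weight."""

    def __init__(self, n_attrs: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_attrs, hidden), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(hidden, 1),
        )

    def forward(self, x_u: torch.Tensor, x_v: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([x_u + x_v, (x_u - x_v).abs()], dim=-1)
        return self.net(pair).squeeze(-1)


def enhanced_weights(x, edge_index, topo_weights, mlp, alpha, beta):
    """Eq. (4): combine the topological weight A_uv, the trained weight
    w_uv, and the untrained cosine similarity s(x_u, x_v) for each node
    pair in edge_index (shape (2, |E~|); topo_weights is zero for pairs
    added by Eq. (1))."""
    x_u, x_v = x[edge_index[0]], x[edge_index[1]]
    w = mlp(x_u, x_v)
    s = F.cosine_similarity(x_u, x_v, dim=-1)
    return alpha * topo_weights + (1 - alpha) * (beta * w + (1 - beta) * s)


def autocovariance(A_tilde: torch.Tensor, t: int) -> torch.Tensor:
    """Autocovariance similarity at Markov time t, using the standard
    definition R(t) = D M^t / vol - d d^T / vol^2 with M = D^{-1} A.
    Dense ops are used for clarity; the paper computes only selected
    entries via sparse matrix multiplication."""
    d = A_tilde.sum(dim=1)
    vol = d.sum()
    M = A_tilde / d.unsqueeze(1)  # row-stochastic; assumes no isolated nodes
    R = torch.diag(d) @ torch.linalg.matrix_power(M, t) / vol
    return R - torch.outer(d, d) / vol**2
```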
### Topological heuristic

Assuming that the learned adjacency matrix \(\widetilde{A}\) incorporates structural and attribute information, Gelato applies a topological heuristic to \(\widetilde{A}\). Specifically, we adopt Autocovariance, which has been shown to achieve state-of-the-art link prediction results for non-attributed graphs. Moreover, such heuristics implicitly use information from all edges and non-edges. Also, _unbiased training_ can be combined with adversarial negative sampling methods (Kumar et al., 2017; Zhang et al., 2017) from the knowledge graph embedding literature to increase the quality of contrasted negative pairs.

**Complexity analysis.** The only trainable component in our model is the graph learning MLP with \(O(rh+lh^{2})\) parameters--where \(r\) is the number of node features, \(l\) is the number of hidden layers, and \(h\) is the number of neurons per layer. Notice that the number of parameters is independent of the graph size. Constructing the \(\epsilon\)-neighborhood graph based on cosine similarity can be done efficiently using hashing and pruning (Kumar et al., 2017; Zhang et al., 2017). Computing the enhanced adjacency matrix with the MLP takes \(O((1+\eta)mr)\) time per epoch--where \(m=|E|\) and \(\eta\) is the ratio of edges added to \(E\) from the \(\epsilon\)-neighborhood graph. We apply sparse matrix multiplication to compute \(k\) entries of the \(t\)-step AC in \(O(\max(k,(1+\eta)mt))\) time. Note that, unlike recent GNN-based approaches (Zhu et al., 2017; Zhang et al., 2017; Zhang et al., 2017) that generate distinctive subgraphs for each link (e.g., via the labeling trick), enclosing subgraphs for links in Gelato share the same information (i.e., learned edge weights), which significantly reduces the computational cost. Our experiments will demonstrate Gelato's efficiency in training and inference.

## 4. Experiments

We provide empirical evidence for our claims regarding supervised link prediction and demonstrate the accuracy and efficiency of Gelato. Our implementation is anonymously available at [https://anonymous.4open.science/r/Gelato/](https://anonymous.4open.science/r/Gelato/).

### Experiment settings

**Datasets.** Our method is evaluated on five attributed graphs commonly used as link prediction benchmarks (Kumar et al., 2017; Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017). Table 1 shows dataset statistics--see Appendix B for dataset details.

**Baselines.** For GNN-based link prediction, we include six state-of-the-art methods published in the past two years: LGCN (Kumar et al., 2017), TLC-GNN (Kumar et al., 2017), Neo-GNN (Kumar et al., 2017), NBFNet (Kumar et al., 2017), BScNets (Kumar et al., 2017), and WalkPool (Zhang et al., 2017), as well as three pioneering works--GAE (Kumar et al., 2017), SEAL (Kumar et al., 2017), and HGCN (Kumar et al., 2017). For topological link prediction heuristics, we consider Common Neighbors (CN) (Zhu et al., 2017), Adamic-Adar (AA) (Beng et al., 2016), Resource Allocation (RA) (Kumar et al., 2017), and Autocovariance (AC) (Kumar et al., 2017)--the base heuristic in our model. To demonstrate the superiority of the proposed end-to-end model, we also include as baselines an MLP trained directly for link prediction, the cosine similarity (Cos) between node attributes, and AC on top of the respective weighted/augmented graphs (i.e., two-stage approaches where the MLP is trained separately for link prediction rather than trained end-to-end).
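Since the paper's description of the N-pair loss (Section 3.3) was partially lost in extraction above, the following is our hedged reconstruction of how such a loss could be computed over the heuristic's scores; the function name and batching are illustrative, and the exact objective may differ from the authors' implementation.

```python
import torch


def n_pair_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Contrast each positive pair against a shared pool of negatives:
        L = -mean_i log( exp(s_i) / (exp(s_i) + sum_j exp(s_j^-)) ).
    Unlike cross entropy, negatives that score close to the positives
    (likely false positives) dominate the gradient."""
    neg_term = torch.logsumexp(neg_scores, dim=0)
    return (-pos_scores + torch.logaddexp(pos_scores, neg_term)).mean()


# Usage sketch: scores come from the Autocovariance matrix of the
# enhanced graph (see the previous code block), indexed by node pairs.
# R = autocovariance(A_tilde, t=3)
# loss = n_pair_loss(R[pos_u, pos_v], R[neg_u, neg_v])
```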
**Hyperparameters.** For Gelato, we tune the proportion of added edges \(\eta\) from {0.0, 0.25, 0.5, 0.75, 1.0}, the topological weight \(\alpha\) from {0.0, 0.25, 0.5, 0.75}, and the trained weight \(\beta\) from {0.25, 0.5, 0.75, 1.0}, with a sensitivity analysis included in Section 4.6. All other settings are fixed across datasets: MLP with one hidden layer of 128 neurons, AC scaling parameter \(t=3\), Adam optimizer (Kingma et al., 2014) with a learning rate of 0.001, a dropout rate of 0.5, and _unbiased training_ without downsampling. For baselines, we use the same hyperparameters as in their papers.

**Data splits for unbiased training and unbiased testing.** Following (Kumar et al., 2017; Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017), we adopt 85%/5%/10% ratios for training, validation, and testing. Specifically, for _unbiased training_ and _testing_, we first randomly (with seed 0) divide the (positive) edges \(E\) of the original graph into \(E^{+}_{train}\), \(E^{+}_{valid}\), and \(E^{+}_{test}\) for training, validation, and testing based on the selected ratios. Then, we set the negative pairs in these three sets as (1) \(E^{-}_{train}=E^{-}\cup E^{+}_{valid}\cup E^{+}_{test}\), (2) \(E^{-}_{valid}=E^{-}\cup E^{+}_{test}\), and (3) \(E^{-}_{test}=E^{-}\), where \(E^{-}\) is the set of all negative pairs (excluding self-loops) in the original graph. Notice that the validation and testing _positive_ edges are included in the _negative_ training set, and the testing _positive_ edges are included in the _negative_ validation set. These inclusions simulate the real-world scenario where the testing edges (and the validation edges) are unobserved during validation (training).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \#Nodes & \#Edges & \#Attrs & Avg. degree & Density \\ \hline Cora & 2,708 & 5,278 & 1,433 & 3.90 & 0.14\% \\ CiteSeer & 3,327 & 4,552 & 3,703 & 2.74 & 0.08\% \\ PubMed & 19,717 & 44,324 & 500 & 4.50 & 0.02\% \\ Photo & 7,650 & 119,081 & 745 & 31.13 & 0.41\% \\ Computers & 13,752 & 245,861 & 767 & 35.76 & 0.26\% \\ \hline \hline \end{tabular} \end{table} Table 1. A summary of dataset statistics.

Figure 2. Gelato applies graph learning to incorporate attribute information into the topology via an MLP. The learned graph is given to a topological heuristic that predicts edges between node pairs with high Autocovariance similarity. The parameters of the MLP are optimized end-to-end using the N-pair loss. Experiments show that Gelato outperforms state-of-the-art GNN-based link prediction methods.

**Evaluation metrics.** We adopt Precision@\(k\) (\(prec@k\))--the proportion of positive edges among the top \(k\) of all testing pairs, Hits@\(k\) (\(hits@k\))--the ratio of positive edges individually ranked above the \(k\)th place against all negative pairs, and Average Precision (AP)--the area under the precision-recall curve--as evaluation metrics. We report results from 10 runs with random seeds ranging from 1 to 10. More detailed experiment settings can be found in Appendix C.

### Link prediction performance

Table 2 summarizes the link prediction performance in terms of the mean and standard deviation of Average Precision (AP) for all methods. Figure 3 and Figure 4 show results based on \(prec@k\) (\(k\) as a ratio of test edges) and \(hits@k\) (\(k\) as the rank) for varying \(k\).
First, we want to highlight the drastically different performance of GNN-based methods compared to the results reported in the original papers [13; 55; 81; 88; 91; 97] and reproduced in Appendix D. While they achieve AUC/AP scores often higher than 90% under _biased testing_, here we see most of them underperform even the simplest topological heuristics, such as Common Neighbors, under _unbiased testing_. These results support our arguments from Section 2 that evaluation metrics based on _biased testing_ can produce misleading results compared to _unbiased testing_. The overall best performing GNN model is Neo-GNN, which directly generalizes the pairwise topological heuristics. Yet it still consistently underperforms AC, a random-walk based heuristic that needs neither node attributes nor supervision/training. We then look at two-stage combinations of AC and models for attribute information. We observe noticeable performance gains from combining attribute cosine similarity and AC on Cora and CiteSeer, but not on the other datasets. Other two-stage approaches achieve similar or worse performance. Finally, Gelato significantly outperforms the best GNN-based model with an average relative gain of **145.2%** and AC with a gain of **52.6%** in terms of AP--similar results were obtained for \(prec@k\) and \(hits@k\). This validates our hypothesis that a simple MLP can effectively incorporate node attribute information into the topology when our model is trained end-to-end. Next, we will provide insights behind these improvements and demonstrate the efficiency of Gelato in training and inference.

### Visualizing Gelato predictions

To better understand the performance of Gelato, we visualize its learned graph, prediction scores, and predicted edges in comparison with AC and the best GNN-based baseline (Neo-GNN) in Figure 5. Figure 5(a) shows the input adjacency matrix for the subgraph of Photo containing the top 160 nodes belonging to the first class, sorted in decreasing order of their (within-class) degree. Figure 5(b) illustrates the enhanced adjacency matrix learned by Gelato's MLP. Comparing it with the Euclidean distance between node attributes in Figure 5(c), we see that our enhanced adjacency matrix effectively incorporates attribute information. More specifically, we notice the down-weighting of the edges connecting the high-degree nodes with larger attribute distances (matrix entries 0-40, and especially 0-10) and the up-weighting of those connecting medium-degree nodes with smaller attribute distances (40-80). In Figure 5(d) and Figure 5(e), we see the corresponding AC scores based on the input and the enhanced adjacency matrix (Gelato). Since AC captures the degree distribution of nodes [27], the vanilla AC scores greatly favor high-degree nodes (0-40). By comparison, thanks to the down-weighting, Gelato assigns relatively lower scores to edges connecting them to low-degree nodes (60-160), while still capturing the edge density between high-degree nodes (0-40). The immediate result of this is the significantly improved precision shown in Figure 5(h) compared to Figure 5(g): Gelato covers as many positive edges in the high-degree region as AC while making far fewer wrong predictions for connections involving low-degree nodes. The prediction probabilities and predicted edges for Neo-GNN are shown in Figure 5(f) and Figure 5(i), respectively.
Note that while it predicts edges connecting high-degree node pairs (0-40) with high probability, similar values are assigned to many low-degree pairs (80-160) as well. Most of those predictions are wrong, both in the low-degree region of Figure 5(i) and also in other low-degree parts of the graph that are not shown here. This analysis supports our claim that Gelato is more effective at combining node attributes and the graph topology, enabling state-of-the-art link prediction.

### Loss and training setting

In this section, we demonstrate the advantages of the proposed N-pair loss and _unbiased training_ for supervised link prediction. Figure 6 shows the training and validation losses and \(prec@100\%\) (our validation metric) when training Gelato with the cross entropy (CE) and N-pair (NP) losses under _biased_ and _unbiased training_, respectively. The final test AP scores are shown in the titles. In the first column (CE with _biased training_), different from the training, both the loss and the precision for (unbiased) validation decrease. This leads to even worse test performance than the untrained base model (i.e., AC). In other words, albeit being the most popular loss function for supervised link prediction, CE with _biased training_ does not generalize to _unbiased testing_. On the contrary, as shown in the second column, the proposed NP loss with _biased training_--equivalent to the pairwise logistic loss [7]--is a more effective proxy for _unbiased testing_ metrics. The right two columns show results with _unbiased training_, which is better for CE as more negative pairs are present in the training set (with the original class ratio). On the other hand, NP is more consistent with unbiased evaluation metrics, leading to 8.5% better performance. This is because, unlike CE, which optimizes positive and negative pairs independently, NP contrasts negative pairs against positive ones, giving higher importance to negative pairs that are more likely to be false positives.

### Ablation study

We have demonstrated the superiority of Gelato over its individual components and two-stage approaches in Table 2 and analyzed the effect of losses and training settings in Section 4.4. Here, we collect the results with the same hyperparameter setting as Gelato and present a comprehensive ablation study in Table 3.
Specifically, _Gelato-MLP_ (_Gelato-AC_) represents Gelato without the MLP (Autocovariance) component, i.e., only using Autocovariance (the MLP) for link prediction. _Gelato-NP_ (_Gelato-UT_) replaces the proposed N-pair loss (_unbiased training_) with the cross entropy loss (_biased training_) applied by the baselines. Finally, _Gelato-NP+UT_ replaces both the loss and the training setting. We observe that removing either the MLP or Autocovariance leads to inferior performance, as the corresponding attribute or topology information would be missing. Further, to address the class imbalance problem of link prediction, both the N-pair loss and _unbiased training_ are crucial for the effective training of Gelato. While all supervised baselines originally adopt _biased training_, we also implement the same _unbiased training_ (and N-pair loss) as Gelato for those that are scalable in Appendix E--results are consistent with the ones discussed in Section 4.2.

\begin{table} \begin{tabular}{l l c c c c c} \hline \hline & & Cora & CiteSeer & PubMed & Photo & Computers \\ \hline \multirow{9}{*}{GNN} & GAE & 0.27 \(\pm\) 0.02 & 0.66 \(\pm\) 0.11 & 0.26 \(\pm\) 0.03 & 0.28 \(\pm\) 0.02 & 0.30 \(\pm\) 0.02 \\ & SEAL & 1.89 \(\pm\) 0.74 & 0.91 \(\pm\) 0.66 & *** & 10.49 \(\pm\) 0.86 & 6.84\({}^{*}\) \\ & HGCN & 0.82 \(\pm\) 0.03 & 0.74 \(\pm\) 0.10 & 0.35 \(\pm\) 0.01 & 2.11 \(\pm\) 0.10 & 2.30 \(\pm\) 0.14 \\ & LGCN & 1.14 \(\pm\) 0.04 & 0.86 \(\pm\) 0.09 & 0.44 \(\pm\) 0.01 & 3.53 \(\pm\) 0.05 & 1.96 \(\pm\) 0.03 \\ & TLC-GNN & 0.29 \(\pm\) 0.09 & 0.35 \(\pm\) 0.18 & OOM & 1.77 \(\pm\) 0.11 & OOM \\ & Neo-GNN & 2.05 \(\pm\) 0.61 & 1.61 \(\pm\) 0.36 & 1.21 \(\pm\) 0.14 & 10.83 \(\pm\) 1.53 & 6.75\({}^{*}\) \\ & NBFNet & 1.36 \(\pm\) 0.17 & 0.77 \(\pm\) 0.22 & *** & 11.99 \(\pm\) 1.60 & *** \\ & BScNets & 0.32 \(\pm\) 0.08 & 0.20 \(\pm\) 0.06 & 0.22 \(\pm\) 0.08 & 2.47 \(\pm\) 0.18 & 1.45 \(\pm\) 0.10 \\ & WalkPool & 2.04 \(\pm\) 0.07 & 1.39 \(\pm\) 0.11 & 1.31\({}^{*}\) & OOM & OOM \\ \hline \multirow{4}{*}{Topological} & CN & 1.10 \(\pm\) 0.00 & 0.74 \(\pm\) 0.00 & 0.36 \(\pm\) 0.00 & 7.73 \(\pm\) 0.00 & 5.09 \(\pm\) 0.00 \\ & AA & 2.07 \(\pm\) 0.00 & 1.24 \(\pm\) 0.00 & 0.45 \(\pm\) 0.00 & 9.67 \(\pm\) 0.00 & 6.52 \(\pm\) 0.00 \\ & RA & 2.02 \(\pm\) 0.00 & 1.19 \(\pm\) 0.00 & 0.33 \(\pm\) 0.00 & 10.77 \(\pm\) 0.00 & 7.71 \(\pm\) 0.00 \\ & AC & 2.43 \(\pm\) 0.00 & 2.65 \(\pm\) 0.00 & 2.50 \(\pm\) 0.00 & 16.63 \(\pm\) 0.00 & 11.64 \(\pm\) 0.00 \\ \hline \multirow{5}{*}{Attributes + Topology} & MLP & 0.30 \(\pm\) 0.05 & 0.44 \(\pm\) 0.09 & 0.14 \(\pm\) 0.06 & 1.01 \(\pm\) 0.26 & 0.41 \(\pm\) 0.23 \\ & Cos & 0.42 \(\pm\) 0.00 & 1.89 \(\pm\) 0.00 & 0.07 \(\pm\) 0.00 & 0.11 \(\pm\) 0.00 & 0.07 \(\pm\) 0.00 \\ & MLP+AC & 3.24 \(\pm\) 0.03 & 1.95 \(\pm\) 0.05 & 2.61 \(\pm\) 0.06 & 15.99 \(\pm\) 0.21 & 11.25 \(\pm\) 0.13 \\ & Cos+AC & 3.60 \(\pm\) 0.00 & 4.46 \(\pm\) 0.00 & 0.51 \(\pm\) 0.00 & 10.01 \(\pm\) 0.00 & 5.20 \(\pm\) 0.00 \\ & MLP+Cos+AC & 3.39 \(\pm\) 0.06 & 4.15 \(\pm\) 0.14 & 0.55 \(\pm\) 0.03 & 10.88 \(\pm\) 0.09 & 5.75 \(\pm\) 0.11 \\ \hline & Gelato & **3.90 \(\pm\) 0.03** & **4.55 \(\pm\) 0.02** & **2.88 \(\pm\) 0.09** & **25.68 \(\pm\) 0.53** & **18.77 \(\pm\) 0.19** \\ \hline \hline \end{tabular} \end{table} Table 2. Link prediction performance in terms of test AP (%), reported as mean \(\pm\) standard deviation. \({}^{*}\) Run only once as each run takes \(\sim\)100 hrs; *** each run takes \(>\)1000 hrs; OOM: Out Of Memory.

Figure 5: Illustration of the link prediction process of Gelato, AC, and the best GNN-based approach, Neo-GNN, on a subgraph of Photo. Gelato effectively incorporates node attributes into the graph structure and leverages topological heuristics to enable state-of-the-art link prediction.

### Sensitivity analysis

The selected hyperparameters of Gelato for each dataset are recorded in Table 4, and a sensitivity analysis of \(\eta\) (Figure 7) and of \(\alpha\) and \(\beta\) (Figure 8) is reported for Photo and Cora. For most datasets, we find that simply setting \(\beta=1.0\) and \(\eta=\alpha=0.0\) leads to the best performance, corresponding to the scenario where no edges based on cosine similarity are added and the edge weights are completely learned by the MLP. For Cora and CiteSeer, however, we first notice that adding edges based on untrained cosine similarity alone leads to improved performance (see Table 2), which motivates us to set \(\eta=0.5/0.75\). In addition, we find that a large trainable weight \(\beta\) leads to overfitting, as the number of node attributes is large while the number of (positive) edges is small for Cora and CiteSeer (see Table 1).
We address this by decreasing the relative importance of the trained edge weights (\(\beta=0.25/0.5\)) and increasing that of the topological edge weights (\(\alpha=0.5\)), which leads to better generalization and improved performance. Based on our experiments, these hyperparameters can be easily tuned using simple hyperparameter search techniques, such as line search, on a small validation set.

### Running time comparison

We compare Gelato with other supervised link prediction methods in terms of running time for Photo (see details in Appendix F). As the only method that applies _unbiased training_--with more negative samples--Gelato shows a competitive training speed that is 11\(\times\) faster than the best performing GNN-based method, Neo-GNN. In terms of inference time, Gelato is much faster than most baselines, with a 6,000\(\times\) speedup compared to Neo-GNN. We further observe more significant efficiency gains for Gelato over Neo-GNN on larger datasets--e.g., 14\(\times\) (training) and 25,000\(\times\) (testing) for Computers.

## 5. Related Work

**Topological heuristics for link prediction.** The early link prediction literature focuses on topology-based heuristics. This includes approaches based on local (e.g., Common Neighbors (Srivastava et al., 2014), Adamic-Adar [(1)], and Resource Allocation [(96)]) and higher-order (e.g., Katz [(32)], PageRank [(54)], and SimRank [(31)]) information. More recently, random-walk based graph embedding methods, which learn vector representations for nodes [(22; 27; 57)], have achieved promising results in graph machine learning tasks. Popular embedding approaches, such as DeepWalk [(57)] and node2vec [(22)], have been shown to implicitly approximate the Pointwise Mutual Information similarity [(59)], which can also be used as a link prediction heuristic. This has motivated the investigation of other similarity metrics, such as Autocovariance [(17; 27; 28)]. However, these heuristics are unsupervised and cannot take advantage of data beyond the topology.

Figure 6. Training of Gelato based on different losses and training settings for Photo with test AP (mean \(\pm\) std) shown in the titles. Compared with the cross entropy loss, the N-pair loss with _unbiased training_ is a more consistent proxy for _unbiased testing_ metrics and leads to better peak performance.

Figure 7. Performance of Gelato with different values of \(\eta\).
We propose Gelato, which is simpler, more accurate, and faster than state-of-the-art GNN-based link prediction methods.

**Graph Neural Networks for link prediction.** GNN-based link prediction addresses the limitations of topological heuristics by training a neural network to combine topological and attribute information and potentially learn new heuristics. GAE [(34)] combines a graph convolution network [(35)] and an inner product decoder based on node embeddings for link prediction. SEAL [(89)] models link prediction as a binary subgraph classification problem (edge/non-edge), and follow-up work (e.g., SHHF [(42)], WalkPool [(55)]) investigates different pooling strategies. Other recent approaches for GNN-based link prediction include learning representations in hyperbolic space (e.g., HGCN [(11)], LGCN [(91)]), generalizing topological heuristics (e.g., Neo-GNN [(88)], NBFNet [(97)]), and incorporating additional topological features (e.g., TLC-GNN [(81)], BScNets [(13)]). Motivated by the growing popularity of GNNs for link prediction, this work investigates key questions regarding their training, evaluation, and ability to learn effective topological heuristics directly from data.

**Graph learning.** Gelato learns a graph that combines topological and attribute information. Our goal differs from generative models [(23; 40; 87)], which learn to sample from a distribution over graphs. Graph learning also enables the application of GNNs when the graph is unavailable, noisy, or incomplete (for a recent survey, see [(93)]). LDS [(19)] and GAug [(94)] jointly learn a probability distribution over edges and GNN parameters. IDGL [(14)] and EGLN [(83)] alternate between optimizing the graph and embeddings for node/graph classification and collaborative filtering. [(69)] proposes two-stage link prediction by augmenting the graph as a preprocessing step. In comparison, Gelato effectively learns a graph in an end-to-end manner by minimizing the loss of a topological heuristic.

## 6. Conclusion

This work sheds light on key limitations in how GNN-based link prediction methods handle the intrinsic class imbalance of the problem and presents more effective and efficient ways to combine attributes and topology. Our findings might open new research directions on machine learning for graph data, including a better understanding of the advantages of increasingly popular and sophisticated deep learning models compared to more traditional and simpler graph-based solutions.

###### Acknowledgements.

This work is partially funded by NSF via grant IIS 1817046 and DTRA via grant HDTRA1-19-1-0017.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & Cora & CiteSeer & PubMed & Photo & Computers \\ \hline \(\eta\) & 0.5 & 0.75 & 0.0 & 0.0 & 0.0 \\ \(\alpha\) & 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ \(\beta\) & 0.25 & 0.5 & 1.0 & 1.0 & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 4. Selected hyperparameters of Gelato.

Figure 8. Performance of Gelato with different \(\alpha\) and \(\beta\).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & Cora & CiteSeer & PubMed & Photo & Computers \\ \hline _Gelato–MLP_ & 2.43 \(\pm\) 0.00 & 2.65 \(\pm\) 0.00 & 2.50 \(\pm\) 0.00 & 16.63 \(\pm\) 0.00 & 11.64 \(\pm\) 0.00 \\ _Gelato–AC_ & 1.94 \(\pm\) 0.18 & 3.91 \(\pm\) 0.37 & 0.83 \(\pm\) 0.05 & 7.45 \(\pm\) 0.44 & 4.09 \(\pm\) 0.16 \\ _Gelato–NP+UT_ & 2.98 \(\pm\) 0.20 & 1.96 \(\pm\) 0.11 & 2.35 \(\pm\) 0.24 & 14.87 \(\pm\) 1.41 & 9.77 \(\pm\) 2.67 \\ _Gelato–NP_ & 1.96 \(\pm\) 0.01 & 1.77 \(\pm\) 0.20 & 2.32 \(\pm\) 0.16 & 19.63 \(\pm\) 0.38 & 9.84 \(\pm\) 4.42 \\ _Gelato–UT_ & 3.07 \(\pm\) 0.01 & 1.95 \(\pm\) 0.05 & 2.52 \(\pm\) 0.09 & 23.66 \(\pm\) 1.01 & 11.59 \(\pm\) 0.35 \\ _Gelato_ & **3.90 \(\pm\) 0.03** & **4.55 \(\pm\) 0.02** & **2.88 \(\pm\) 0.09** & **25.68 \(\pm\) 0.53** & **18.77 \(\pm\) 0.19** \\ \hline \hline \end{tabular} \end{table} Table 3. Results of the ablation study based on AP scores. Each component of Gelato plays an important role in enabling state-of-the-art link prediction performance.
2310.19324
TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery
Temporal graphs are widely used to model dynamic systems with time-varying interactions. In real-world scenarios, the underlying mechanisms of generating future interactions in dynamic systems are typically governed by a set of recurring substructures within the graph, known as temporal motifs. Despite the success and prevalence of current temporal graph neural networks (TGNN), it remains uncertain which temporal motifs are recognized as the significant indications that trigger a certain prediction from the model, which is a critical challenge for advancing the explainability and trustworthiness of current TGNNs. To address this challenge, we propose a novel approach, called Temporal Motifs Explainer (TempME), which uncovers the most pivotal temporal motifs guiding the prediction of TGNNs. Derived from the information bottleneck principle, TempME extracts the most interaction-related motifs while minimizing the amount of contained information to preserve the sparsity and succinctness of the explanation. Events in the explanations generated by TempME are verified to be more spatiotemporally correlated than those of existing approaches, providing more understandable insights. Extensive experiments validate the superiority of TempME, with up to 8.21% increase in terms of explanation accuracy across six real-world datasets and up to 22.96% increase in boosting the prediction Average Precision of current TGNNs.
Jialin Chen, Rex Ying
2023-10-30T07:51:41Z
http://arxiv.org/abs/2310.19324v1
# TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery

###### Abstract

Temporal graphs are widely used to model dynamic systems with time-varying interactions. In real-world scenarios, the underlying mechanisms of generating future interactions in dynamic systems are typically governed by a set of recurring substructures within the graph, known as temporal motifs. Despite the success and prevalence of current temporal graph neural networks (TGNN), it remains uncertain which temporal motifs are recognized as the significant indications that trigger a certain prediction from the model, which is a critical challenge for advancing the explainability and trustworthiness of current TGNNs. To address this challenge, we propose a novel approach, called **T**emporal **M**otifs **E**xplainer (TempME), which uncovers the most pivotal temporal motifs guiding the prediction of TGNNs. Derived from the information bottleneck principle, TempME extracts the most interaction-related motifs while minimizing the amount of contained information to preserve the sparsity and succinctness of the explanation. Events in the explanations generated by TempME are verified to be more spatiotemporally correlated than those of existing approaches, providing more understandable insights. Extensive experiments validate the superiority of TempME, with up to \(8.21\%\) increase in terms of explanation accuracy across six real-world datasets and up to \(22.96\%\) increase in boosting the prediction Average Precision of current TGNNs.1

Footnote 1: The code is available at [https://github.com/Graph-and-Geometric-Learning/TempME](https://github.com/Graph-and-Geometric-Learning/TempME)

## 1 Introduction

Temporal Graph Neural Networks (TGNN) are attracting a surge of interest in real-world applications, such as social networks, financial prediction, _etc_. These models exhibit the ability to capture both the topological properties of graphs and the evolving dependencies between interactions over time [1; 2; 3; 4; 5; 6; 7; 8]. Despite their widespread success, these models often lack transparency, functioning as black boxes. The provision of human-intelligible explanations for these models becomes imperative, enabling a better understanding of their decision-making logic and justifying the rationality behind their predictions. Therefore, improving explainability is fundamental in enhancing the trustworthiness of current TGNNs, making them reliable for deployment in real-world scenarios, particularly in high-stakes tasks like fraud detection and healthcare forecasting [9; 10; 11; 12]. The goal of explainability is to discover what patterns in data have been recognized that trigger certain predictions from the model. Explanation approaches for static graph neural networks have been well-studied recently [13; 14; 15; 16; 17; 18; 19]. These methods identify a small subset of important edges or nodes that contribute the most to the model's prediction. However, the success of these methods on static graphs cannot be easily generalized to the field of temporal graphs, due to the complex and volatile nature of dynamic networks [8; 4; 3]. Firstly, there can be duplicate events occurring at the same timestamp and the same position in temporal graphs. The complicated dependencies between interactions were under-emphasized by existing explanation approaches [20; 21]. Moreover, the important events should be temporally proximate and spatially adjacent to construct a human-intelligible explanation [22].
We refer to explanations that satisfy these requirements as _cohesive_ explanations. As illustrated in Figure 1(a), a non-cohesive explanation typically consists of scattered events (highlighted in purple). For instance, event \(1\) and event \(10\) in the disjointed explanation are neither temporally proximate nor spatially adjacent to other explanatory events, leading to a sub-optimal explanation and limiting the insight that explanations can provide. There have been some recent attempts at TGNN explainability [21; 23]. Unfortunately, they all face the critical challenge of generating _cohesive_ explanations and fall short of providing human-intelligible insights. Moreover, they entail high computational costs, making them impractical for real-world deployment. To address the aforementioned challenges of temporal explanations, we propose to utilize temporal motifs in the explanation task. Temporal motifs refer to recurring substructures within the graph. Recent studies [24; 25; 26; 27; 28; 29; 30; 31] demonstrate that these temporal motifs are essential factors that control the generative mechanisms of future events in real-world temporal graphs and dynamic systems. For example, preferential attachment (Figure 1(c)) elucidates the influence effect in e-commerce marketing graphs [32; 33]. Triadic closure (Figure 1(d)) explains the common-friend rules in social networks [34; 6; 1; 25]. Therefore, they are plausible and reliable composition units for explaining TGNN predictions. Moreover, the intrinsic self-connectivity of temporal motifs guarantees the _cohesive_ property of the generated explanations (Figure 1(b)).

**Proposed work.** In the present work, we propose **TempME**, a novel **Tem**poral **M**otif-based **E**xplainer that identifies the most determinant temporal motifs to explain the reasoning logic of temporal GNNs and justify the rationality of their predictions. TempME leverages historical events to train a generative model that captures the underlying distribution of explanatory motifs, thereby improving the explanation efficiency. TempME is theoretically grounded by the Information Bottleneck (IB) principle, which finds the best tradeoff between explanation accuracy and compression. To utilize the Information Bottleneck in the context of temporal graphs, we incorporate a null model (_i.e.,_ a randomized reference) [22; 35; 36] into the framework to better measure the information contained in the generated explanations. Thereby, TempME can tell, for each motif, how the difference between its occurrence frequency in the empirical network and in the randomized reference reflects its importance to the model predictions. Different from previous works that only focus on the effect of individual events [23; 21], TempME is the first to bring additional knowledge about the effect of each temporal motif. We evaluate TempME with three popular TGNN backbones: TGAT [3], TGN [4], and GraphMixer [5]. Extensive experiments demonstrate the superiority and efficiency of TempME in explaining the prediction behavior of these TGNNs and its potential for enhancing their prediction performance, achieving up to \(8.21\%\) increase in terms of explanation accuracy across six real-world datasets and up to \(22.96\%\) increase in boosting the prediction Average Precision of TGNNs.
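To make the null-model idea concrete, the sketch below (our illustration, not the authors' code) builds a simple randomized reference by shuffling event timestamps while keeping the interacting pairs fixed, so a motif's over-representation in the real event sequence relative to the reference can be scored. Both the shuffling scheme and the z-score are common choices from the temporal-motif literature, not necessarily the exact null model used by TempME.

```python
import numpy as np
from collections import Counter
from typing import Callable, List, Tuple

Event = Tuple[int, int, float]  # (u, v, timestamp)


def timestamp_shuffled_null(events: List[Event], rng) -> List[Event]:
    """Randomized reference: keep the (u, v) pairs, permute the timestamps,
    so topology is preserved but temporal ordering is destroyed."""
    ts = rng.permutation([t for _, _, t in events])
    return sorted(((u, v, t) for (u, v, _), t in zip(events, ts)),
                  key=lambda e: e[2])


def motif_z_scores(count_motifs: Callable[[List[Event]], Counter],
                   events: List[Event], rng, n_refs: int = 100) -> dict:
    """Score each motif by how over-represented it is in the real event
    sequence relative to n_refs randomized references."""
    real = count_motifs(events)
    refs = [count_motifs(timestamp_shuffled_null(events, rng))
            for _ in range(n_refs)]
    z = {}
    for motif, c in real.items():
        samples = np.array([r.get(motif, 0) for r in refs], dtype=float)
        z[motif] = (c - samples.mean()) / (samples.std() + 1e-9)
    return z
```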
The **contributions** of this paper are: (1) We are the first to utilize temporal motifs in the field of explanations for TGNNs to provide more insightful explanations; (2) We further consider the null model in the information bottleneck principle for the temporal explanation task; (3) The discovered temporal motifs not only explain the predictions of different TGNNs but can also enhance their link prediction performance.

Figure 1: (a) and (b): Non-cohesive explanation and cohesive explanation (highlighted in colors). (c) and (d): Temporal motifs govern the generation of future interactions (numbers denote event orders).

## 2 Related Work

**GNN Explainability**. Explainability methods for Graph Neural Networks can be broadly classified into two categories: non-generative and generative methods. Given an input instance with its prediction, non-generative methods typically utilize gradients [37; 15], perturbations [38; 39], relevant walks [40], mask optimization [13], surrogate models [41], and Monte Carlo Tree Search (MCTS) [16] to search the explanation subgraph. These methods optimize the explanation one by one during the explanation stage, leading to a longer inference time. On the contrary, generative methods train a generative model across the entire dataset by learning the distribution of the underlying explanatory subgraphs [42; 43; 44; 19; 18; 45; 41], which obtains holistic knowledge of the model behavior over the whole dataset. Compared with static GNNs, the explainability of temporal graph neural networks (TGNNs) remains challenging and under-explored. TGNNExplainer [23] is the first explainer tailored for temporal GNNs, which relies on the MCTS algorithm to search for a combination of the explanatory events. Recent work [21] utilizes the probabilistic graphical model to generate explanations for discrete time series on the graph, leaving the continuous-time setting under-explored. However, these methods cannot guarantee cohesive explanations and require significant computation costs. There are also some works that have considered intrinsic interpretation in temporal graphs [26] and seek self-interpretable models [46; 20]. Unlike previous works on temporal explanation, we aim for cohesive explanations that are human-understandable and insightful, generated in a generative manner for better efficiency during the explanation stage.

**Network Motifs**. The concept of network motifs is defined as recurring and significant patterns of interconnections [35], which are building blocks for complex networks [47; 24]. Kovanen et al. [22] proposed the first notion of temporal network motifs with edge timestamps, followed by relaxed versions to involve more diverse temporal motifs [48; 24; 49]. Early efforts developed efficient motif discovery algorithms, _e.g.,_ MFinder [50], MAVisto [51], Kavosh [36], _etc._ The standard interpretation of the motif counting is presented in terms of a null model, which is a randomized version of the real-world network [35; 52; 22; 53]. Another research line of network motifs focuses on improving network representation learning with local motifs [54; 55; 56]. These approaches emphasize the advantages of incorporating motifs into representation learning, leading to improved performance on downstream tasks. In this work, we constitute the first attempt to involve temporal motifs in the explanation task and aim to uncover the decision-making logic of temporal GNNs.

## 3 Preliminaries and Problem Formulation

**Temporal Graph Neural Network**.
We treat the temporal graph as a sequence of continuous timestamped events, following the setting in TGNNExplainer [23]. Formally, a temporal graph can be represented as a function of timestamp \(t\) by \(\mathcal{G}(t)=\{\mathcal{V}(t),\mathcal{E}(t)\}\), where \(\mathcal{V}(t)\) and \(\mathcal{E}(t)\) denote the set of nodes and events that occur before timestamp \(t\). Each element \(e_{k}\) in \(\mathcal{E}(t)\) is represented as \(e_{k}=(u_{k},v_{k},t_{k},a_{k})\), denoting that node \(u_{k}\) and node \(v_{k}\) have an interaction event at timestamp \(t_{k}<t\) with the event attribute \(a_{k}\). Without loss of generality, we assume that interaction is undirected [5; 1]. Temporal Graph Neural Networks (TGNN) take as input a temporal graph \(\mathcal{G}(t)\) and learn a time-aware embedding for each node in \(\mathcal{V}(t)\). TGNNs' capability for representation learning on temporal graphs is typically evaluated by their link prediction performance [57; 1; 25], _i.e.,_ predicting the future interaction based on historical events. In this work, we also focus on explaining the link prediction behavior of TGNNs, which can be readily extended to node classification tasks.

**Explanation for Temporal Graph Neural Network**. Let \(f\) denote a well-trained TGNN (_aka._ base model). To predict whether an interaction event \(e\) happens between \(u\) and \(v\) at timestamp \(t\), the base model \(f\) leverages the time-aware node representation \(x_{u}(t)\) and \(x_{v}(t)\) to output the logit/probability. An explainer aims at identifying a subset of important historical events from \(\mathcal{E}(t)\) that trigger the future interaction prediction made by the base model \(f\). The subset of important events is known as an explanation. Formally, let \(Y_{f}[e]\) denote the binary prediction of event \(e\) made by base model \(f\); the explanation task can be formulated as the following problem that optimizes the mutual information between the explanation and the original model prediction [13; 23]: \[\operatorname*{argmax}_{|\mathcal{G}^{e}_{\text{exp}}|\leq K}I(Y_{f}[e]; \mathcal{G}^{e}_{\text{exp}})\quad\Leftrightarrow\quad\operatorname*{argmin}_{| \mathcal{G}^{e}_{\text{exp}}|\leq K}-\sum_{c=0,1}\mathbb{1}(Y_{f}[e]=c)\log(f( \mathcal{G}^{e}_{\text{exp}})[e]) \tag{1}\] where \(I(\cdot,\cdot)\) denotes the mutual information function, \(e\) is the interaction event to be explained, and \(\mathcal{G}^{e}_{\text{exp}}\) denotes the explanation constructed by important events from \(\mathcal{E}(t)\) for \(e\). \(f(\mathcal{G}^{e}_{\text{exp}})[e]\) is the probability output on the event \(e\) predicted by the base model \(f\). \(K\) is the explanation budget on the explanation size (_i.e.,_ the number of events in \(\mathcal{G}^{e}_{\text{exp}}\)).
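To make the optimization target concrete, the following is a minimal Python sketch of the cross-entropy term on the right-hand side of Eq. 1 for a binary link prediction; the function and its inputs are illustrative stand-ins, not part of any released TempME code.

```python
import math

def explanation_loss(y_orig: int, prob_exp: float) -> float:
    """Cross-entropy surrogate for Eq. 1.

    y_orig   : binary prediction Y_f[e] of the base model on the full graph.
    prob_exp : probability f(G_exp)[e] that the base model assigns to the
               positive class when it only sees the explanation subgraph.
    """
    p = prob_exp if y_orig == 1 else 1.0 - prob_exp
    return -math.log(max(p, 1e-12))  # clip to avoid log(0)

# An explanation that preserves the original prediction yields a low loss.
print(explanation_loss(y_orig=1, prob_exp=0.95))  # ~0.051
print(explanation_loss(y_orig=1, prob_exp=0.40))  # ~0.916
```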
## 4 Proposed Method: TempME

A simple optimization of Eq. 1 easily results in disjointed explanations [23]. Therefore, we utilize temporal motifs to ensure that the generated explanations are meaningful and understandable. The pipeline of TempME is shown in Figure 2. Given a temporal graph and a future prediction between node \(u\) and node \(v\) to be explained, TempME first samples surrounding temporal motif instances (Sec. 4.1). Then a Motif Encoder creates an expressive Motif Embedding for each extracted motif instance, which consists of three main steps: event anonymization, message passing, and graph pooling (Sec. 4.2). Based on the Information Bottleneck (IB) principle, TempME characterizes the importance scores of these temporal motifs, under the constraints of both explanation accuracy and information compression (Sec. 4.3). In the explanation stage, succinct and cohesive explanations are constructed by sampling from the Bernoulli distribution controlled by the importance score \(p\) for the prediction behavior of the base model.

### 4.1 Temporal Motif Extraction

We first extract a candidate set of motifs whose importance scores are to be derived. Intuitively, event orders encode temporal causality and correlation. Therefore, we constrain events to reverse over the direction of time in each motif and propose the following Retrospective Temporal Motif.

**Definition 1**.: _Given a temporal graph and node \(u_{0}\) at time \(t_{0}\), a sequence of \(l\) events, denoted as \(I=\{(u_{1},v_{1},t_{1}),(u_{2},v_{2},t_{2}),\cdots,(u_{l},v_{l},t_{l})\}\), is an \(n\)-node, \(l\)-length, \(\delta\)-duration **Retrospective Temporal Motif** of node \(u_{0}\) if the events are reversely time ordered within a \(\delta\) duration, i.e., \(t_{0}>t_{1}>t_{2}\cdots>t_{l}\) and \(t_{0}-t_{l}\leq\delta\), such that \(u_{1}=u_{0}\) and the induced subgraph is connected and contains \(n\) nodes._

Temporal dependencies are typically revealed by the relative order of the event occurrences rather than the absolute time difference. Consequently, we have the following definition of equivalence.

**Definition 2**.: _Two temporal motif instances \(I_{1}\) and \(I_{2}\) are **equivalent** if they have the same topology and their events occur in the same order, denoted as \(I_{1}\simeq I_{2}\)._

Temporal motifs are regarded as important building blocks of complex dynamic systems [52, 35, 22, 50]. Due to the large computational complexity in searching high-order temporal motifs, recent works show the great potential of utilizing lower-order temporal motifs, _e.g.,_ two-length motifs [52] and three-node motifs [55, 24, 58], as units to analyze large-scale real-world temporal graphs. A collection of temporal motifs with up to \(3\) nodes and \(3\) events is shown in Appendix B. Given a temporal graph with historical events \(\mathcal{E}\) and node \(u_{0}\) of interest at time \(t_{0}\), we sample \(C\) retrospective temporal motifs with at most \(n\) nodes and \(l\) events, starting from \(u_{0}\) (\(\delta\) is usually set to a large value for the comprehensiveness of motifs). Alg. 1 shows our Temporal Motif Sampling approach, where \(\mathcal{E}(S,t)\) denotes the set of historical events that occur to any node in \(S\) before time \(t\). At each step, we sample one event from the set of historical events related to the current node set. Alg. 1 adapts Mfinder [50], a motif mining algorithm on static graphs, to the scenario of temporal graphs. We could also assign different sampling probabilities to historical events in Step 3 in Alg. 1 to obtain temporally biased samples. Since the purpose of our sampling is to collect a candidate set of expressive temporal motifs for the explanation, we implement uniform sampling in Step 3 for algorithmic efficiency.
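As a concrete illustration, here is a minimal Python sketch of this uniform sampling procedure; the pseudocode of Alg. 1 follows below. The helper `events_before` is a hypothetical stand-in for an indexed lookup of \(\mathcal{E}(S,t)\), not part of the original implementation.

```python
import random

def events_before(all_events, nodes, t):
    """Hypothetical helper: historical events touching `nodes` before time t."""
    return [e for e in all_events if e[2] < t and (e[0] in nodes or e[1] in nodes)]

def sample_motifs(all_events, u0, t0, C, l, n):
    """Sample C retrospective temporal motifs with at most n nodes, l events."""
    motifs = []
    for _ in range(C):
        S, I, t_prev = {u0}, [], t0
        for _ in range(l):
            candidates = events_before(all_events, S, t_prev)
            if not candidates:
                break
            u, v, t = random.choice(candidates)  # uniform sampling (Step 3)
            if len(S) < n:
                S |= {u, v}
            I.append((u, v, t))
            t_prev = t  # enforce reverse time order
        motifs.append(I)
    return motifs

# Events are (u, v, t) triples; sample 3-node, 3-event motifs around node 0.
events = [(0, 1, 5.0), (1, 2, 4.0), (0, 2, 3.0), (2, 3, 2.0)]
print(sample_motifs(events, u0=0, t0=6.0, C=2, l=3, n=3))
```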
**Algorithm 1** Temporal Motif Sampling

```
Node set: \(S_{c}\leftarrow\{u_{0}\}\), for \(1\leq c\leq C\)
Event sequence: \(I_{c}\leftarrow()\), for \(1\leq c\leq C\)
for \(c=1\) to \(C\) do
  for \(j=1\) to \(l\) do
    Sample one event \(e_{j}=(u_{j},v_{j},t_{j})\) from \(\mathcal{E}(S_{c},t_{j-1})\)
    if \(|S_{c}|<n\) then \(S_{c}=S_{c}\cup\{u_{j},v_{j}\}\)
    \(I_{c}=I_{c}\parallel e_{j}\)
return \(\{I_{c}\mid 1\leq c\leq C\}\)
```

Figure 2: Framework of TempME. Numbers on the edge denote the event order.

**Relation to Previously Proposed Concepts**. Recent works [1; 6] propose to utilize temporal walks and recurrent neural networks (RNN) [59] to aggregate sequential information. Conceptually, temporal walks constitute a subset of the temporal motif instances used in this work. In contrast, temporal motifs capture more graph patterns for a holistic view of the governing rules in dynamic systems. For instance, the motif of preferential attachment (Fig. 1(c)) cannot be represented as temporal walks.

### 4.2 Temporal Motif Embedding

In the present work, we focus on explaining the link prediction of temporal graph neural networks. Given an interaction prediction between node \(u\) and node \(v\) to be explained, we sample \(C\) surrounding temporal motif instances starting from \(u\) and \(v\), respectively, denoted as \(M_{u}\) and \(M_{v}\). Note that the proposed framework is also flexible for explaining other graph-related problems. For instance, to explain the node classification on dynamic graphs, we sample \(C\) temporal motif instances around the node of interest. Each temporal motif is represented as \((e_{1},e_{2},\cdots,e_{l})\) with \(e_{i}=(u_{i},v_{i},t_{i})\) satisfying Definition 1. We design a Motif Encoder to learn motif-level representations for each surrounding motif in \(M_{u}\) and \(M_{v}\).

**Event Anonymization**. The anonymization technique is at the core of many sequential feature distillation algorithms [1; 6; 60; 61]. Previous works [6; 1] mainly focus on node anonymization, while temporal motifs are constructed by sequences of temporal events. To bridge this gap, we consider the following event anonymization to adapt to temporal motifs. To maintain the inductiveness, we create structural features that anonymize event identities by counting the appearance at certain positions: \[h(e_{i},u,v)[j]=\left|\{I\mid I\in M_{u}\cup M_{v},I[j]=(u_{i},v_{i},t); \forall t\}\right|,\text{ for }i\in\{1,2,\cdots,l\}. \tag{2}\] \(h(e_{i},u,v)\) (abbreviated as \(h(e_{i})\) for simplicity) is an \(l\)-dimensional structural feature of \(e_{i}\) where the \(j\)-th element denotes the number of interactions between \(u_{i}\) and \(v_{i}\) at the \(j\)-th sampling position in \(M_{u}\cup M_{v}\). \(h(e_{i})\) essentially encodes both spatial and temporal roles of event \(e_{i}\).

**Temporal Motif Encoding**. The extracted temporal motif is essentially a subgraph of the original temporal graph. Instead of using sequential encoders, we utilize local message passing to distill motif-level embeddings. Given a motif instance \(I\) with node set \(\mathcal{V}_{I}\) and event set \(\mathcal{E}_{I}\), let \(X_{p}\) denote the associated feature of node \(p\in\mathcal{V}_{I}\). \(E_{pq}=(a_{pq}\parallel T(t-t_{pq})\parallel h(e_{pq}))\) denotes the event feature of event \(e_{pq}\in\mathcal{E}_{I}\), where \(a_{pq}\) is the associated event attribute and \(h(e_{pq})\) refers to the structural feature of event \(e_{pq}\) (Eq. 2). Note that the impact of motifs varies depending on the time intervals.
For instance, motifs occurring within a single day differ from those occurring within a year. Thus, we need a time encoder \(T(\cdot)\) which maps the time interval into \(2d\)-dimensional vectors via \(T(\Delta t)=\sqrt{1/d}[\cos(w_{1}\Delta t),\sin(w_{1}\Delta t),\cdots,\cos(w_ {d}\Delta t),\sin(w_{d}\Delta t)]\) with learnable parameters \(w_{1},\cdots,w_{d}\)[52; 1]. To derive the motif-level embedding, we initially perform message passing to aggregate neighboring information and then apply the Readout function to pool node features. \[\bar{X_{p}}=\textsc{MessagePassing}(X_{p};\{X_{q};E_{pq}|q\in\mathcal{N}(p)\}) \text{ and }m_{I}=\textsc{Readout}(\{\bar{X_{p}},p\in\mathcal{V}_{I}\}) \tag{3}\] Following Eq. 3, one may use GIN [62] or GAT [63] in the MessagePassing step and simple mean-pooling or learnable adaptive-pooling [64] as the Readout function to further capture powerful motif representations. We refer to Appendix D.4 for more details about the Temporal Motif Encoder.

### 4.3 Information-Bottleneck-based Generator

**Motivation**. A standard analysis for temporal motif distribution is typically associated with the null model, a randomized version of the empirical network [22; 50; 35]. A temporal motif that behaves statistically differently in occurrence frequency from that of the null model is considered to be structurally significant. Therefore, we assume the information of temporal motifs can be disentangled into interaction-related and interaction-irrelevant components. The latter is a natural result of the null model. Based on this assumption, we resort to the information bottleneck technique to extract compressed components that are the most interaction-related. We refer to Appendix C for theoretical proofs.

**Sampling from Distribution**. Given an explanation query and a motif embedding \(m_{I}\) with \(I\in\mathcal{M}\), where \(\mathcal{M}\) denotes the set of extracted temporal motifs, we adopt an MLP to map \(m_{I}\) to an importance score \(p_{I}\in[0,1]\), which measures the significance of this temporal motif instance for the explanation query. We sample a mask \(\alpha_{I}\sim\textsc{Bernoulli}(p_{I})\) for each temporal motif instance and then apply the masks to screen for a subset of important temporal motif instances via \(\mathcal{M}_{\text{exp}}=A\odot\mathcal{M}\), where \(A\) is the mask vector constructed by \(\alpha_{I}\) for each motif \(I\) and \(\odot\) denotes element-wise product. The explanation subgraph for the query can thus be induced by all events that occur in \(\mathcal{M}_{\texttt{exp}}\). To back-propagate the gradients _w.r.t._ the probability \(p_{I}\) during the training stage, we use the Concrete relaxation of the Bernoulli distribution [65] via \(\texttt{Bernoulli}(p)\approx\sigma(\frac{1}{\lambda}(\log p-\log(1-p)+\log u- \log(1-u)))\), where \(u\sim\texttt{Uniform}(0,1)\), \(\lambda\) is a temperature for the Concrete distribution and \(\sigma\) is the sigmoid function. In the inference stage, we randomly sample discrete masks from the Bernoulli distribution without relaxation. Then we induce a temporal subgraph with \(\mathcal{M}_{\texttt{exp}}\) as the explanation. One can also rank all temporal motifs by their importance scores and select the Top \(K\) important motifs to induce more compact explanations if there is a certain explanation budget in practice.
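For quick reference, here is a minimal NumPy sketch of the Concrete relaxation quoted above; variable names are illustrative and not tied to the released implementation.

```python
import numpy as np

def concrete_bernoulli(p, lam=0.5, rng=np.random.default_rng(0)):
    """Relaxed Bernoulli(p): sigmoid((logit(p) + logistic noise) / lambda)."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    u = rng.uniform(size=np.shape(p))
    logits = np.log(p) - np.log(1 - p) + np.log(u) - np.log(1 - u)
    return 1.0 / (1.0 + np.exp(-logits / lam))

p_I = np.array([0.9, 0.5, 0.1])        # importance scores of three motifs
alpha = concrete_bernoulli(p_I)        # soft masks in (0, 1), training stage
hard = np.random.default_rng(1).uniform(size=3) < p_I  # hard masks, inference
```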
**Information Bottleneck Objective**. Let \(\mathcal{G}^{e}_{\texttt{exp}}\) and \(\mathcal{G}(e)\) denote the explanation and the computational graph of event \(e\) (_i.e.,_ historical events that the base model used to predict \(e\)). To distill the most interaction-related while compressed explanation, the IB objective maximizes mutual information with the target prediction while minimizing mutual information with the original temporal graph: \[\min-I(\mathcal{G}^{e}_{\texttt{exp}},Y_{f}[e])+\beta I(\mathcal{G}^{e}_{ \texttt{exp}},\mathcal{G}(e)),\quad\text{s.t.}\ |\mathcal{G}^{e}_{\texttt{exp}}|\leq K \tag{4}\] where \(Y_{f}[e]\) refers to the original prediction of event \(e\), \(\beta\) is the regularization coefficient and \(K\) is a constraint on the explanation size. We then adjust Eq. 4 to incorporate temporal motifs. The first term in Eq. 4 can be estimated with the cross-entropy between the original prediction and the output of the base model \(f\) given \(\mathcal{G}^{e}_{\texttt{exp}}\), as in Eq. 1, where \(\mathcal{G}^{e}_{\texttt{exp}}\) is induced by \(\mathcal{M}_{\texttt{exp}}\). Since temporal motifs are essential building blocks of the surrounding subgraph and we have access to the posterior distribution of \(\mathcal{M}_{\texttt{exp}}\) conditioned on \(\mathcal{M}\) with importance scores, we propose to formulate the second term in Eq. 4 as the mutual information between the original motif set \(\mathcal{M}\) and the selected motif subset \(\mathcal{M}_{\texttt{exp}}\). We utilize a variational approximation \(\mathbb{Q}(\mathcal{M}_{\texttt{exp}})\) to replace its marginal distribution \(\mathbb{P}(\mathcal{M}_{\texttt{exp}})\) and obtain the upper bound of \(I(\mathcal{M},\mathcal{M}_{\texttt{exp}})\) with the Kullback-Leibler divergence: \[I(\mathcal{M},\mathcal{M}_{\texttt{exp}})\leq\mathbb{E}_{\mathcal{M}}D_{ \text{KL}}(\mathbb{P}_{\phi}(\mathcal{M}_{\texttt{exp}}|\mathcal{M});\mathbb{ Q}(\mathcal{M}_{\texttt{exp}})) \tag{5}\] where \(\phi\) involves the learnable parameters in the Motif Encoder (Eq. 3) and the MLP for importance scores.

**Choice of Prior Distribution**. Different choices of \(\mathbb{Q}(\mathcal{M}_{\texttt{exp}})\) in Eq. 5 may lead to different inductive biases. We consider two practical prior distributions for \(\mathbb{Q}(\mathcal{M}_{\texttt{exp}})\): _uniform_ and _empirical_. In the _uniform_ setting [66, 42], \(\mathbb{Q}(\mathcal{M}_{\texttt{exp}})\) is the product of Bernoulli distributions with probability \(p\in[0,1]\), that is, each motif shares the same probability \(p\) of being in the explanation. The KL divergence thus becomes \(D_{\text{KL}}(\mathbb{P}_{\phi}(\mathcal{M}_{\texttt{exp}}|\mathcal{M}); \mathbb{Q}(\mathcal{M}_{\texttt{exp}}))=\sum_{I_{i}\in\mathcal{M}}p_{I_{i}} \log\frac{p_{I_{i}}}{p}+(1-p_{I_{i}})\log\frac{1-p_{I_{i}}}{1-p}\). Here \(p\) is a hyperparameter that controls both the randomness level in the prior distribution and the prior belief about the explanation volume (_i.e.,_ the proportion of motifs that are important for the prediction). However, the _uniform_ distribution ignores the effect of the null model, which is a better indication of randomness in the field of temporal graphs. To tackle this challenge, we further propose to leverage the null model to define an _empirical_ prior distribution for \(\mathbb{Q}(\mathcal{M}_{\texttt{exp}})\). A null model is essentially a randomized version of the empirical network, generated by shuffling or randomizing certain properties while preserving some structural aspects of the original graph.
Following prior works on the null model [67, 22], we utilize the common null model in this work, where the event order is randomly shuffled. The null model shares the same degree spectrum and time-shuffled event orders with the input graph [53] (see more details in Appendix D.1). We categorize the motif instances in \(\mathcal{M}\) by their equivalence relation defined in Definition 2. Let \((U_{1},\cdots,U_{T})\) denote \(T\) equivalence classes of temporal motifs and \((q_{1},\cdots,q_{T})\) is the sequence of normalized class probabilities occurring in \(\mathcal{M}_{\texttt{exp}}\) with \(q_{i}=\sum_{I_{j}\in U_{i}}p_{I_{j}}/\sum_{I_{j}\in\mathcal{M}}p_{I_{j}}\), where \(p_{I_{j}}\) is the importance score of the motif instance \(I_{j}\). Correspondingly, we have \((m_{1},\cdots,m_{T})\) denoting the sequence of normalized class probabilities in the null model. The prior belief about the average probability of a motif being important for prediction is fixed as \(p\). Thus minimizing Eq. 5 is equivalent to the following equation. \[\min_{\phi}D_{\text{KL}}(\mathbb{P}_{\phi}(\mathcal{M}_{\texttt{exp}}| \mathcal{M});\mathbb{Q}(\mathcal{M}_{\texttt{exp}}))\Leftrightarrow\min_{\phi}( 1-s)\log\frac{1-s}{1-p}+s\sum_{i=1}^{T}q_{i}\log\frac{sq_{i}}{pm_{i}}, \tag{6}\] where \(s\) is computed by \(s=\sum_{I_{j}\in\mathcal{M}}p_{I_{j}}/|\mathcal{M}|\), which measures the sparsity of the generated explanation. Combining Eq. 1 and Eq. 6 leads to the following overall optimization objective: \[\min_{\phi}\mathbb{E}_{e\in\mathcal{E}(t)}\sum_{c=0,1}-\mathbb{1}(Y_{f}[e]=c) \log(f(\mathcal{G}^{e}_{\texttt{exp}})[e])+\beta((1-s)\log\frac{1-s}{1-p}+s \sum_{i=1}^{T}q_{i}\log\frac{sq_{i}}{pm_{i}}). \tag{7}\] Eq. 7 aims at optimizing the explanation accuracy with the least amount of information. It learns to identify the most interaction-related temporal motifs and push their importance scores close to 1, leading to deterministic existences of certain motifs in the target explanation. Meanwhile, the interaction-irrelevant components are assigned smaller importance scores to balance the trade-off in Eq. 7. TempME shares the same spirit as perturbation-based explanations [68; 69], where "interpretable components [68]" correspond to temporal motifs and the "reference" is the null model.
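For concreteness, a minimal NumPy sketch of the empirical-prior KL term in Eq. 6, with illustrative inputs; the class-assignment array and null-model frequencies are assumed to be precomputed.

```python
import numpy as np

def empirical_kl(p_inst, cls, m, p_prior=0.3, eps=1e-12):
    """KL term of Eq. 6.

    p_inst  : importance scores p_I for each motif instance in M.
    cls     : equivalence-class index of each instance (values in 0..T-1).
    m       : normalized class probabilities (m_1..m_T) under the null model.
    p_prior : prior belief p about the explanation volume.
    """
    s = p_inst.sum() / len(p_inst)                     # sparsity s
    q = np.bincount(cls, weights=p_inst, minlength=len(m))
    q = q / p_inst.sum()                               # normalized class probs
    kl = (1 - s) * np.log((1 - s) / (1 - p_prior) + eps)
    kl += s * np.sum(q * np.log(s * q / (p_prior * m) + eps))
    return kl

scores = np.array([0.9, 0.8, 0.2, 0.1])   # p_I per motif instance
classes = np.array([0, 1, 1, 2])          # equivalence class of each instance
null_freq = np.array([0.5, 0.3, 0.2])     # class frequencies in the null model
print(empirical_kl(scores, classes, null_freq))
```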
**Complexity**. A brute-force implementation of the sampling algorithm (Alg. 1) has time complexity \(\mathcal{O}(Cl)\). Following Liu et al. [52], we create a \(2l\)-digit string to represent a temporal motif with \(l\) events, where each pair of digits is an event between the node represented by the first digit and the node represented by the second digit. We utilize these \(2l\)-digit strings to classify the temporal motifs by their equivalence relations, thus resulting in a complexity of \(\mathcal{O}(C)\). An acceleration strategy with tree-structured sampling and a detailed complexity analysis are given in Appendix D.3.

## 5 Experiments

### 5.1 Experimental Setups

**Dataset**. We evaluate the effectiveness of TempME on six real-world temporal graph datasets, Wikipedia, Reddit, Enron, UCI, Can.Parl., and US Legis [70; 71; 72], that cover a wide range of domains. Wikipedia and Reddit are bipartite networks with rich interaction attributes. Enron and UCI are social networks without any interaction attributes. Can.Parl. and US Legis are two political networks with a single attribute. Detailed dataset statistics are given in Appendix E.1.

**Base Model**. The proposed TempME can be employed to explain any temporal graph neural network (TGNN) that builds on local message passing. We adopt three state-of-the-art temporal graph neural networks as the base model: TGAT [3], TGN [4], and GraphMixer [5]. TGN and GraphMixer achieve high performance with only one layer due to their powerful expressivity or memory module. TGAT typically contains 2-3 layers to achieve the best performance. Following previous training settings [23; 1; 6], we randomly sample an equal number of negative links and consider event prediction as a binary classification problem. All models are trained in an inductive setting [6; 1].

**Baselines**. Assuming the base model contains \(L\) layers and we aim at explaining the prediction on the event \(e\), we first extract the \(L\)-hop neighboring historical events as the computational graph \(\mathcal{G}(e)\). For baselines, we first compare with two self-interpretable techniques, Attention (ATTN) and Gradient-based Explanation (Grad-CAM [37]). For ATTN, we extract the attention weights in TGAT and TGN and take the average across heads and layers as the importance scores for events. For Grad-CAM, we calculate the gradient of the loss function _w.r.t._ event features and take the norms as the importance scores. The explanation \(\mathcal{G}^{e}_{\text{exp}}\) is generated by ranking events in \(\mathcal{G}(e)\) and selecting a subset of explanatory events with the highest importance scores. We further compare with learning-based approaches, GNNExplainer [13], PGExplainer [14] and TGNNExplainer [23], following the baseline setting in prior work [23]. The former two were proposed to explain static GNNs, while TGNNExplainer is the current state-of-the-art model specifically designed for temporal GNNs.

**Configuration**. Standard fixed splits [73; 72] are applied to each dataset. Following previous studies on network motifs [52; 56; 22; 35], we have empirically found that temporal motifs with at most 3 nodes and 3 events are sufficiently expressive for the explanation task (Fig. 4). We use GINE [74], a modified version of GIN [62] that incorporates edge features in the aggregation function, as the MessagePassing function and mean-pooling as the Readout function by default.

### 5.2 Explanation Performance

**Evaluation Metrics**. To evaluate the explanation performance, we report Fidelity and Sparsity following TGNNExplainer [23]. Let \(\mathcal{G}^{e}_{\text{exp}}\) and \(\mathcal{G}\) denote the explanation for event \(e\) and the original temporal graph, respectively. Fidelity measures how valid and faithful the explanations are to the model's original prediction. If the original prediction is positive, then an explanation leading to an increase in the model's prediction logit is considered to be more faithful and valid, and vice versa. Fidelity is defined as \(\texttt{Fid}(\mathcal{G},\mathcal{G}^{e}_{\text{exp}})=\mathbb{1}(Y_{f}[e]=1) (f(\mathcal{G}^{e}_{\text{exp}})[e]-f(\mathcal{G})[e])+\mathbb{1}(Y_{f}[e]=0) (f(\mathcal{G})[e]-f(\mathcal{G}^{e}_{\text{exp}})[e])\). Sparsity is defined as \(\texttt{Sparsity}=|\mathcal{G}^{e}_{\text{exp}}|/|\mathcal{G}(e)|\), where \(\mathcal{G}(e)\) denotes the computational graph of event \(e\). An ideal explanation should be compact and succinct; therefore, higher Fidelity at lower Sparsity denotes better explanation performance.
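As a small worked illustration of these two metrics, the sketch below uses illustrative inputs; `prob_full` and `prob_exp` stand for \(f(\mathcal{G})[e]\) and \(f(\mathcal{G}^{e}_{\text{exp}})[e]\), supplied by whatever base model is being explained.

```python
def fidelity(y_orig: int, prob_full: float, prob_exp: float) -> float:
    """Fid(G, G_exp): signed probability change in favor of the original label."""
    return (prob_exp - prob_full) if y_orig == 1 else (prob_full - prob_exp)

def sparsity(n_exp_events: int, n_graph_events: int) -> float:
    """|G_exp| / |G(e)|: fraction of the computational graph that is kept."""
    return n_exp_events / n_graph_events

print(fidelity(1, prob_full=0.80, prob_exp=0.86))  # 0.06: explanation helps
print(sparsity(12, 200))                            # 0.06: compact explanation
```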
Besides, we further adopt the ACC-AUC metric, _i.e.,_ the area under the curve of the proportion of generated explanations that preserve the base model's predicted label, computed over sparsity levels from \(0\) to \(0.3\).

**Results.** Table 1 shows the explanation performance of TempME and other baselines _w.r.t._ ACC-AUC. TempME outperforms baselines on different datasets and base models in general. Notably, TempME achieves state-of-the-art performance in explaining TGN with strong ACC-AUC results (\(\geq 90\%\)) over all six datasets. Specifically, the effectiveness of TempME is consistent across datasets with and without attributes, whereas the performance of baseline models exhibits considerable variation. For example, ATTN and Grad-CAM work well on datasets with rich attributes, _e.g.,_ Wikipedia and Reddit, but may yield poor performance on unattributed datasets. Therefore, events with large gradients or attention values are not sufficient to explain the decision-making logic of the base model. Figure 3 shows the Fidelity-Sparsity curves of TempME and the compared baselines on Wikipedia with different base models. From Figure 3, we observe that TempME surpasses the baselines in terms of explanation fidelity, especially at low sparsity levels. In addition, it reveals that the optimal sparsity level varies among different base models. For TGAT, increasing sparsity initially diminishes and later enhances the general fidelity. Conversely, for TGN and GraphMixer, increasing sparsity consistently improves fidelity. These findings indicate that TGAT gives priority to a narrow subset (_e.g.,_\(1\%\)) of historical events, while TGN and GraphMixer rely on a wider range of historical events.

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & **Wikipedia** & **Reddit** & **UCI** & **Enron** & **USLegis** & **Can.Parl.** \\ \hline \multirow{7}{*}{TGAT} & Random & 70.91\(\pm\)1.03 & 81.97\(\pm\)0.92 & 54.51\(\pm\)0.52 & 48.94\(\pm\)1.28 & 54.24\(\pm\)1.34 & 51.66\(\pm\)2.26 \\ & ATTN & 77.31\(\pm\)0.01 & 86.80\(\pm\)0.01 & 27.25\(\pm\)0.01 & 68.28\(\pm\)0.01 & 62.24\(\pm\)0.00 & 79.92\(\pm\)0.01 \\ & Grad-CAM & 83.11\(\pm\)0.01 & 90.29\(\pm\)0.01 & 26.06\(\pm\)0.01 & 19.93\(\pm\)0.01 & 78.98\(\pm\)0.01 & 50.42\(\pm\)0.01 \\ & GNNExplainer & 84.34\(\pm\)0.16 & 89.44\(\pm\)0.56 & 62.38\(\pm\)0.46 & 77.82\(\pm\)0.88 & 94.92\(\pm\)0.50 & 80.59\(\pm\)0.58 \\ & PGExplainer & 84.26\(\pm\)0.78 & 92.31\(\pm\)0.92 & 95.47\(\pm\)1.68 & 62.37\(\pm\)1.82 & 91.42\(\pm\)0.94 & 75.92\(\pm\)1.12 \\ & TGNNExplainer & 85.74\(\pm\)0.56 & 95.73\(\pm\)0.36 & 86.26\(\pm\)2.62 & **82.02\(\pm\)1.94** & 90.37\(\pm\)0.84 & 80.67\(\pm\)1.49 \\ & **TempME** & **85.81\(\pm\)0.53** & **96.69\(\pm\)0.38** & **76.47\(\pm\)0.80** & 81.85\(\pm\)0.26 & **96.10\(\pm\)0.20** & **84.48\(\pm\)0.97** \\ \hline \multirow{7}{*}{TGN} & Random & 91.90\(\pm\)1.42 & 91.42\(\pm\)1.94 & 87.15\(\pm\)2.23 & 82.72\(\pm\)2.24 & 72.31\(\pm\)2.64 & 76.43\(\pm\)1.65 \\ & ATTN & 93.28\(\pm\)0.01 & 93.81\(\pm\)0.01 & 83.24\(\pm\)0.01 & 83.57\(\pm\)0.01 & 75.62\(\pm\)0.01 & 79.38\(\pm\)0.01 \\ & Grad-CAM & 93.46\(\pm\)0.01 & 92.60\(\pm\)0.01 & 87.51\(\pm\)0.01 & 81.12\(\pm\)0.01 & 81.46\(\pm\)0.01 & 77.19\(\pm\)0.01 \\ & GNNExplainer & 95.62\(\pm\)0.53 & 95.05\(\pm\)0.35 & 94.68\(\pm\)0.42 & 88.61\(\pm\)0.50 & 82.91\(\pm\)0.46 & 83.32\(\pm\)0.64 \\ & PGExplainer & 94.28\(\pm\)0.84 & 94.42\(\pm\)0.36 & 92.39\(\pm\)0.85 & 88.34\(\pm\)1.24 & 90.62\(\pm\)0.75 & 88.46\(\pm\)1.42 \\ & TGNNExplainer & 93.51\(\pm\)0.98 & 96.21\(\pm\)0.47 & 94.24\(\pm\)0.52 & 90.32\(\pm\)0.82 & 90.40\(\pm\)0.83 & 84.70\(\pm\)1.19 \\ & **TempME** & **95.80\(\pm\)0.42** & **98.66\(\pm\)0.80** & **96.34\(\pm\)0.30** & **92.64\(\pm\)0.27** & **94.37\(\pm\)0.88** & **90.63\(\pm\)0.72** \\ \hline \multirow{6}{*}{GraphMixer} & Random & 77.31\(\pm\)2.37 & 85.08\(\pm\)0.72 & 53.56\(\pm\)1.27 & 64.07\(\pm\)0.86 & 85.54\(\pm\)0.93 & 87.79\(\pm\)0.51 \\ & Grad-CAM & 76.63\(\pm\)0.01 & 84.44\(\pm\)0.41 & 82.64\(\pm\)0.01 & 72.50\(\pm\)0.01 & 88.98\(\pm\)0.01 & 85.80\(\pm\)0.01 \\ & GNNExplainer & 89.21\(\pm\)0.63 & 95.10\(\pm\)0.36 & 61.02\(\pm\)0.37 & 74.23\(\pm\)0.13 & 89.67\(\pm\)0.35 & 92.28\(\pm\)0.10 \\ & PGExplainer & 85.19\(\pm\)1.24 & 92.46\(\pm\)0.42 & 63.76\(\pm\)1.06 & 75.39\(\pm\)0.43 & 92.37\(\pm\)0.10 & 90.63\(\pm\)0.32 \\ & TGNNExplainer & 86.79\(\pm\)0.86 & **95.82\(\pm\)0.73** & 80.47\(\pm\)0.87 & **81.81\(\pm\)0.45** & 93.04\(\pm\)0.45 & 93.78\(\pm\)0.74 \\ & **TempME** & **90.15\(\pm\)0.30** & 95.05\(\pm\)0.19 & **87.06\(\pm\)0.12** & 79.69\(\pm\)0.33 & **95.00\(\pm\)0.16** & **95.98\(\pm\)0.21** \\ \hline \hline \end{tabular} \end{table}
Table 1: ACC-AUC of TempME and baselines over six datasets and three base models. The AUC values are computed over 16 sparsity levels between 0 and 0.3 at the interval of 0.02. The best result is in **bold** and the second best is underlined.

Figure 3: Fidelity-Sparsity curves on the Wikipedia dataset with different base models.

**Cohesiveness**. To evaluate the cohesive level of the explanations, we propose the following metric: \[\texttt{Cohesiveness}=\frac{1}{|\mathcal{G}_{\texttt{exp}}^{e}|^{2}-| \mathcal{G}_{\texttt{exp}}^{e}|}\sum_{e_{i}\in\mathcal{G}_{\texttt{exp}}^{e} }\sum_{e_{j}\in\mathcal{G}_{\texttt{exp}}^{e};e_{i}\neq e_{j}}\cos(\frac{|t_ {i}-t_{j}|}{\Delta T})\mathbb{1}(e_{i}\sim e_{j}), \tag{8}\] where \(\Delta T\) denotes the time duration in the computational graph \(\mathcal{G}(e)\), and \(\mathbb{1}(e_{i}\sim e_{j})\) indicates whether \(e_{i}\) is spatially adjacent to \(e_{j}\). Meanwhile, temporally proximate event pairs are assigned larger weights of \(\cos(|t_{i}-t_{j}|/\Delta T)\). A higher level of cohesiveness indicates a more cohesive explanation. From Table 2, we observe that ATTN and Grad-CAM excel in generating cohesive explanations compared to learning-based explainers, _e.g.,_ GNNExplainer, TGNNExplainer. However, TempME still surpasses all baselines and achieves the highest cohesiveness levels, primarily due to its ability to extract and utilize self-connected motifs, allowing it to generate explanations that are both coherent and cohesive.
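A minimal sketch of the Cohesiveness score in Eq. 8 follows; events are (u, v, t) triples, and spatial adjacency is taken here as sharing an endpoint, which is one plausible reading of \(e_{i}\sim e_{j}\) rather than the paper's exact definition.

```python
import math

def cohesiveness(events, delta_T):
    """Eq. 8: time-weighted fraction of adjacent event pairs in the explanation."""
    n = len(events)
    total = 0.0
    for i, (ui, vi, ti) in enumerate(events):
        for j, (uj, vj, tj) in enumerate(events):
            if i == j:
                continue
            adjacent = len({ui, vi} & {uj, vj}) > 0   # assumed adjacency rule
            if adjacent:
                total += math.cos(abs(ti - tj) / delta_T)
    return total / (n * n - n)

# Two adjacent, temporally close events score high; the isolated one adds nothing.
exp_events = [(0, 1, 10.0), (1, 2, 9.0), (5, 6, 1.0)]
print(cohesiveness(exp_events, delta_T=10.0))
```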
**Efficiency Evaluation**. We empirically investigate the efficiency of TempME in terms of the inference time for generating one explanation and report the results for Wikipedia and Reddit on TGAT in Table 3, where the averages are calculated across all test events. GNNExplainer and TGNNExplainer optimize explanations individually for each instance, making them less efficient. Notably, TGNNExplainer is particularly time-consuming due to its reliance on the MCTS algorithm. In contrast, TempME trains a generative model using historical events, which allows for generalization to future unseen events. As a result, TempME demonstrates high efficiency and fast inference.

\begin{table} \begin{tabular}{l c c} \hline \hline & **Wikipedia** & **Reddit** \\ \hline Random & 0.02\(\pm\)0.06 & 0.03\(\pm\)0.09 \\ ATTN & 0.02\(\pm\)0.00 & 0.04\(\pm\)0.00 \\ Grad-CAM & 0.03\(\pm\)0.00 & 0.04\(\pm\)0.00 \\ GNNExplainer & 8.24\(\pm\)0.26 & 10.44\(\pm\)0.67 \\ PGExplainer & 0.08\(\pm\)0.01 & 0.08\(\pm\)0.01 \\ TGNNExplainer & 26.87\(\pm\)3.71 & 83.70\(\pm\)16.24 \\ **TempME** & 0.13\(\pm\)0.02 & 0.15\(\pm\)0.02 \\ \hline \hline \end{tabular} \end{table}
Table 3: Inference time (seconds) of one explanation for TGAT.

**Motif-enhanced Link Prediction**. The extracted motifs can not only be used to generate explanations but also boost the performance of TGNNs. Let \(m_{I}\) denote the motif embedding generated by the Temporal Motif Encoder (Eq. 3) and \(\mathcal{M}\) is the temporal motif set around the node of interest. We aggregate all these motif embeddings using \(\sum_{I\in\mathcal{M}}m_{I}/|\mathcal{M}|\) and concatenate it with the node representations before the final MLP layer in the base model. The performance of base models on link prediction with and without Motif Embeddings (ME) is shown in Table 4. Motif Embedding provides augmenting information for link prediction and generally improves the performance of base models. Notably, TGAT achieves a substantial boost, with an Average Precision of \(95.31\%\) on USLegis, surpassing the performance of state-of-the-art models on USLegis [72, 73]. More results are given in Appendix E.3.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **UCI** & **Enron** & **USLegis** & **Can.Parl.** \\ \hline TGAT & 76.28 & 65.68 & 72.35 & 65.18 \\ TGAT+ME & **83.65\({}^{(\uparrow 7.37)}\)** & **68.37\({}^{(\uparrow 2.69)}\)** & **95.31\({}^{(\uparrow 22.96)}\)** & **76.35\({}^{(\uparrow 11.17)}\)** \\ TGN & 75.82 & **76.40** & 77.28 & 64.23 \\ TGN+ME & **77.46\({}^{(\uparrow 1.64)}\)** & 75.62\({}^{(\downarrow 0.78)}\) & **83.90\({}^{(\uparrow 6.62)}\)** & **79.46\({}^{(\uparrow 15.23)}\)** \\ GraphMixer & 89.13 & 69.42 & 66.71 & 76.98 \\ GraphMixer+ME & **90.11\({}^{(\uparrow 0.98)}\)** & **70.13\({}^{(\uparrow 0.71)}\)** & **81.42\({}^{(\uparrow 14.71)}\)** & **79.33\({}^{(\uparrow 2.35)}\)** \\ \hline \hline \end{tabular} \end{table}
Table 4: Link prediction results (Average Precision) of base models with Motif Embedding (ME).

Figure 4: (a) Hyperparameter sensitivity of the number of temporal motifs \(C\) and motif length \(l\). (b) Comparison between uniform and empirical prior distribution in terms of ACC-AUC over sparsity levels from 0 to 0.3.

**Ablation Studies**. We analyze the hyperparameter sensitivity and the effect of prior distributions used in TempME, including the number of temporal motifs \(C\), the number of events in the motifs \(l\), and the prior belief about the explanation volume \(p\). The results are illustrated in Figure 4. Firstly, when using smaller motifs (_e.g.,_ \(l=2\)), TempME achieves comparable explanation accuracy when a sufficient number of motifs are sampled, whereas the accuracy plateaus with fewer sampled temporal motifs when \(l=3\) or \(l=4\). Unfortunately, there are only three equivalence classes for temporal motifs with only two events, limiting the diversity of perspectives in explanations. Following previous analysis on temporal motifs [52, 22, 24], we suggest considering temporal motifs with up to 3 events in the explanation task for the sake of algorithmic efficiency. Secondly, TempME achieves the highest ACC-AUC when the prior belief of the explanation volume is in the range of \([0.3,0.5]\).
Notably, TempME performs better with the _empirical_ prior distribution when \(p\) is relatively small, resulting in sparser and more compact explanations. This improvement can be attributed to the incorporation of the null model, which highlights temporal motifs that differ significantly in frequency from the null model. Figure 4 (b) verifies the rationality and effectiveness of the _empirical_ prior distribution in TempME. Additional insight into the role of the null model in explanation generation can be found in the explanation visualizations in Appendix E.5. It is worth noting that when \(p\) is close to 1, the _uniform_ prior distribution leads to the deterministic existence of all temporal motifs, while the _empirical_ prior distribution pushes the generated explanations towards the null model, which explains the ACC-AUC difference between _empirical_ and _uniform_ as \(p\) approaches 1.

We further conduct ablation studies on the main components of TempME. We report the explanation ACC-AUC on Wikipedia in Table 5. Specifically, we first replace the GINE convolution with GCN and GAT and replace the mean-pooling with adaptive pooling in the Temporal Motif Encoder. Then we iteratively remove event anonymization and time encoding in the creation of event features before they are fed into the Temporal Motif Encoder (Eq. 3). Results in Table 5 demonstrate that all the above variants lead to performance degradation. Moreover, removing the Time Encoding results in a more severe performance drop across the three base models. We further evaluate the effectiveness of the _empirical_ prior distribution by comparing it with the _uniform_ prior distribution. In both prior distributions, the prior belief on the explanation size \(p\) is set to \(0.3\). We report the best results in Table 5. We can observe that the _empirical_ prior distribution gains a performance boost across the three base models, demonstrating the importance of the null model in identifying the most significant motifs.

## 6 Conclusion and Broader Impacts

In this work, we present TempME, a novel explanation framework for temporal graph neural networks. Utilizing the powerful tools of temporal motifs and the information bottleneck principle, TempME is capable of identifying the historical events that contribute the most to the predictions made by TGNNs. The success of TempME bridges the gap in explainability research on temporal GNNs and points out promising directions for future research. For instance, TempME can be deployed to analyze the predictive behavior of different models, to screen effective models that can capture important patterns, and in online services to improve the reliability of temporal predictions. By enabling the generation of explainable predictions and insights, temporal GNNs can enhance decision-making processes in critical domains such as healthcare, finance, and social networks. Improved interpretability can foster trust and accountability, making temporal GNNs more accessible to end-users and policymakers. However, it is crucial to ensure that the explanations provided by the models are fair, unbiased, and transparent. Moreover, ethical considerations, such as privacy preservation, should be addressed to protect individuals' sensitive information during the analysis of temporal graphs.
2301.08568
Physics-guided neural networks for feedforward control with input-to-state stability guarantees
The increasing demand on precision and throughput within high-precision mechatronics industries requires a new generation of feedforward controllers with higher accuracy than existing, physics-based feedforward controllers. As neural networks are universal approximators, they can in principle yield feedforward controllers with a higher accuracy, but suffer from bad extrapolation outside the training data set, which makes them unsafe for implementation in industry. Motivated by this, we develop a novel physics-guided neural network (PGNN) architecture that structurally merges a physics-based layer and a black-box neural layer in a single model. The parameters of the two layers are simultaneously identified, while a novel regularization cost function is used to prevent competition among layers and to preserve consistency of the physics-based parameters. Moreover, in order to ensure stability of PGNN feedforward controllers, we develop sufficient conditions for analyzing or imposing (during training) input-to-state stability of PGNNs, based on novel, less conservative Lipschitz bounds for neural networks. The developed PGNN feedforward control framework is validated on a real-life, high-precision industrial linear motor used in lithography machines, where it reaches a factor 2 improvement with respect to physics-based mass-friction feedforward and it significantly outperforms alternative neural network based feedforward controllers.
Max Bolderman, Hans Butler, Sjirk Koekebakker, Eelco van Horssen, Ramidin Kamidi, Theresa Spaan-Burke, Nard Strijbosch, Mircea Lazar
2023-01-20T13:33:00Z
http://arxiv.org/abs/2301.08568v2
# Physics-guided neural networks for feedforward control with input-to-state stability guarantees

###### Abstract

Currently, there is an increasing interest in merging physics-based methods and artificial intelligence to push performance of feedforward controllers for high-precision mechatronics beyond what is achievable with linear feedforward control. In this paper, we develop a systematic design procedure for feedforward control using physics-guided neural networks (PGNNs) that can handle nonlinear and unknown dynamics. PGNNs effectively merge physics-based and NN-based models, and thereby result in nonlinear feedforward controllers with higher performance and the same reliability as classical, linear feedforward controllers. In particular, conditions are presented to validate (after training) and impose (before training) input-to-state stability (ISS) of PGNN feedforward controllers. The developed PGNN feedforward control framework is validated on a real-life, high-precision industrial linear motor used in lithography machines, where it reaches a factor \(2\) improvement with respect to conventional mass-friction feedforward.

Feedforward control, neural networks, nonlinear system identification, high-precision mechatronics, linear motors.

## I Introduction

The field of high-precision mechatronics requires continuously innovating control methods to facilitate the ever-increasing demands on both throughput as well as accuracy. For example, wafer scanners in lithography machines used for semiconductor manufacturing [1] require sub-nanometer position accuracy at velocities and accelerations exceeding \(1\ \frac{m}{s}\) and \(30\ \frac{m}{s^{2}}\), respectively, see [2, Chapter 9]. On a similar note, the ability to increase the throughput while decreasing the position error can allow for the use of components that are manufactured with larger tolerances, thereby improving the cost effectiveness when no further accuracy and throughput improvements are required. This is an objective in, for example, the manufacturing industry of printing applications using relatively low-cost stepping motors [3].

Feedforward control plays a dominant role in achieving this high position accuracy, while feedback control predominantly concerns closed-loop stability and disturbance rejection [4]. Robustness against varying references is achieved by generating the feedforward input by passing the reference through a model of the inverse system dynamics, see, e.g., [5, 6, 7]. As a result, performance of these so-called inversion-based feedforward controllers is limited by the accuracy of the model of the inverse dynamics [8]. Conventional feedforward control methods employ parametric models of the inverse system derived from physical knowledge, which are often linear [5, 7], but can also be linear-in-the-parameters (LIP) [6, 9] or nonlinear [10, 11]. Advantages of using physics-based models are the interpretable nature of the models with robust performance due to the reliance on physical knowledge, as well as a systematic stability analysis based on linear system theory. Moreover, methods exist for constructing stable inverses of linear nonminimum phase systems, i.e., systems that have an unstable inverse, using approximate solutions or non-causal filtering [12, 13]. Additionally, the training of linear and LIP models, i.e., offline optimization to find the model parameters for which the model fits the data, is typically a convex optimization problem.
On the downside, however, using models derived from physical knowledge yields structural model errors due to the intrinsic simplifications when deriving such models, e.g., when neglecting parasitic effects present in real-life applications such as electromagnetic distortions and complex friction forces [14]. Therefore, more general model structures that can learn nonlinear and unknown parasitic effects are needed for improving performance of feedforward controllers. Black-box neural network (NN) models are a good candidate because of their universal approximation capabilities [15], and have already been used in feedforward control in, e.g., [16, 17]. The efficient backpropagation algorithm [18] enables training using large amounts of data, which is required for high-precision mechatronic systems due to their high sampling rates (\(>\ \mathcal{O}(10^{3})\ Hz\)). Currently, however, the use of NN-based feedforward controllers in practical applications remains limited, see, e.g., [19, 20] for experimental results. This is explained by the fact that using NNs comes with more difficult design challenges compared to linear physics-based models. For example, NNs require tedious hyperparameter tuning [21]; the NN training can get stuck in local minima; and NNs suffer from poor extrapolation outside the training data [22]. Moreover, validating stability of NN-based feedforward controllers is non-trivial [23, 24], and the linear stable inversion tools do not apply to nonlinear NN models.

The above analysis suggests that a systematic method for effectively merging physics-based models and NNs would enable feedforward controllers with higher performance for mechatronic systems with complex nonlinear dynamics due to the presence of parasitic effects. In this paper we develop a generalized feedforward control design methodology based on physics-guided neural networks (PGNNs), which effectively merge physics-based and NN-based models within a common model class. PGNNs, as described in this paper, were originally developed and employed in feedforward control design in [25], and can be regarded as a generalization of the parallel linear-NN model that has been used in nonlinear system identification, see, e.g., [26, Chapter 21]. The main contribution of this paper is the development of a generalized feedforward control design method based on PGNNs, with the following key features:

1. _Regularized training_ of PGNNs based on identified physical model parameters, to avoid competition between the NN and physical layers in the PGNN;
2. _Regularized training_ of PGNNs based on the physical model output to promote physical model compliance, and enhance robustness to non-training data;
3. _Input-to-state stability (ISS) guarantees_ for nonlinear PGNN-based feedforward controllers;
4. _Experimental validation_ on a real-life high-precision industrial linear motor for lithography machines.
**Remark I.1**: _Compared to previous conference papers [25, 27], we present the following original contributions in this journal paper: \((i)\) methods for regularized training, including the NN weights, and methods for optimal selection of the regularization parameter; \((ii)\) graceful degradation of the PGNN feedforward controller when it is operated in conditions that were not present in the training data; \((iii)\) ISS guarantees for PGNN feedforward controllers, which, in combination with stable inversion methods for linear systems, enable the design of a stable nonlinear feedforward controller for nonminimum phase systems; \((iv)\) novel, real-life experimental results._

The remainder of this paper is organized as follows: Section II introduces inversion-based feedforward control, followed by the problem statement in Section III. The methodology for PGNN feedforward controller design with regularized training and graceful degradation is presented in Section IV. Methods for testing and imposing ISS of PGNN feedforward controllers are developed in Section V. Section VI demonstrates effectiveness of the PGNN feedforward controller on a real-life coreless linear motor and a nonminimum phase simulation example. The main conclusions are summarized in Section VII. For streamlining exposition of the results, in this paper all proofs are reported in Appendices.

## II Inversion-based feedforward control

### _Feedback-feedforward control architecture_

Fig. 1 displays a standard feedback-feedforward control scheme, where \(u(t)\) is the control input, and \(y(t)\) the system output, with time \(t\in\mathbb{R}_{\geq 0}\). A mechatronic motion system typically has a dominant linear part, denoted by \(G(s)\) with \(s\) the Laplace variable, and possible higher order modes \(\sum_{i}G_{i}(s)\). Additionally, \(g(\cdot)\) represents nonlinear parasitic effects that typically arise from manufacturing tolerances. The goal is to minimize the tracking error \(e(k):=r(k)-y(k)\), where \(r(k)\) is the reference, and \(y(k)\) is measured at discrete-time indices \(k\in\mathbb{Z}_{\geq 0}\). The control input \(u(t)\) is computed at discrete-time instances \(k\) according to \[u(k)=u_{\text{fb}}(k)+u_{\text{ff}}(k), \tag{1}\] where \(u_{\text{fb}}(k)\) is the feedback, and \(u_{\text{ff}}(k)\) the feedforward input. The zero-order-hold (ZOH) in Fig. 1 is a discrete-to-continuous (D2C) operator, which lets \(u(t)=u(k)\) for \(t\in[kT_{s},(k+1)T_{s})\), with sampling time \(T_{s}\in\mathbb{R}_{>0}\). The feedback input is of the form \[u_{\text{fb}}(k)=C(q)e(k), \tag{2}\] where \(q\) is the forward-shift operator, e.g., \(e(k)=qe(k-1)\), and \(C(q)\) a rational transfer function. The design of the feedforward controller is presented in the following sections.

**Remark II.1**: _In this paper we focus on the tracking problem, i.e., minimizing the output tracking error \(e(k)\) with respect to a desired reference \(r(k)\). Nevertheless, the methods proposed in this work can be extended to reject known disturbances acting on the closed-loop system._

### _Linear feedforward controller design_

As an introductory example to feedforward controller design, we consider a moving mass experiencing viscous friction, i.e., \[m\ddot{y}(t)=u(t)-f_{v}\dot{y}(t),\quad t\in\mathbb{R}_{\geq 0}. \tag{3}\] In (3), \(m\in\mathbb{R}_{>0}\) is the mass, \(f_{v}\in\mathbb{R}_{>0}\) the viscous friction coefficient, and \(u(t)\) and \(y(t)\) are respectively the force input and position output.
Application of the Laplace transform to (3) gives \[Y(s)=G(s)U(s):=\frac{1}{ms^{2}+f_{v}s}U(s). \tag{4}\] The moving mass in (4) is a simple case of the general system displayed in Fig. 1, i.e., without higher order dynamics \(\sum_{i}G_{i}(s)\) and nonlinear parasitic effects \(g(\cdot)\). In order to derive a discrete-time feedforward controller, we discretize the dynamics (4) using ZOH, which gives \[y(k)=G(q)u(k):=\frac{b_{1}q^{-1}+b_{2}q^{-2}}{1+a_{1}q^{-1}+a_{2}q^{-2}}u(k), \tag{5}\] where \(a_{1},a_{2},b_{1},b_{2}\in\mathbb{R}\) are the discrete-time transfer function coefficients. Suppose that the feedforward yields perfect tracking, i.e., \(e(k)=r(k)-y(k)=0\); then (2) gives \(u_{\text{fb}}(k)=0\), such that \(u(k)=u_{\text{ff}}(k)\) from (1). Correspondingly, from (5) we compute the feedforward controller as \[u_{\text{ff}}(k)=G^{-1}(q)r(k)=\frac{1+a_{1}q^{-1}+a_{2}q^{-2}}{b_{1 }q^{-1}+b_{2}q^{-2}}r(k)=\frac{1}{b_{1}}r(k+1)+\frac{a_{1}}{b_{1}}r(k)+\frac{a_{2}}{b_{1}} r(k-1)-\frac{b_{2}}{b_{1}}u_{\text{ff}}(k-1). \tag{6}\]

The linear feedforward controller design directly extends to systems with higher order dynamics. Then, (5) becomes \[y(k)=G(q)u(k):=\frac{q^{-n_{k}}\sum_{i=1}^{n_{b}}b_{i}q^{-i}}{1+ \sum_{i=1}^{n_{a}}a_{i}q^{-i}}u(k), \tag{7}\] with \(n_{a}\in\mathbb{Z}_{\geq 0}\), \(n_{b}\in\mathbb{Z}_{>0}\) the order of the dynamics, and \(n_{k}\in\mathbb{Z}_{\geq 0}\) the number of pure input delays. We denote \(a_{0}=1\) and, following the same reasoning as for (6), obtain the feedforward controller corresponding to (7) as \[u_{\text{ff}}(k)=\sum_{i=0}^{n_{a}}\frac{a_{i}}{b_{1}}r(k+n_{k}+1-i)-\sum_{i= 2}^{n_{b}}\frac{b_{i}}{b_{1}}u_{\text{ff}}(k+1-i). \tag{8}\] In order to implement the feedforward controller (8) we require two assumptions:

1. _Reference preview:_ future reference values up to \(r(k+n_{k}+1)\) are known at time instant \(k\);
2. _Stable inverse dynamics:_ \(G^{-1}(q)\) is stable, i.e., has poles within the unit circle, or, equivalently, \(G(q)\) is minimum phase. This requires that the zeros of \(\sum_{i=1}^{n_{b}}b_{i}q^{n_{b}-i}\) are within the unit circle.

**Remark II.2**: _When \(G(q)\) is nonminimum phase, it is common practice to either approximate \(G(q)\) using a minimum phase transfer function or to use non-causal filtering to obtain a stable exact inverse of \(G(q)\), see, e.g., [12] for an overview._
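To illustrate how (8) is implemented in practice, the following is a minimal Python sketch of the difference equation, assuming the reference preview and a minimum phase \(G(q)\); the coefficient values below are illustrative, not an exact ZOH discretization.

```python
import numpy as np

def linear_feedforward(r, a, b, n_k=0):
    """Discrete-time inverse-model feedforward (8).

    r : reference signal with preview (out-of-range samples treated as zero).
    a : [a_0 = 1, a_1, ..., a_na],  b : [b_1, ..., b_nb] of G(q) in (7).
    """
    n_a, n_b, N = len(a) - 1, len(b), len(r)
    u_ff = np.zeros(N)
    for k in range(N):
        acc = 0.0
        for i in range(n_a + 1):                     # reference preview terms
            idx = k + n_k + 1 - i
            if 0 <= idx < N:
                acc += a[i] * r[idx]
        for i in range(2, n_b + 1):                  # past feedforward terms
            if k + 1 - i >= 0:
                acc -= b[i - 1] * u_ff[k + 1 - i]
        u_ff[k] = acc / b[0]                         # divide by b_1
    return u_ff

# Moving-mass style example (5); coefficient values chosen for illustration.
a = [1.0, -1.9048, 0.9048]            # [a0, a1, a2]
b = [0.0048, 0.0047]                  # [b1, b2], minimum phase since |b2/b1| < 1
r = np.sin(0.01 * np.arange(200))     # smooth position reference
u = linear_feedforward(r, a, b)
```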
### _Nonlinear feedforward controller design_

The linear feedforward control framework can be extended to deal with the nonlinearities \(g(\cdot)\), by starting from a nonlinear correspondent of (7), i.e., a nonlinear autoregressive exogenous (NARX) representation, which is given as \[y(k)=h\big{(}[y(k-1),\ldots,y(k-n_{a}), \tag{9}\] \[u(k-n_{k}-1),\ldots,u(k-n_{k}-n_{b})]^{T}\big{)},\] where \(h:\mathbb{R}^{n_{a}+n_{b}}\rightarrow\mathbb{R}\) is a nonlinear function that describes the complete dynamics. We assume that there exists an inverse representation of (9), such that, with a slight abuse of notation, we can write \[u(k)=h^{-1}\big{(}\phi(k)\big{)}, \tag{10}\] where \(\phi(k):=[y(k+n_{k}+1),\ldots,y(k+n_{k}-n_{a}+1),u(k-1),\ldots,u(k-n_{b}+1)]^{T}\). Following the same reasoning as for the linear feedforward (5), we obtain the ideal nonlinear feedforward controller as \[u_{\text{ff}}(k)=h^{-1}\big{(}\phi_{\text{ff}}(k)\big{)}, \tag{11}\] with \(\phi_{\text{ff}}(k):=[r(k+n_{k}+1),\ldots,r(k+n_{k}-n_{a}+1),u_{\text{ff}}(k-1 ),\ldots,u_{\text{ff}}(k-n_{b}+1)]^{T}\). In general, the function \(h^{-1}\) in (11) is unknown. Moreover, the precision demanded by industry exceeds manufacturing tolerances, which implies that a machine-specific \(h^{-1}\) needs to be found. For these reasons, a systematic data-based approach for finding a model of the inverse system dynamics \(h^{-1}\) would be desirable.

### _Identification for feedforward control_

We discuss the three main aspects of performing a direct identification of the inverse dynamics (10): the data set, the model class, and the identification criterion.

_Data set:_ we consider the availability of a data set that is generated on the system displayed in Fig. 1, i.e., satisfying dynamics (9). As a result, with a slight abuse of notation, we have \[Z^{N}=\{\phi_{0},u_{0},\ldots,\phi_{N-1},u_{N-1}\}, \tag{12}\] where \(\phi_{i}:=\phi(i)\) and \(u_{i}:=u(i)\) for \(i\in\{0,\ldots,N-1\}\) in (10) during the data generating experiment, where \(N\in\mathbb{Z}_{>0}\) is the number of data points.

Fig. 1: Feedback–feedforward control scheme for a system that is described by a nominal linear model \(G(s)\), possible higher order dynamics \(\sum_{i}G_{i}(s)\), and nonlinear parasitic effects \(g(\cdot)\) which can depend on derivatives of \(u\) and \(y\).
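Returning to the data set (12): a minimal Python sketch of assembling the regressors \(\phi_{i}\) and targets \(u_{i}\) from logged input-output sequences, using the regressor definition below (10); the toy signals and zero-based indexing are illustrative assumptions.

```python
import numpy as np

def build_dataset(u, y, n_a, n_b, n_k):
    """Regressor/target pairs (phi_i, u_i) per (10) and (12)."""
    phi, target = [], []
    for k in range(len(u)):
        if k + n_k + 1 >= len(y) or k - n_b + 1 < 0 or k + n_k - n_a + 1 < 0:
            continue  # skip samples without enough past/future data
        y_part = [y[k + n_k + 1 - i] for i in range(n_a + 1)]   # output terms
        u_part = [u[k - i] for i in range(1, n_b)]              # past inputs
        phi.append(np.array(y_part + u_part))
        target.append(u[k])
    return np.stack(phi), np.array(target)

u_log = np.random.randn(500)        # logged control inputs (toy data)
y_log = np.cumsum(0.01 * u_log)     # logged positions (toy data)
Phi, U = build_dataset(u_log, y_log, n_a=2, n_b=2, n_k=0)
```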
As an alternative to physics-based models (17), black-box NNs can be used to parametrize the inverse dynamics (10), i.e., \[\hat{u}\big{(}\theta_{\text{NN}},\phi(k)\big{)} =f_{\text{NN}}\big{(}\theta_{\text{NN}},\phi(k)\big{)}\] \[=W_{L+1}\alpha_{L}\Big{(}\ldots\alpha_{1}\big{(}W_{1}\phi(k)+B_{1 }\big{)}\Big{)}+B_{L+1}, \tag{18}\] where \(\alpha_{l}:\mathbb{R}^{n_{l}}\to\mathbb{R}^{n_{l}}\) denotes the aggregation of activation functions with \(n_{l}\in\mathbb{Z}_{>0}\) the number of neurons in layer \(l\in\{0,\ldots,L\}\), and \(L\in\mathbb{Z}_{>0}\) the number of hidden layers. The parameters \(\theta_{\text{NN}}:=[\text{col}(W_{1})^{T},B_{1}^{T},\ldots,\text{col}(W_{L+1 })^{T},B_{L+1}^{T}]^{T}\) are the concatenation of all weights \(W_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\) and biases \(B_{l}\in\mathbb{R}^{n_{l}}\), where \(\text{col}(W_{l})\) stacks the columns of \(W_{l}\). A NN model (18) can approximate any continuous nonlinear mapping on a bounded set with arbitrary accuracy, provided that it consists of a sufficient number of neurons and/or hidden layers [15]. This concludes the required preliminary knowledge to define the problem considered in this paper. ## III Problem statement To formally characterize the inherent limitation of a physical model (17), we define the corresponding structural model errors. **Definition III.1**: _Given the physical model (17), we define the structural model error as_ \[g\big{(}\phi(k)\big{)}:=h^{-1}\big{(}\phi(k)\big{)}-f_{\text{phy}}\big{(} \theta_{\text{phy}}^{*},\phi(k)\big{)}. \tag{19}\] _Notice that Fig. 1 displays the structural model error \(g\big{(}\phi(k)\big{)}\) for the situation in which the forward dynamics are considered and the physical model is linear, i.e., \(f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)}=\theta_{\text{phy}}^{T} \phi(k)\). The inverse dynamics (10) can be rewritten into_ \[u(k)=f_{\text{phy}}\big{(}\theta_{\text{phy}}^{*},\phi(k)\big{)}+g\big{(}\phi( k)\big{)}. \tag{20}\] In order to derive conditions for which the feedforward controller (11) achieves zero tracking error, we define a set of operating conditions \(\mathcal{R}\) as follows. **Definition III.2**: _Denote \(\Phi_{\text{ff}}\subseteq\mathbb{R}^{n_{a}+n_{b}}\) as all regressor points \(\phi_{\text{ff}}(k)\) supplied to the feedforward controller (16) for all references \(r(k)\) and all \(k\). Then, the set of operating conditions \(\mathcal{R}\) is defined as_ \[\mathcal{R}:=\Phi_{\text{ff}}\cup\{\phi_{0},...,\phi_{N-1}\}. \tag{21}\] _As an example, it is possible to obtain the operating conditions \(\mathcal{R}\) for the moving mass (3) by considering maxima and minima on the position, velocity, and acceleration._ By comparing the inverse dynamics (10) with the feed-forward input (11), we obtain the following condition for perfect tracking \[u_{\text{ff}}(k)=f\big{(}\hat{\theta},\phi_{\text{ff}}(k)\big{)}=h^{-1}\big{(} \phi_{\text{ff}}(k)\big{)},\quad\forall\ \phi_{\text{ff}}(k)\in\mathcal{R}. \tag{22}\] Consequently, due to the structural model errors (20), condition (22) cannot be satisfied when using a physical model (17). On the other hand, when using a NN model (18), it is possible to find a sufficient number of hidden layers and neurons per layer in combination with a set of parameters \(\theta_{\text{NN}}^{*}\) such that (22) is satisfied. 
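A minimal numerical sketch of the forward pass (18), using \(\tanh\) activations and randomly initialized weights purely for illustration:

```python
import numpy as np

def nn_forward(phi, weights, biases, act=np.tanh):
    """NN model (18): L hidden layers with activation, followed by a linear output layer."""
    x = np.asarray(phi, dtype=float)
    for W, B in zip(weights[:-1], biases[:-1]):   # hidden layers l = 1, ..., L
        x = act(W @ x + B)
    return (weights[-1] @ x + biases[-1]).item()  # output layer L + 1, no activation

# Example: one hidden layer with 16 neurons on a regressor with n_a + n_b = 4 entries.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 4)), rng.standard_normal((1, 16))]
biases = [rng.standard_normal(16), rng.standard_normal(1)]
print(nn_forward(np.ones(4), weights, biases))
```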
In practice, however, the tedious hyperparameter tuning and non-convex optimization can return suboptimal identified parameters \(\hat{\theta}_{\text{NN}}\) that do not even reach the performance of the physics-based feedforward. Additionally, dangerous situations can arise when the data set \(Z^{N}\) does not cover \(\mathcal{R}\) sufficiently well, e.g., as a result of limitations on either time or safety, there can be no training data available for all relevant operating conditions. Finally, there is only limited research available for validating stability of NN-based feedforward controllers, see, e.g., [23, 24], and no tools to enforce stability a priori. To solve the above issues of NN-based feedforward controllers, in this paper, we develop a generalized feedforward control design methodology based on PGNNs. To this end, we introduce the formal definition of a PGNN model class. **Definition III.3**: _The PGNN model parameterization of the inverse dynamics is given as_ \[\hat{u}\big{(}\theta,\phi(k)\big{)}=f_{\text{phy}}\big{(}\theta_{\text{phy}}, \phi(k)\big{)}+f_{\text{NN}}\big{(}\theta_{\text{NN}},T\big{(}\phi(k)\big{)} \big{)}, \tag{23}\] where \(\theta=[\theta_{\text{NN}}^{T},\theta_{\text{phy}}^{T}]^{T}\) are the PGNN parameters, and \(T:\mathbb{R}^{n_{a}+n_{b}}\rightarrow\mathbb{R}^{n_{0}}\) is an input transformation, with \(n_{0}\in\mathbb{Z}_{>0}\) the number of NN inputs. **Remark III.1**: _The input transformation \(T\) can be used to improve the numerical properties of the PGNN training, as well as to create physically relevant inputs. As a practical example, consider a moving mass and let \(\phi_{i}=[y_{i+1},y_{i}]^{T}\) with \(y_{i}\) the measured position. Then, choosing \(T(\phi_{i})=\left[\frac{y_{i+1}-y_{i}}{T_{s}},y_{i}\right]^{T}\) with \(T_{s}\in\mathbb{R}_{>0}\) the sampling time, can improve numerical properties of the training when \(T_{s}\) is small, such that \(y_{i+1}\approx y_{i}\). As a second example, let \(y_{i}\) be the measured rotation of a rotary motor. Then, choosing \(T(\phi_{i})=\left[\frac{y_{i+1}-y_{i}}{T_{s}},\text{mod}(y_{i})\right]^{T}\) with \(\text{mod}(y_{i})\) the remainder after division of \(y_{i}\) by \(2\pi\) can improve the training convergence and extrapolation in terms of the rotation \(y_{i}\), since \(T(\cdot)\) imposes the rotational reproducible behaviour._ **Remark III.2**: _The PGNN (23) imposes physical knowledge by embedding a known physical model within a NN. Note that this is different from physics-informed neural networks (PINNs) [28, 29], which impose physical knowledge via a regularization term in the cost function that penalizes deviations of the NN output with respect to the output of the available physical model. The use of PINNs for feedforward control was explored in, e.g., [30]. It is also worth pointing out that, conforming to this definition, the PGNNs developed in [31] are PINNs, i.e., the physical model is not embedded within a NN therein._ ## IV Physics-guided neural networks for feedforward control In this section, we introduce a regularized cost function for training the PGNN (24) to avoid competition between the physical and NN layers. We demonstrate that, under suitable assumptions on the model class, training data, and training convergence, the PGNN recovers the inverse system dynamics.
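A sketch of the PGNN parametrization (23), combining a physical model that is linear in its parameters with an arbitrary NN callable; the rotary input transformation follows the second example in Remark III.1, and all function names are hypothetical:

```python
import numpy as np

def T_rotary(phi, Ts):
    """Input transformation from Remark III.1: finite-difference velocity and wrapped angle."""
    y_next, y_now = phi[0], phi[1]
    return np.array([(y_next - y_now) / Ts, np.mod(y_now, 2.0 * np.pi)])

def pgnn_predict(phi, theta_phy, f_nn, Ts):
    """PGNN (23): physical prediction plus NN correction on transformed inputs."""
    u_phy = float(np.dot(theta_phy, phi))     # f_phy, here linear in theta_phy
    u_nn = f_nn(T_rotary(phi, Ts))            # f_NN(theta_NN, T(phi)), any callable
    return u_phy + u_nn
```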
Moreover, we consider the situation in which these assumptions are violated and introduce methods to ensure that \(i)\) the PGNN improves with respect to using only the physical model in terms of data fit via an optimized parameter initialization, and \(ii)\) we impose a form of graceful degradation for the PGNN when operated at conditions that were not present in the training data. For ease of presentation, in the remainder of this paper we consider \(T\big{(}\phi(k)\big{)}=\phi(k)\), such that the PGNN (23) reduces to \[\hat{u}\big{(}\theta,\phi(k)\big{)}=f_{\text{phy}}\big{(}\theta_{\text{phy}}, \phi(k)\big{)}+f_{\text{NN}}\big{(}\theta_{\text{NN}},\phi(k)\big{)}. \tag{24}\] ### _Regularized PGNN training and inverse system identification_ The PGNN (24) is generally overparameterized, i.e., the NN can identify parts of the physical model which results in a parameter drift during training. Therefore, we employ a regularized cost function \[V(\theta,Z^{N})=V_{\text{MSE}}(\theta,Z^{N})+\lambda V_{\text{reg}}(\theta), \tag{25}\] with \(\lambda>0\), and the regularization cost is \[V_{\text{reg}}(\theta):=\left\|\begin{bmatrix}\Lambda_{\text{NN}}&0\\ 0&\Lambda_{\text{phy}}\end{bmatrix}\left(\theta-\begin{bmatrix}0\\ \theta_{\text{phy}}^{*}\end{bmatrix}\right)\right\|_{2}^{2}. \tag{26}\] In (26), \(\Lambda_{\text{NN}},\Lambda_{\text{phy}}\) are matrices that define the relative importance of the different regularization terms. Note that, \(\Lambda_{\text{NN}}\) relates to the standard \(\mathcal{L}_{2}\) regularization for the network weights and biases [26, Chapter 7], while \(\Lambda_{\text{phy}}\) solves the overparameterization of the PGNN model (24). **Remark IV.1**: _The hyperparameter \(\lambda\) in (25) relates the importance of the data fit with the regularization costs, and can be tuned using, e.g., the L-curve method. The L-curve displays the data fit (\(x\)-axis) and regularization cost (\(y\)-axis) for varying values of \(\lambda\). The optimal choice for \(\lambda\) is the point where the curvature is largest [32]. An example is shown in Fig. 3 with \(\Lambda_{\text{phy}}=\Lambda_{\text{NN}}=I\) using data that is generated on a coreless linear motor, which is discussed in Section VI. The optimal choice is \(\lambda=0.07\). Although the differences in data fit \(V_{\text{MSE}}\) in Fig. 3 seem minor, the minimal data fit is limited by the noise that is present in the data set. Correspondingly, the effect of \(\lambda\) is enlarged when the noise is subtracted._ Fig. 3: L–curve obtained by training the PGNN (24) according to (14), (25) with \(\Lambda_{\text{phy}}=\Lambda_{\text{NN}}=I\) for \(20\) values of \(\lambda\) logarithmically spaced in \([10^{-5},10]\), and optimal choice \(\lambda=0.07\) (red circle). Fig. 2: Schematic overview of the physics–guided neural network. The PGNN (24) recovers the inverse dynamics (10) when the following assumptions are satisfied. **Assumption IV.1**: _There exists a \(\theta_{\text{NN}}^{\ast}\) such that \(f_{\text{NN}}\big{(}\theta_{\text{NN}}^{\ast},\phi(k)\big{)}=g\big{(}\phi(k)\big{)}\) for all \(\phi(k)\in\mathcal{R}\)._ **Assumption IV.2**: _For \(\theta_{\text{NN}}^{A}\neq\theta_{\text{NN}}^{B}\) with \(f_{\text{NN}}\big{(}\theta_{\text{NN}}^{A},\phi(k)\big{)}\neq f_{\text{NN}} \big{(}\theta_{\text{NN}}^{B},\phi(k)\big{)}\) for some \(\phi(k)\in\mathcal{R}\), it holds that_ \[\frac{1}{N}\sum_{i=0}^{N-1}\big{(}f_{\text{NN}}(\theta_{\text{NN}}^{A},\phi_{i })-f_{\text{NN}}(\theta_{\text{NN}}^{B},\phi_{i})\big{)}^{2}>0. 
\tag{27}\] **Assumption IV.3**: _The minimization of (25) over \(\theta\) yields a global optimum._ Assumption IV.1 dictates that there should exist a choice of parameters for which the PGNN (24) recovers the original inverse system (10), i.e., the system should be in the model set. Assumption IV.2 relates to the persistence of excitation. **Proposition IV.1** (PGNN consistency): _Consider the PGNN (24) that is used to identify the inverse dynamics (10) according to identification criterion (14) with cost function (25) using \(\Lambda_{\text{NN}}=0\). Suppose that Assumptions IV.1, IV.2, and IV.3 hold. Then, the identified PGNN parameters satisfy \(\hat{\theta}=[\hat{\theta}_{\text{phy}}^{T},\hat{\theta}_{\text{NN}}^{T}]^{T}= [\theta_{\text{phy}}^{\ast^{T}},\theta_{\text{NN}}^{\ast^{T}}]^{T}\)._ See Appendix I. From Assumption IV.1 and (20), it is observed that the identified PGNN recovers the inverse dynamics for all operating conditions, i.e., \[f_{\text{phy}}\big{(}\hat{\theta}_{\text{phy}},\phi(k)\big{)}+f_{\text{NN}} \big{(}\hat{\theta}_{\text{NN}},\phi(k)\big{)}=h^{-1}\big{(}\phi(k)\big{)},\ \forall\ \phi(k)\in\mathcal{R}, \tag{28}\] such that the zero tracking error condition (22) is satisfied. **Remark IV.2**: _A parallel linear-NN model structure was also employed for feedforward control design in [33]. Therein, an alternative regularization method based on orthogonal projection was developed to avoid the competition between the linear and NN layers under the assumption that \(g(\cdot)\) is identically zero outside \(\mathcal{R}\)._ ### _Optimized PGNN parameter selection_ When Assumption IV.1 or IV.3 is violated, recovering the inverse dynamics is not guaranteed, or the training does not find the optimal parameters. The training, i.e., the optimization over \(\theta\) of (25), typically employs a nonlinear optimization scheme, that attains parameter estimates \(\theta^{(j)}\) where \(j\in\{0,\ldots,J\}\), with \(J\in\mathbb{Z}_{\geq 0}\) the number of solver iterations. The estimated parameter vector is chosen as \[\hat{\theta}=\theta^{(j)},\quad j=\text{arg}\min_{j\in\{0,\ldots,J\}}V\big{(} \theta^{(j)},Z^{N}\big{)}. \tag{29}\] In what follows, we will derive a specific choice for the parameters \(\theta^{(j)}\) which achieve a smaller value of the cost function (25) compared to using only the physical model. For simplicity, we consider a LIP physical model (17), i.e., \(f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)}=\theta_{\text{phy}}^{T} T_{\text{phy}}\big{(}\phi(k)\big{)}\), and rewrite the PGNN (24) according to \[\hat{u}\big{(}\theta,\phi(k)\big{)}=\theta_{\text{L}}^{T}\phi_{\text{L}}\big{(} \theta_{\text{NL}},\phi(k)\big{)}:=\theta_{\text{L}}^{T}\begin{bmatrix}\alpha_ {l}\big{(}\theta_{\text{NL}},\phi(k)\big{)}\\ 1\\ T_{\text{phy}}\big{(}\phi(k)\big{)}\end{bmatrix}, \tag{30}\] where \(\theta_{\text{L}}:=[\text{col}(W_{L+1})^{T},B_{L+1},\theta_{\text{phy}}^{T}]^{T}\) are the parameters in which the PGNN (30) is linear, and \(\theta_{\text{NL}}:=[\text{col}(W_{1})^{T},B_{1}^{T},\ldots,\text{col}(W_{L})^ {T},B_{L}^{T}]^{T}\), such that \(\theta=[\theta_{\text{NL}}^{T},\theta_{\text{L}}^{T}]^{T}\). From (30), we observe that an equivalent of the physical model (17) is obtained by selecting \(\theta_{\text{L}}=\overline{\theta}_{\text{L}}:=[0,0,\theta_{\text{phy}}^{\ast ^{T}}]^{T}\). 
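Since the PGNN (30) is linear in \(\theta_{\text{L}}\), the minimizer of (25) over \(\theta_{\text{L}}\) for fixed \(\theta_{\text{NL}}\) is available in closed form, as derived next in (32)-(33). The following is a minimal sketch of ours for the special case without cross-penalties (\(\Lambda=0\)); the helper names and the matrix \(\text{Lam\_L2}\) (standing in for \(\Lambda_{\text{L}}^{2}\)) are hypothetical:

```python
import numpy as np

def phi_L(theta_NL, phi, hidden, T_phy):
    """Stacked regressor of (30): final hidden-layer output, a bias entry, and T_phy(phi)."""
    return np.concatenate([hidden(theta_NL, phi), [1.0], T_phy(phi)])

def optimal_theta_L(theta_NL, data, hidden, T_phy, lam, Lam_L2, theta_L_bar):
    """Least-squares minimizer of (25) over theta_L for fixed theta_NL (cf. (32)-(33), Lambda = 0)."""
    N = len(data)
    M = lam * Lam_L2                           # regularization part of M in (33)
    v = lam * Lam_L2 @ theta_L_bar             # regularization pull towards theta_L_bar
    for phi_i, u_i in data:
        z = phi_L(theta_NL, phi_i, hidden, T_phy)
        M = M + np.outer(z, z) / N
        v = v + u_i * z / N
    return np.linalg.solve(M, v)               # unique if M is nonsingular (PE condition)
```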
We rewrite the regularization term (26) into \[V_{\text{reg}}(\theta)=\left\|\begin{bmatrix}\Lambda_{\text{NL}}&\Lambda\\ \Lambda^{T}&\Lambda_{\text{L}}\end{bmatrix}\left(\begin{bmatrix}\theta_{\text{ NL}}\\ \theta_{\text{L}}\end{bmatrix}-\begin{bmatrix}0\\ \overline{\theta}_{\text{L}}\end{bmatrix}\right)\right\|_{2}^{2}, \tag{31}\] where \(\Lambda\), \(\Lambda_{\text{NL}}^{T}=\Lambda_{\text{NL}}\), and \(\Lambda_{\text{L}}^{T}=\Lambda_{\text{L}}\) are obtained by selecting rows and columns of \(\Lambda_{\text{NN}}\) and \(\Lambda_{\text{phy}}\) accordingly. Then, given any parameter set \(\theta_{\text{NL}}^{(j)}\), choose \(\theta_{\text{L}}^{(j)}\) according to \[\begin{split}\theta_{\text{L}}^{(j)}=& M(\theta_{\text{NL}}^{(j)})^{-1}\Bigg{(}\frac{1}{N}\sum_{i=0}^{N-1}u_{i}\phi_{ \text{L}}(\theta_{\text{NL}}^{(j)},\phi_{i})\\ &+\lambda\Big{(}\big{(}\Lambda_{\text{L}}^{2}+\Lambda^{T}\Lambda\big{)} \overline{\theta}_{\text{L}}-\big{(}\Lambda^{T}\Lambda_{\text{NL}}+\Lambda_{ \text{L}}\Lambda^{T}\big{)}\theta_{\text{NL}}^{(j)}\Big{)}\Bigg{)},\end{split} \tag{32}\] where \(M(\theta_{\text{NL}}^{(j)})\) is given as \[\begin{split} M(\theta_{\text{NL}}^{(j)}):=& \frac{1}{N}\sum_{i=0}^{N-1}\phi_{\text{L}}(\theta_{\text{NL}}^{(j)},\phi_{i}) \phi_{\text{L}}(\theta_{\text{NL}}^{(j)},\phi_{i})^{T}\\ &+\lambda\big{(}\Lambda_{\text{L}}^{2}+\Lambda^{T}\Lambda\big{)}. \end{split} \tag{33}\] Note that (32) is valid only if \(M(\theta_{\text{NL}}^{(j)})\) is nonsingular, which yields a unique solution. Nonsingularity of \(M(\theta_{\text{NL}}^{(j)})\) is also referred to as the persistence of excitation. **Proposition IV.2** (PGNN improves over physics): _Consider the PGNN (30) with \(\theta_{\text{NL}}^{(j)}\) given, e.g., initialized randomly for \(j=0\) or attained during training for \(j\neq 0\). Suppose that \(M(\theta_{\text{NL}}^{(j)})\) is nonsingular, and choose \(\theta_{\text{L}}^{(j)}\) according to (32). Then, for the cost function (25), we have_ \[V\left(\begin{bmatrix}\theta_{\text{NL}}^{(j)}\\ \theta_{\text{L}}^{(j)}\end{bmatrix},Z^{N}\right)\leq V\left(\begin{bmatrix} \theta_{\text{NL}}^{(j)}\\ \overline{\theta}_{\text{L}}\end{bmatrix},Z^{N}\right), \tag{34}\] _with strict inequality if and only if_ \[\begin{split}\frac{1}{N}\sum_{i=0}^{N-1}\Big{(}u_{i}\phi_{ \text{L}}&\big{(}\theta_{\text{NL}}^{(j)},\phi_{i}\big{)}-\phi_{\text{L}} \big{(}\theta_{\text{NL}}^{(j)},\phi_{i}\big{)}\phi_{\text{L}}\big{(}\theta_{ \text{NL}}^{(j)},\phi_{i}\big{)}^{T}\overline{\theta}_{\text{L}}\Big{)}\\ &-\lambda(\Lambda^{T}\Lambda_{\text{NL}}+\Lambda_{\text{L}}\Lambda^{T} )\theta_{\text{NL}}^{(j)}\neq 0.\end{split} \tag{35}\] _Moreover, if \(\Lambda_{\text{NN}}\) is such that \(\Lambda=0\), i.e., the cross terms between \(\theta_{\text{NL}}\) and \([W_{L+1}^{T},B_{L+1}]^{T}\) are not penalized, the PGNN achieves a better data fit compared to the physical model, i.e.,_ \[V_{\text{MSE}}\left(\begin{bmatrix}\theta_{\text{NL}}^{(j)}\\ \theta_{\text{L}}^{(j)}\end{bmatrix},Z^{N}\right)\leq V_{\text{MSE}}(\theta_{ \text{phy}}^{\ast},Z^{N}), \tag{36}\] _with strict inequality if (35) holds._ See Appendix II. Condition (35) is obtained by rewriting \(\theta_{\text{L}}^{(j)}\neq\overline{\theta}_{\text{L}}\) using (32), (33), and essentially requires a nonzero correlation between the output of the final hidden layer \(\alpha_{L}(\cdot)\) and the structural model errors present in the data \(g(\phi_{i})\). **Remark IV.3**: _Proposition IV.2 can be directly extended for PGNNs with physical models that are not LIP.
To see this, rewrite the physical model_ \[f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)}=\theta_{\text{L-phy}}^{T} T_{\text{phy}}\big{(}\theta_{\text{NL-phy}},\phi(k)\big{)}, \tag{37}\] _and choose \(\theta_{\text{L}}^{(j)}=[\text{col}(W_{L+1}^{(j)})^{T},B_{L+1}^{(j)},{\theta_{ \text{L-phy}}^{(j)}}^{T}]^{T}\) according to (32) using \(\theta_{\text{NL}}^{(j)}=[\text{col}(W_{1}^{(j)})^{T},B_{1}^{(j)}{}^{T},..., \text{col}(W_{L}^{(j)})^{T},B_{L}^{(j)}{}^{T},\theta_{\text{NL-phy}}^{T}]^{T}\)._ In practice, the optimized parameter selection is used after training, i.e., update \(\theta_{\text{L}}^{(j)}\) for \(j=J\), during training for each \(j=\{0,...,J\}\), or as an initialization for \(j=0\). Note that, from (29) and Proposition IV.2, if (35) holds, we have that \[V(\hat{\theta},Z^{N})\leq V(\theta^{(0)},Z^{N})<V(\overline{\theta},Z^{N}), \tag{38}\] when (32) is used for initialization of \(\theta_{\text{L}}^{(0)}\). ### _Enhancing PGNN robustness via regularized training_ In general, there is no systematic method to validate Assumption IV.2 in practice. For this reason, [34] gives as a user guideline to sample the complete domain of interest, i.e., the operating conditions \(\mathcal{R}\). For reasons of time and safety, it can be infeasible to design an experiment that achieves this. Correspondingly, we require a form of graceful degradation for situations in which the PGNN is operated on conditions that were not present in the training data. Since there is no data available for these conditions, the physical model is the only source of information. Therefore, we aim to have the PGNN comply with the physical model via a regularization term in the cost function, such that (25) becomes \[V(\theta,Z^{N})=V_{\text{MSE}}(\theta,Z^{N})+\lambda V_{\text{reg}}(\theta)+ \gamma V_{\text{phy}}(\theta,Z^{E}). \tag{39}\] In (39), \(\gamma\in\mathbb{R}_{>0}\) is a regularization parameter, and the regularization cost is \[V_{\text{phy}}(\theta,Z^{E}):=\frac{1}{E}\sum_{i=0}^{E-1}\big{(}f_{\text{phy} }(\theta_{\text{phy}}^{*},\phi_{i}^{E})-\hat{u}(\theta,\phi_{i}^{E})\big{)}^{2}. \tag{40}\] The set \(Z^{E}=\{\phi_{0}^{E},\ldots,\phi_{E-1}^{E}\}\) contains user-defined regressor points for which we aim to have the PGNN comply with the physical model, e.g., due to the absence of reliable data in \(Z^{N}\). Thereby, the aim is to have \(Z^{N}\cup Z^{E}\) cover the operating conditions \(\mathcal{R}\) up to high accuracy, which is done automatically by Algorithm 1. In Algorithm 1, \(C(\zeta,Z^{N},Z^{E})\) is an objective function that specifies the goal of the optimization. For example, choosing \(\phi_{i}^{E}\) to maximize the minimum squared Euclidean distance with respect to all regressor points, is achieved by choosing \[C(\zeta,Z^{N},Z^{E}):=\min_{\phi\in Z^{N},Z^{E}}\left\|\phi-\zeta\right\|_{2}^ {2}. \tag{41}\] Algorithm 1 iterates until a stopping criterion is met, e.g., by fixing a maximum number of points, or a minimum threshold \(\epsilon\) for the objective function (41). **Remark IV.4**: _The results of Proposition IV.2 can be extended directly to the cost function (39) by appropriately revising the computation of \(\theta_{\text{L}}^{(j)}\) in (32), i.e., compute the least squares solution of (39) instead of (25)._ Consider a mechanical system as in (3), where the operating conditions \(\mathcal{R}\) are represented by a maximum position of \(0.15\)\(m\) and velocity of \(0.2\)\(\frac{m}{s}\), and neglect the acceleration for the sake of simplicity. 
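For this example setting, Algorithm 1 (listed below) can be sketched as follows, using the squared-distance objective (41) and maximizing over a finite grid that approximates \(\mathcal{R}\); the grid discretization is our assumption for illustration, while the position and velocity bounds follow the example above:

```python
import numpy as np

def design_ZE(R_grid, ZN, E_max, eps):
    """Algorithm 1: greedily add the candidate in R that maximizes (41), i.e., the point
    farthest (in squared Euclidean distance) from all regressor points collected so far."""
    ZE, pts = [], list(ZN)
    for _ in range(E_max):
        dists = [min(float(np.sum((p - z) ** 2)) for z in pts) for p in R_grid]
        i_best = int(np.argmax(dists))
        if dists[i_best] <= eps:              # stopping criterion on the objective
            break
        ZE.append(R_grid[i_best])
        pts.append(R_grid[i_best])
    return ZE

pos = np.linspace(0.0, 0.15, 16)              # position range of R
vel = np.linspace(-0.2, 0.2, 16)              # velocity range of R
R_grid = [np.array([p, v]) for p in pos for v in vel]
ZN = [np.array([p, v]) for p in np.linspace(0.0, 0.1, 8)
      for v in np.linspace(-0.2, 0.2, 8)]     # data covers only part of the stroke
ZE = design_ZE(R_grid, ZN, E_max=20, eps=1e-4)
```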
Since it was deemed unsafe to travel the full stroke during the data generating experiment, the training data \(Z^{N}\) describes \(\mathcal{R}\) only in a limited range of the position, see the blue crosses in Fig. 4. Consequently, application of Algorithm 1 generates the set \(Z^{E}\), see the orange crosses in Fig. 4, such that \(Z^{N}\cup Z^{E}\) cover the complete operating conditions \(\mathcal{R}\). Fig. 4: Two–dimensional illustration of the regressor points in the CLM data set \(Z^{N}\) discussed in Section VI, and the regressor points \(Z^{E}\) designed following Algorithm 1. ```
Initialize \(Z^{E}=\{\}\), \(i=0\).
while \(i<E-1\) \(\wedge\) \(C\big{(}\phi_{i-1}^{E},Z^{N},Z^{E}\big{)}>\epsilon\) do
    \(\phi_{i}^{E}=\text{arg}\max_{\zeta\in\mathcal{R}}C\big{(}\zeta,Z^{N},Z^{E}\big{)}\),
    \(Z^{E}=Z^{E}\cup\phi_{i}^{E}\),
    \(i=i+1\).
end while
``` **Algorithm 1** Design algorithm for \(Z^{E}\). ## V ISS of PGNN-based feedforward controllers Stability of a feedforward controller (16) is a prerequisite for safe operation of the closed-loop system, i.e., for bounded reference values \(r(k)\) the feedforward input \(u_{\text{ff}}(k)\) should remain bounded. For linear feedforward controllers, stability is tested by ensuring that the poles of the feedforward controller transfer function are within the unit circle. However, the nonlinear NN in the PGNN feedforward (16), (24) complicates the stability assessment. For the sake of presentation, we consider a PGNN (24) with linear physical model, i.e., \(f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)}=\theta_{\text{phy}}^{T}\phi(k)\), and highlight some further generalizations. Next, to facilitate the stability analysis of PGNN-based feedforward controllers, we define \[\hat{a}:=[\hat{a}_{1},\ldots,\hat{a}_{n_{a}+1}]^{T},\quad\hat{b}:=[\hat{b}_{1}, \ldots,\hat{b}_{n_{b}-1}]^{T}, \tag{42}\] such that \(\hat{\theta}_{\text{phy}}=[\hat{a}^{T},\hat{b}^{T}]^{T}\). Moreover, we define \[\phi_{r}(k) :=[r(k+n_{k}+1),\ldots,r(k+n_{k}-n_{a}+1)]^{T},\] \[\phi_{u_{\text{ff}}}(k) :=[u_{\text{ff}}(k-1),\ldots,u_{\text{ff}}(k-n_{b}+1)]^{T}, \tag{43}\] such that \(\phi_{\text{ff}}(k)=[\phi_{r}(k)^{T},\phi_{u_{\text{ff}}}(k)^{T}]^{T}\). Note that, for a linear feedforward controller (8), stability is determined only by \(\hat{b}\). Then, by choosing \(x(k):=\phi_{u_{\text{ff}}}(k)\), we obtain a state-space representation of the PGNN (24) as \[x(k+1)= A(\hat{b})x(k) \tag{44}\] \[+B\left(\hat{a}^{T}\phi_{r}(k)+f_{\text{NN}}\left(\hat{\theta}_{ \text{NN}},\begin{bmatrix}\phi_{r}(k)\\ x(k)\end{bmatrix}\right)\right),\] where \(B=[1,O]^{T}\in\mathbb{R}^{n_{b}-1}\), \(A(\hat{b})=\begin{bmatrix}\hat{b}^{T}\\ I\quad O\end{bmatrix}\in\mathbb{R}^{(n_{b}-1)\times(n_{b}-1)}\), and \(\phi_{r}(k)\) is the external input. Because the state-space PGNN feedforward (44) has \(\phi_{r}(k)\) as an exogenous input, it makes sense to pursue ISS guarantees [35], which is a stronger property than nominal stability and guarantees boundedness of the state \(x(k)\) for any bounded reference \(\phi_{r}(k)\). First, we provide a general ISS result for the PGNN state-space representation (44). Afterwards, we show how ISS of the resulting PGNN feedforward controllers can be imposed a priori during the PGNN training by adjusting the identification criterion. We recall the following notions: a function \(\kappa:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is a \(\mathcal{K}\)-function if it is continuous, strictly increasing and \(\kappa(0)=0\).
It is a \(\mathcal{K}_{\infty}\)-function if it is a \(\mathcal{K}\)-function and \(\kappa(s)\rightarrow\infty\) as \(s\rightarrow\infty\). A function \(\eta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is a \(\mathcal{KL}\)-function if, for each fixed \(k\geq 0\), the function \(\eta(\cdot,k)\) is a \(\mathcal{K}\)-function and for each fixed \(s\geq 0\), the function \(\eta(s,\cdot)\) is decreasing and \(\eta(s,k)\to 0\) as \(k\rightarrow\infty\). **Definition V.1** (ISS [35]): _A system \(x(k+1)=f\big{(}x(k),\phi_{r}(k+1)\big{)}\) is ISS if there exist a \(\mathcal{KL}\)-function \(\eta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{ \geq 0}\) and a \(\mathcal{K}\)-function \(\kappa\) such that, for each finite input sequence \(\phi_{r}\) and each initial state \(x(0)=\xi\in\mathbb{R}^{n_{b}-1}\), it holds that_ \[|x(k)|\leq\eta\big{(}|\xi|,k\big{)}+\kappa\big{(}\|\phi_{r}\|\big{)} \tag{45}\] _for each \(k\in\mathbb{Z}_{+}\)._ **Remark V.1**: _If the state-space PGNN representation (44) is ISS with respect to the external input \(\phi_{r}(k+1)\), we have that:_ 1. \(x(k)\) _remains bounded for bounded_ \(\phi_{r}(k)\) _and, consequently,_ \(u_{\text{ff}}(k)\) _remains bounded;_ 2. \(x(k)\to 0\) _for_ \(\phi_{r}(k)\to 0\) _and, consequently,_ \(u_{\text{ff}}(k)\to 0\)_._ In order to analyze ISS of the PGNN feedforward controller (44), we will use a continuous ISS-Lyapunov function, which implies ISS as shown in [35]. **Definition V.2** (ISS-Lyapunov [35]): _A continuous function \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\) is called an ISS-Lyapunov function for a discrete-time state-space system \(x(k+1)=f\big{(}x(k),\phi_{r}(k)\big{)}\) if the following conditions hold:_ 1. _There exist_ \(\mathcal{K}_{\infty}\)_-functions_ \(\kappa_{1}\)_,_ \(\kappa_{2}\) _such that_ \[\kappa_{1}\big{(}|x(k)|\big{)}\leq V\big{(}x(k)\big{)}\leq\kappa_{2}\big{(}|x( k)|\big{)},\forall\quad x(k)\in\mathbb{R}^{n_{b}-1}.\] (46) 2. _There exists a_ \(\mathcal{K}_{\infty}\)_-function_ \(\kappa_{3}\) _and a_ \(\mathcal{K}\)_-function_ \(\sigma\)_, such that_ \[V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)}\leq-\kappa_{3}(|x(k)|)+\sigma(|\phi_ {r}(k+1)|),\] (47) \[\forall\;x(k)\in\mathbb{R}^{n_{b}-1},\;\forall\;\phi_{r}(k)\in \mathbb{R}^{n_{a}+1}.\] **Remark V.2**: _For all commonly applied activation functions, the NN (18) is globally Lipschitz, i.e., there exists an \(L:=[L_{r}^{T},L_{u_{\text{ff}}}^{T}]^{T}\) such that_ \[\left|f_{\text{NN}}\big{(}\hat{\theta}_{\text{NN}},\phi_{\text{ ff}}^{A}(k)\big{)}-f_{\text{NN}}\big{(}\hat{\theta}_{\text{NN}},\phi_{\text{ff}}^{B}(k) \big{)}\right| \tag{48}\] \[\leq L^{T}\left|\phi_{\text{ff}}^{A}(k)-\phi_{\text{ff}}^{B}(k) \right|.\] _Using backpropagation, it is possible to find values for \(L\), e.g.,_ \[L^{T} =\max_{\phi_{\text{ff}}(k)}\frac{\partial f_{\text{NN}}\big{(} \hat{\theta}_{\text{NN}},\phi_{\text{ff}}(k)\big{)}}{\partial\phi_{\text{ff}}(k)} \tag{49}\] \[=\max_{\phi_{\text{ff}}(k)}\hat{W}_{L+1}\text{diag}(\alpha_{L}^{ \prime})\ldots\hat{W}_{2}\text{diag}(\alpha_{1}^{\prime})\hat{W}_{1}\leq\Pi_{i=1} ^{L+1}|\hat{W}_{i}|,\] _where \(\alpha_{l}^{\prime}=\frac{\partial\alpha_{l}(x)}{\partial x}\big{|}_{x=x_{l}}\), with \(x_{l}\) the input to NN layer \(l\).
The upperbound in (49) is obtained by substitution of \(\text{diag}(\alpha_{l}^{\prime})=I\), which holds for \(\tanh\), ReLU, and several other activation functions._ **Assumption V.1**: _The origin \(x(k)=0\) is an equilibrium for the unexcited PGNN (44), i.e., \(\phi_{r}(k)=0\), such that (44) gives_ \[f_{\text{NN}}\big{(}\hat{\theta}_{\text{NN}},0\big{)}=0. \tag{50}\] _Note that, we can introduce a coordinate transformation, e.g., \(\zeta(k)=x(k)+\varepsilon\) to satisfy Assumption V.1._ **Assumption V.2**: _There exists a \(P\succ 0\) such that \(Q:=P-A(\hat{b})^{T}PA(\hat{b})\succ 0\)._ Assumption V.2 requires that the parameters \(\hat{b}\) are such that the physical model is stable, i.e., \(A(\hat{b})\) is a Schur matrix. A \((P,Q)\) pair satisfying Assumption V.2 is typically obtained by choosing \(Q\succ 0\) and solving the discrete-time Lyapunov equation to obtain \(P\succ 0\). We denote \(\lambda_{\text{min}}(Q)\) and \(\lambda_{\text{max}}(Q)\) as the smallest and largest eigenvalue of \(Q\), respectively. **Theorem V.1** (PGNN feedforward ISS): _Consider the PGNN feedforward (16), (24) with linear physical model, and its state-space representation (44). Suppose that Assumptions V.1 and V.2 hold. Let \((P,Q)\) satisfy Assumption V.2, and define_ \[c_{\beta}:=B^{T}P\left(I+\frac{1}{\beta\lambda_{\text{min}}(Q)}AA^{T}P\right)B. \tag{51}\] _Then, if there exists a \(\beta>0\) such that_ \[L_{u_{\text{ff}}}^{T}L_{u_{\text{ff}}}<\frac{(1-\beta)\lambda_{\text{min}}(Q)}{c_ {\beta}}, \tag{52}\] the PGNN feedforward state-space representation (44) is ISS. See Appendix III. **Remark V.3**: _Most NN stability research is conducted for systems operating in closed-loop with a NN-based feedback controller, see, e.g., [24, 36] which provide local stability conditions via linearization and integral quadratic constraints, respectively. A global ISS result for a black box NN with no physical model embedded is presented in [23]._ **Remark V.4**: _The value of \(\beta\) for which the right-hand side in (52) is maximal, is_ \[\begin{split}\beta=&\frac{1}{\lambda_{\text{ min}}(Q)B^{T}PB}\Big{(}-B^{T} PAA^{T}PB+\\ &\sqrt{B^{T}P\big{(}\lambda_{\text{min}}(Q)I+AA^{T}P\big{)}BB^{T} PAA^{T}PB}\Big{)}.\end{split} \tag{53}\] Note that, since \(B\) is a column, the square root and division are scalar operations. Eq. (53) is obtained by setting the derivative w.r.t. \(\beta\) of the right hand side of (52) equal to zero, and choosing the option for which \(\beta>0\). **Remark V.5**: _A similar result can be obtained for a PGNN with a general nonlinear physical model, when a quadratic Lyapunov function is available for the physical model._ The ISS condition (52) in Theorem V.1 can be validated _after_ training by using \(\beta\) in (53), the upperbound \(L_{u_{\text{ff}}}=\begin{bmatrix}O,&I\end{bmatrix}L\) in (49) for some \((P,Q)\) pair. Recall that \(\theta=[\theta_{\text{NN}}^{T},\theta_{\text{phy}}^{T}]^{T}\), \(\theta_{\text{phy}}=[a^{T},b^{T}]^{T}\) for the linear physical model, and \(\theta_{\text{phy}}^{*}\) are the parameters \(\theta_{\text{phy}}\) identified using only the physical model (17) with MSE cost function (15). With the aim to ensure _before_ training that the PGNN is ISS, we fix \(\hat{b}=b^{*}\), and constrain the network weights to satisfy the ISS condition (52). **Lemma V.1** (Training imposed ISS - stable physical model): _Consider the PGNN feedforward controller with linear physical model, such that it admits a state-space representation of the form (44). 
Suppose that Assumption V.1 holds, that a \((P,Q)\) pair satisfying Assumption V.2 is available, and choose \(\beta\) as in (53). Define the set_ \[\begin{split}\Theta&:=\Bigg{\{}\theta\in\mathbb{R}^{n _{\theta}}\;\bigg{|}\;\big{(}b=b^{*}\big{)}\;\wedge\\ &\qquad\left(\left\|\;\big{(}\Pi_{l=1}^{L+1}|W_{l}|\big{)}\;\begin{bmatrix} 0\\ I\end{bmatrix}\;\right\|_{2}^{2}<\frac{(1-\beta)\lambda_{\text{min}}(Q)}{c_{ \beta}}\right)\Bigg{\}},\end{split} \tag{54}\] _and train the PGNN according to identification criterion_ \[\hat{\theta}=\text{arg}\min_{\theta\in\Theta}\;V\big{(}\theta,Z^{N}\big{)}. \tag{55}\] _Then, the training returns an ISS PGNN feedforward controller (16), (24)._ See Appendix IV. Assumption V.2 is violated when \(A(\hat{b})\) is not Schur, i.e., the system is nonminimum phase. There are two main approaches to obtain a stable feedforward controller for linear systems, see, e.g., [12]: 1. Non-causal feedforward controller design; 2. Stable approximate inversion techniques, such as ZPETC, ZMETC, NPZ-ignore. In the remainder of this section, we demonstrate approaches to assess and impose ISS of PGNN feedforward controllers for nonminimum phase systems. The nonminimum phase behaviour can be dealt with by extending the preview window of the PGNN, i.e., we adjust the PGNN (24) to \[\begin{split}\hat{u}\big{(}\theta,\tilde{\phi}(k)\big{)}=[a^{T},b^{T}]\begin{bmatrix}\tilde{\phi}_{y}(k)\\ \tilde{\phi}_{u}(k)\end{bmatrix}+f_{\text{NN}}\left(\theta_{\text{NN}}, \begin{bmatrix}\tilde{\phi}_{y}(k)\\ \tilde{\phi}_{u}(k)\end{bmatrix}\right),\\ \tilde{\phi}_{y}(k):=[r(k+n_{k}+n_{\text{pw}}+1),...,r(k+n_{k}-n_{a}+1)]^{T},\\ \tilde{\phi}_{u}(k):=[u(k-1),...,u(k-n_{b}+n_{\text{us}}+1)]^{T}, \end{split} \tag{56}\] where \(n_{\text{us}}\in\mathbb{Z}_{>0}\) is the number of unstable eigenvalues of \(A(\hat{b})\) of the original PGNN (24), and \(n_{\text{pw}}\in\mathbb{Z}_{>0}\) is the number of extended preview samples. The non-causal PGNN (56) is then trained following Lemma V.1. **Remark V.6**: _In order to satisfy Assumption V.2, the parameters \(b^{*}\) should be such that \(A(b^{*})\) is Schur. This is achieved either by choosing \(n_{\text{pw}}\) large enough, such that the identification of \(\theta_{\text{phy}}^{*}=[a^{*T},b^{*T}]^{T}\) returns stable parameters \(b^{*}\), or by fixing \(b^{*}\) to retain the stable eigenvalues of the original \(A(b^{*})\)._ As an alternative approach, it is possible to employ linear stable approximate inversion tools on the linear part of the dynamics. In order to see this, we rewrite the state-space PGNN (44) into \[u_{\text{ff}}(k)=G(\hat{b},q^{-1})\left(\hat{a}^{T}\phi_{r}(k)+f_{\text{NN}} \left(\hat{\theta},\begin{bmatrix}\phi_{r}(k)\\ \phi_{u_{\text{ff}}}(k)\end{bmatrix}\right)\right), \tag{57}\] where \(G(\hat{b},q^{-1}):=\frac{1}{1-\sum_{i=1}^{n_{b}}b_{i}q^{-i}}\). Let \(\tilde{G}(\tilde{b},q^{-1}):=\frac{\sum_{i=0}^{\tilde{n}_{b}^{n}}\tilde{b}_{i}^{n}q^{-i}}{1- \sum_{i=1}^{\tilde{n}_{b}^{d}}\tilde{b}_{i}^{d}q^{-i}}\) be a stable approximate of \(G(\hat{b},q^{-1})\) obtained via a user-preferred method, which gives \(\tilde{n}_{b}^{n},\tilde{n}_{b}^{d}\in\mathbb{Z}_{\geq 0}\) and \(\tilde{b}_{i}^{n},\tilde{b}_{i}^{d}\in\mathbb{R}\). Then, a PGNN with stable physical model is obtained from (57) as \[u_{\text{ff}}(k)=\tilde{G}(\tilde{b},q^{-1})\left(\hat{a}^{T}\phi_{r}(k)+f_{ \text{NN}}\left(\hat{\theta},\begin{bmatrix}\phi_{r}(k)\\ \phi_{u_{\text{ff}}}(k)\end{bmatrix}\right)\right).
\tag{58}\] If the stable approximation method creates terms with \(q^{d}\), \(d\in\mathbb{Z}_{>0}\) in the numerator of \(\tilde{G}(\tilde{b},q^{-1})\) in (58), \(\phi_{u_{\text{ff}}}(k)\) in the NN can only contain feedforward inputs up until \(u_{\text{ff}}(k-d-1)\) in order to compute the feedforward according to (58). ISS of the stable approximated PGNN (58) is imposed by training the unstable PGNN (57) according to a revised version of Lemma V.1. To do so, we rewrite the stably inverted PGNN (58) into state-space form using \(x(k):=[u_{\text{ff}}(k),...,u_{\text{ff}}(k-n_{b}-\tilde{n}_{b}^{n}+1)]^{T}\), and use the triangular inequality with Lipschitz bound (48) to construct \(L_{r+}\) and \(L_{x}\) such that \[\left|\sum_{i=0}^{\tilde{n}_{b}^{n}}\tilde{b}_{i}^{n}f_{\text{NN}}\big{(}\hat{ \theta}_{\text{NN}},\phi_{\text{ff}}(k-i)\big{)}\right|\leq[L_{r+}^{T},L_{x}^{T}] \begin{bmatrix}\phi_{r+}(k)\\ x(k)\end{bmatrix}, \tag{59}\] with \(\phi_{r+}(k)=[r(k+n_{k}+1),...,r(k+n_{k}-n_{a}-\tilde{n}_{b}^{n}+1)]^{T}\). **Lemma V.2** (Training imposed ISS - unstable physical model): _Consider the PGNN (24) with linear physical model. Suppose that Assumption V.1 holds, and choose \(\beta\) as in (53). Define_ \[\Theta:=\Bigg{\{}\theta\in\mathbb{R}^{n_{\theta}}\;\Bigg{|}\;\big{(}b=b^{*}\big{)}\;\wedge\;\left(\|L_{x}\|_{2}^{2}<\frac{(1-\beta)\lambda_{\text{min}}(\tilde{Q})}{c_{\beta}}\right)\Bigg{\}}. \tag{60}\] 3.
_PGNN-based feedforward with graceful degradation_ with NN dimensions \(n_{1}=16\), \(l=1\) (18), using physical model (61) identified according to (14), (39) with \(\lambda=0.07\), \(\Lambda_{\text{phy}}=\Lambda_{\text{NN}}=I\), and \(\gamma=0.1\). _Results:_ Fig. 7 shows the tracking error \(e(k)\) achieved by the different feedforward controllers for \(r_{1}(k)\) in Fig. 6. In order to test robustness of the PGNN, we compute the mean-absolute error (MAE) \[MAE=\frac{1}{N}\sum_{k=0}^{N-1}|e(k)|, \tag{63}\] for two references \(r_{1}(k)\) and \(r_{2}(k)\) in Fig. 6 using different velocities \(\max\big{(}\dot{r}_{i}(k)\big{)}=n\,0.025\ \frac{m}{s}\), \(i=\{1,2\}\), \(n=\{1,...,6\}\). Note that, since \(\max\big{(}r_{2}(k)\big{)}=0.15\ m\), the feedforward is required to extrapolate for \(r_{2}(k)\), see Fig. 4. Fig. 8 displays the resulting MAE values, and Table I lists the MAE for \(r_{1}(k)\). For \(r_{1}(k)\), both the PGNNs outperform the physics-based feedforward in terms of the MAE for all tested velocities. A significant drop in performance is observed when using the PGNN with \(\gamma=0\) on reference \(r_{2}(k)\), which is caused by the PGNN extrapolating badly for \(r(k)>0.1\ m\). The physics-based regularization term (40) prevents this, as seen by the MAE of the PGNN with \(\gamma=0.1\). ### _Nonminimum phase rotating-translating mass_ _System dynamics:_ we consider a translating-rotating mass with force input \(u(k)\) and position output \(y(k)\) at opposite sides of the centre of mass, see Fig. 9. The dynamics are given as \[\begin{split} y(t)=&\left(\frac{1}{ms^{2}+f_{v}s}- \frac{l_{y}^{2}}{Ms^{2}+2l_{x}ds+2l_{x}k}\right)\\ &\left(u(t)-g\big{(}y(t)\big{)}\right),\\ g\big{(}y(t)\big{)}=& c\sin\left(\frac{2\pi}{l_{m}}y(t )\right),\end{split} \tag{64}\] where \(l_{x},l_{y}\in\mathbb{R}_{\geq 0}\) are the width and height of the mass \(m\in\mathbb{R}_{>0}\), \(M=\frac{1}{3}m(l_{x}^{2}+l_{y}^{2})\) is the moment of inertia, \(f_{v}\in\mathbb{R}_{>0}\) the viscous friction, and \(d,k\in\mathbb{R}_{>0}\) the damping and spring constant counteracting rotation at both ends of the mass. The function \(g\big{(}y(t)\big{)}\) is the cogging force and is assumed _unknown_, with \(l_{m}\in\mathbb{R}_{>0}\) the magnet pole pitch and \(c\in\mathbb{R}_{>0}\) the cogging magnitude. Parameter values are listed in Table II. The system (64) is controlled in closed-loop by a ZOH-discretized version of \[C(s)=5\cdot 10^{3}\frac{s+4\pi}{s+20\pi}, \tag{65}\] which achieves a \(1.22\ Hz\) bandwidth. _Training data generation:_ data is generated in closed-loop by sampling \(u(k)\) and \(y(k)\) at a frequency of \(1\ kHz\), while exciting the system with the reference \(r(k)\) in the top window of Fig. 10, and adding a white noise with variance \(50\ N^{2}\) to the input \(u(k)\). _Feedforward controllers:_ ZOH discretization of the transfer function in (64) gives \(n_{a}=n_{b}=4\), \(n_{k}=0\). The identified feedforward controllers are _unstable_, due to the nonminimum phase transfer function in the system (64). We test the following situations: 1. _No feedforward_, i.e., \(u_{\text{ff}}(k)=0\); 2.
_Physics-based feedforward_ with ZPETC stable inversion, i.e., (16), (17) with linear physical model \(f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)}=\theta_{\text{phy}}^{T}\phi(k)\) and parameters identified according to (14) with MSE cost function (15); 3. _PGNN-based feedforward_ with NN dimensions \(n_{1}=16\) and \(l=1\), using ZPETC stable inversion, i.e., (58) identified according to Lemma V.2, which guarantees that the feedforward control input remains bounded; 4. _Physics-based feedforward_ with extended preview, i.e., (16), (17) with linear physical model \(f_{\text{phy}}\big{(}\theta_{\text{phy}},\tilde{\phi}(k)\big{)}=\theta_{\text{ phy}}^{T}\tilde{\phi}(k)\), \(n_{\text{pw}}=20\), and parameters identified according to (14) with MSE cost function (15); 5. _PGNN-based feedforward_ with NN dimensions \(n_{1}=16\) and \(l=1\) and extended preview window, i.e., (56) with \(n_{\text{pw}}=20\) identified according to Lemma V.1. We choose \(\tilde{Q}=Q=I\) to find \(\Theta\) in (54) for the extended preview PGNN (56) or in (60) for the stably inverted PGNN (58). Consider Lemma V.2 as an example. We find \(P\) by solving \(A(\tilde{b}^{d})^{T}PA(\tilde{b}^{d})-P+\tilde{Q}=0\), which gives \(\beta=0.4982\) from (53). Consequently, we use \(\tilde{b}^{n}\) in (59) obtained from the ZPETC approximation and combine it with (60), to obtain \(\|L_{x}\|_{2}^{2}\leq\|403L_{u_{\text{ff}}}\|_{2}^{2}<1.96\cdot 10^{-4}\), which directly translates into NN parameter constraints when using (49). _Results:_ Fig. 10 shows the tracking error resulting from the aforementioned feedforward controllers for the reference shown in the top window. It is clear that the PGNN manages to significantly outperform the physics-based feedforward controller in terms of tracking error, as is confirmed by the mean-squared error (MSE) values listed in Table III. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(m\) & \(l_{x},l_{y}\) & \(M\) & \(f_{v}\) & \(k\) & \(d\) & \(l_{m}\) & \(c\) \\ \hline \hline \(20\) & \(1\) & \(\frac{40}{3}\) & \(50\) & \(25\cdot 10^{3}\) & \(\frac{575}{3}\) & \(0.05\) & \(1\) \\ \hline \(kg\) & \(m\) & \(kgm^{2}\) & \(\frac{kg}{s}\) & \(\frac{kg}{s^{2}}\) & \(\frac{kg}{s}\) & \(m\) & \(N\) \\ \hline \end{tabular} \end{table} TABLE II: Parameter values of the rotating–translating mass displayed in Fig. 9. Fig. 6: References \(r_{1}(k)\) and \(r_{2}(k)\) (top window), and the generated feedforward input for \(r_{1}(k)\) using PGNNs with \(\lambda=0\) (middle window) and \(\lambda=0.07\) (bottom window). Fig. 7: Tracking error for reference \(r_{1}(k)\) resulting from the different feedforward controllers. ## VII Conclusions In this paper, we presented a generalized framework for nonlinear feedforward control using physics-guided neural networks. It was demonstrated that, under suitable assumptions, the PGNNs can recover the inverse system dynamics exactly. Moreover, it was shown that the use of physical knowledge within the PGNN framework enabled tools to ensure improved training convergence with respect to the physical model, as well as providing a form of graceful degradation. A systematic method for analyzing input-to-state stability of PGNN-based feedforward controllers was provided, based on which the identification criterion was revised to impose ISS before training. The PGNN feedforward showed superior performance compared to physics-based feedforward controllers on a real-life industrial linear motor and a nonminimum phase simulation example.
This shows that indeed, the developed PGNN-based feedforward controller design methodology offers a systematic and reliable solution for increasing performance of mechatronic systems with unknown, complex nonlinearities and parasitic effects. Fig. 8: MAE of the tracking errors for \(r_{1}(k)\) and \(r_{2}(k)\) displayed in Fig. 7 with different velocities \(\max\big{(}\dot{r}(k)\big{)}\). Fig. 10: Tracking error resulting from different feedforward controllers. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \cline{2-6} \multicolumn{1}{c|}{} & **No FF** & **Phys.-ZPETC** & **PGNN-ZPETC** & **Phys.-PW** & **PGNN-PW** \\ \hline \hline **MSE** & \(4.50\cdot 10^{-5}\) & \(1.44\cdot 10^{-7}\) & \(2.10\cdot 10^{-8}\) & \(1.59\cdot 10^{-7}\) & \(4.74\cdot 10^{-4}\) \\ \hline \end{tabular} \end{table} TABLE III: MSE in \([m^{2}]\) of the tracking errors in Fig. 10. Fig. 9: Rotating–translating mass with actuation and sensing on opposite sides of the centre of mass. ## Appendix I Proof of Proposition IV.1 Substitution of the inverse dynamics (20) and PGNN (24) into the cost function (25) with \(\Lambda_{\text{NN}}=0\) gives \[\frac{1}{N} \sum_{i=0}^{N-1}\Big{(}f_{\text{phy}}(\theta_{\text{phy}}^{*},\phi _{i})+g(\phi_{i})-f_{\text{phy}}(\theta_{\text{phy}},\phi_{i}) \tag{66}\] \[-f_{\text{NN}}(\theta_{\text{NN}},\phi_{i})\Big{)}^{2}+\lambda \left\|\begin{bmatrix}0&0\\ 0&\Lambda_{\text{phy}}\end{bmatrix}\left(\theta-\begin{bmatrix}0\\ \theta_{\text{phy}}^{*}\end{bmatrix}\right)\right\|_{2}^{2}.\] Both terms in (66) are non-negative, such that the global minimum is attained for \(\hat{\theta}_{\text{phy}}=\theta_{\text{phy}}^{*}\) (regularization term) and \(\hat{\theta}_{\text{NN}}=\theta_{\text{NN}}^{*}\) (MSE term, after substitution of \(\theta_{\text{phy}}=\theta_{\text{phy}}^{*}\)). ## Appendix II Proof of Proposition IV.2 Proof: The proof of (34) follows directly by observing that \(\theta_{\text{L}}^{(j)}\) in (32) is the least squares solution of (14), (25) for the PGNN (30) given \(\theta_{\text{NL}}^{(j)}\). Since \(M(\theta_{\text{NL}}^{(j)})\) is non-singular, \(\theta_{\text{L}}^{(j)}\) is unique, such that (34) holds with strict inequality if and only if \(\theta_{\text{L}}^{(j)}\neq\overline{\theta}_{\text{L}}\). Observe that, \(\theta_{\text{L}}^{(j)}-\overline{\theta}_{\text{L}}=\theta_{\text{L}}^{(j)}- M\big{(}\theta_{\text{NL}}^{(j)}\big{)}^{-1}M\big{(}\theta_{\text{NL}}^{(j)} \big{)}\overline{\theta}_{\text{L}}\neq 0\) gives condition (35) after substitution of (32) and (33) and using nonsingularity of \(M(\theta_{\text{NL}}^{(j)})\). Secondly, (36) follows by observing that choosing \(\theta_{\text{L}}^{(j)}\neq\overline{\theta}_{\text{L}}\) cannot decrease \(V_{\text{reg}}\) when \(\Lambda=0\). Correspondingly, (34) states that \(V_{\text{MSE}}\) must decrease if (35) holds, such that \[V_{\text{MSE}}\left(\begin{bmatrix}\theta_{\text{NL}}^{(j)}\\ \theta_{\text{L}}^{(j)}\end{bmatrix},Z^{N}\right) \leq V\left(\begin{bmatrix}\theta_{\text{NL}}^{(j)}\\ \overline{\theta}_{\text{L}}\end{bmatrix},Z^{N}\right) \tag{67}\] \[=V_{\text{MSE}}(\theta_{\text{phy}}^{*},Z^{N}),\] and with strict inequality if (35) holds. ## Appendix III Proof of Theorem V.1 Proof: The proof follows by showing that \(V\big{(}x(k)\big{)}=x(k)^{T}Px(k)\) is an ISS-Lyapunov function as in Definition V.2.
Condition (46) is satisfied with \(\kappa_{1}\big{(}|x(k)|\big{)}=\lambda_{\text{min}}(P)x(k)^{T}x(k)\) and \(\kappa_{2}\big{(}|x(k)|\big{)}=\lambda_{\text{max}}(P)x(k)^{T}x(k)\). We compute the difference \(V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)}\) to obtain \[V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)}=-x(k)^{T}Qx(k) \tag{68}\] \[\qquad\qquad+2C_{1}B^{T}PAx(k)+B^{T}PBC_{1}^{2},\] \[C_{1}:=\hat{a}^{T}\phi_{r}(k+1)+f_{\text{NN}}\left(\hat{\theta}_ {\text{NN}},\begin{bmatrix}\phi_{r}(k+1)\\ x(k)\end{bmatrix}\right).\] For a term \(2p^{T}q\) and a \(\varepsilon\in\mathbb{R}_{>0}\), we can complete the squares as: \[2p^{T}q=\varepsilon p^{T}p-\varepsilon(p-\frac{1}{\varepsilon}q)^{T}(p-\frac{ 1}{\varepsilon}q)+\frac{1}{\varepsilon}q^{T}q\leq\varepsilon p^{T}p+\frac{1}{ \varepsilon}q^{T}q. \tag{69}\] We complete the squares (69) of \(2C_{1}B^{T}PAx(k)\) in (68) using \(\varepsilon=\beta\lambda_{\text{min}}(Q)\), \(\beta\in\mathbb{R}_{>0}\), \(p=x(k)\), and \(q=A^{T}PBC_{1}\) to obtain \[V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)}\leq- (1-\beta)\lambda_{\text{min}}(Q)x(k)^{T}x(k) \tag{70}\] \[+c_{\beta}C_{1}^{2},\] with \(c_{\beta}:=B^{T}P\left(I+\frac{1}{\beta\lambda_{\text{min}}(Q)}AA^{T}P\right)B\). Similarly, by substituting \(C_{1}\) in (70) and completing the squares (69) for \(2\hat{a}^{T}\phi_{r}(k+1)f_{\text{NN}}\) using \(\varepsilon=\beta_{1}\), \(\beta_{1}\in\mathbb{R}_{>0}\), \(p=f_{\text{NN}}\) and \(q=\hat{a}^{T}\phi_{r}(k+1)\), we obtain \[V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)} \leq-(1-\beta)\lambda_{\text{min}}(Q)x(k)^{T}x(k) \tag{71}\] \[+c_{\beta}(1+\beta_{1})f_{\text{NN}}\left(\hat{\theta}_{\text{NN} },\begin{bmatrix}\phi_{r}(k+1)\\ x(k)\end{bmatrix}\right)^{2}\] \[+c_{\beta}(1+\frac{1}{\beta_{1}})\big{(}\hat{a}^{T}\phi_{r}(k+1) \big{)}^{2},\] with \(\beta_{1}\in\mathbb{R}_{>0}\). Finally, substitution of the Lipschitz condition (48) with \(\phi_{\text{ff}}^{B}(k)=0\) and (50), and completing the squares (69) for \(2L_{u_{\text{ff}}}^{T}x(k)L_{r}^{T}\phi_{r}(k+1)\) using \(\varepsilon=\beta_{2}\), \(\beta_{2}\in\mathbb{R}_{>0}\), \(p=L_{u_{\text{ff}}}^{T}x(k)\), and \(q=L_{r}^{T}\phi_{r}(k+1)\), gives \[V\big{(}x(k+1)\big{)}-V\big{(}x(k)\big{)} \leq-x(k)^{T}\Big{(}(1-\beta)\lambda_{\text{min}}(Q)-c_{\beta}\cdot \tag{72}\] \[(1+\beta_{1})(1+\beta_{2})L_{u_{\text{ff}}}L_{u_{\text{ff}}}^{T}\Big{)}x(k)+ \phi_{r}(k+1)^{T}c_{\beta}\cdot\] \[\Big{(}(1+\beta_{1})(1+\frac{1}{\beta_{2}})L_{r}L_{r}^{T}+(1+\frac {1}{\beta_{1}})\hat{a}\hat{a}^{T}\Big{)}\phi_{r}(k+1)\] \[=:-\kappa_{3}\big{(}|x(k)|\big{)}+\sigma\big{(}|\phi_{r}(k+1)| \big{)}.\] It is clear that \(\kappa_{3}\big{(}|x(k)|\big{)}\) is a \(\mathcal{K}_{\infty}\)-function if the matrix between \(x(k)^{T}\) and \(x(k)\) is positive definite, which reduces to the scalar condition (52) when using the Cauchy-Schwarz inequality, i.e., \(\big{(}L_{u_{\text{ff}}}^{T}x(k)\big{)}^{2}=x(k)^{T}L_{u_{\text{ff}}}L_{u_{\text{ff}}}^{T}x(k)\leq L_{u_{\text{ff}}}^{T}L_{u_{\text{ff}}}x(k)^{T}x(k)\), and choosing \(\beta_{1}\), \(\beta_{2}\) arbitrarily small. ## Appendix IV Proof of Lemma V.1 Proof: The proof follows directly from Theorem V.1 and observing that limiting \(\theta\in\Theta\) ensures that (52) is satisfied by using the NN Lipschitz bound in (49). ## Appendix V Proof of Lemma V.2 Proof: The proof follows by rewriting the stable inverted PGNN (58) into state-space form, and following the proof of Theorem V.1 with the Lipschitz bound in (59). A \((P,\tilde{Q})\) pair exists, since the stable inversion of \(\tilde{G}(\tilde{b},q^{-1})\) gives a Schur \(A(\tilde{b})\).
2301.02723
CFG2VEC: Hierarchical Graph Neural Network for Cross-Architectural Software Reverse Engineering
Mission-critical embedded software is critical to our society's infrastructure but can be subject to new security vulnerabilities as technology advances. When security issues arise, Reverse Engineers (REs) use Software Reverse Engineering (SRE) tools to analyze vulnerable binaries. However, existing tools have limited support, and REs undergo a time-consuming, costly, and error-prone process that requires experience and expertise to understand the behaviors of software and vulnerabilities. To improve these tools, we propose $\textit{cfg2vec}$, a Hierarchical Graph Neural Network (GNN) based approach. To represent binary, we propose a novel Graph-of-Graph (GoG) representation, combining the information of control-flow and function-call graphs. Our $\textit{cfg2vec}$ learns how to represent each binary function compiled from various CPU architectures, utilizing hierarchical GNN and the siamese network-based supervised learning architecture. We evaluate $\textit{cfg2vec}$'s capability of predicting function names from stripped binaries. Our results show that $\textit{cfg2vec}$ outperforms the state-of-the-art by $24.54\%$ in predicting function names and can even achieve $51.84\%$ better given more training data. Additionally, $\textit{cfg2vec}$ consistently outperforms the state-of-the-art for all CPU architectures, while the baseline requires multiple training to achieve similar performance. More importantly, our results demonstrate that our $\textit{cfg2vec}$ could tackle binaries built from unseen CPU architectures, thus indicating that our approach can generalize the learned knowledge. Lastly, we demonstrate its practicability by implementing it as a Ghidra plugin used during resolving DARPA Assured MicroPatching (AMP) challenges.
Shih-Yuan Yu, Yonatan Gizachew Achamyeleh, Chonghan Wang, Anton Kocheturov, Patrick Eisen, Mohammad Abdullah Al Faruque
2023-01-06T21:45:50Z
http://arxiv.org/abs/2301.02723v1
# CFG2VEC: Hierarchical Graph Neural Network for Cross-Architectural Software Reverse Engineering ###### Abstract Mission-critical embedded software is critical to our society's infrastructure but can be subject to new security vulnerabilities as technology advances. When security issues arise, _Reverse Engineers_ (REs) use _Software Reverse Engineering_ (SRE) tools to analyze vulnerable binaries. However, existing tools have limited support, and REs undergo a time-consuming, costly, and error-prone process that requires experience and expertise to understand the behaviors of software and vulnerabilities. To improve these tools, we propose _cfg2vec_, a Hierarchical _Graph Neural Network_ (GNN) based approach. To represent binary, we propose a novel _Graph-of-Graph_ (GoG) representation, combining the information of control-flow and function-call graphs. Our _cfg2vec_ learns how to represent each binary function compiled from various CPU architectures, utilizing hierarchical GNN and the siamese network-based supervised learning architecture. We evaluate _cfg2vec_'s capability of predicting function names from stripped binaries. Our results show that _cfg2vec_ outperforms the state-of-the-art by 24.54% in predicting function names and can even achieve 51.84% better given more training data. Additionally, _cfg2vec_ consistently outperforms the state-of-the-art for all CPU architectures, while the baseline requires multiple training to achieve similar performance. More importantly, our results demonstrate that our _cfg2vec_ could tackle binaries built from unseen CPU architectures, thus indicating that our approach can generalize the learned knowledge. Lastly, we demonstrate its practicability by implementing it as a _Ghidra_ plugin used while resolving DARPA _Assured MicroPatching_ (AMP) challenges. Software Reverse Engineering; Binary Analysis; Cross-Architecture; Machine Learning; Graph Neural Network; ## I Introduction In mission-critical systems, embedded software is vital in manipulating physical processes and executing missions that could pose risks to human operators. Recently, the _Internet of Things_ (IoT) has created a market valued at 19 trillion dollars and has drastically grown the number of connected devices to approximately 35 billion by 2025 [1, 2, 3]. However, while IoT brings technological growth, it unintentionally exposes mission-critical systems to novel vulnerabilities [4, 5, 6]. The reported number of IoT cyberattacks increased by 300% in 2019 [7], while the discovered software vulnerabilities rose from 1.6k to 100k [8]. The consequences can be detrimental: as indicated in [9], the _Heartbleed_ bug [10] can lead to a leakage of up to 64K of memory, threatening not only personal but also organizational information security. Besides, _Shellshock_ is a bug in the Bash command-line shell, but it has existed for 30 years and remains a threat to enterprises today [11, 12]. For mission-critical systems, unexpected disruptions can incur losses of millions of dollars even if they only last for a few hours or minutes [13]. As a result, timely analysis of the impacted software and patching of its vulnerabilities becomes critical. However, mission-critical systems usually use software that can last for decades due to the criticality of the missions. Over time, these systems become legacy, and the number of newly-discovered threats can increase (as illustrated in Figure 1). Typically, for legacy software, the original development environment, maintenance support, or source code might no longer exist.
To address vulnerabilities, vendors offer patches in the form of source code changes based on the current software version (e.g., ver 0.9). However, the only available data in the legacy system is binary based on its source code (e.g., ver 0.1). Such a version gap poses challenges in applying patches to the legacy binaries, leaving direct binary analysis as the only solution for applying patches to legacy software. Today, as Figure 2 shows, _Reverse Engineers_ (REs) have to leverage _Software Reverse Engineering_ (SRE) tools such as _Ghidra_[14], _HexRays_[15], and _radare2_[16] to first disassemble and decompile binaries into higher-level representations (e.g., C or C++). Typically, these tools take the debugging information, strings, and the symbol table of a binary to reconstruct function names and variable names, allowing REs to rebuild a software's structure and functionality without access to source code [17]. Fig. 1: Legacy software life cycle. For REs, these symbols encode the context of the source code and provide invaluable information that could help them understand the program's logic as they work to patch vulnerable binaries. However, symbols are often excluded for optimizing the binary's footprint in mission-critical legacy systems where memory is limited. Because recovering symbols from _stripped binaries_ is not straightforward, most decompilers assign meaningless symbol names to coding elements. As for understanding the software semantics, REs have to leverage their experience and expertise to consume the information and then interpret the semantics of each coding element. Recent works tackle these challenges with _Machine Learning_ (ML), aiming to recover the program's information from raw binaries. For example, [18] and [19] associate code features to function names and model the relationships between such code features and the corresponding source-level information (variable names in [19], variable & function names in [18]). Meanwhile, [20] and [21] use an encoder-decoder network structure to predict function names from stripped binary functions based on instruction sequences and control flows. However, none of them support cross-architectural debug information reconstruction. On the other side, there exist works focusing on cross-platform support in their ML models [22, 23, 24]. These works focus on modeling the binary code similarity, extracting a real-valued vector from each control-flow graph (CFG) with attributed features, and then computing the _Structural Similarity_ between the feature vectors of binary functions built from different CPU architectures. In this paper, as part of a multi-industry-academia joint initiative between Siemens, the Johns Hopkins University Applied Physics Laboratory (JHU/APL), BAE Systems (BAE), and UCI, we propose _cfg2vec_, which utilizes a hierarchical _Graph Neural Network_ (GNN) for reconstructing the name of each binary function, aiming to develop the capacity for quick patching of legacy binaries in mission-critical systems. Our _cfg2vec_ forms a _Graph-of-Graph_ (GoG) representation, combining the CFG and the _function-call graph_ (FCG) to model the relationship between binary functions' representation and their semantic names. Besides, _cfg2vec_ can tackle cross-architectural binaries thanks to its Siamese-based network architecture, as shown in Figure 3. One crucial use case of cross-architectural decompilation is _patching_, where the goal is to identify a known vulnerability or a bug and apply a patch.
However, there can be architecture gaps when software with a bug can be compiled into many devices with diverse hardware architectures. For example, it is challenging to patch a stripped binary from an exotic embedded architecture compiled ten years ago that is vulnerable to a known attack such as _Heartbleed_ [10]. While the reference patch is available in software, the reference architecture may not be readily available or documented, or the vendor may no longer exist. Under such circumstances, mapping code features across architectures is very helpful. It would allow for identifying similarities in code between a stripped binary that is vulnerable and its reference patch, even if the patch were built for a different type of CPU architecture. For _cfg2vec_, our targeted contributions are as follows:

* We propose representing binary functions as a _Graph-of-Graph_ (GoG) and demonstrate its usefulness in reconstructing function names from stripped binaries.
* We propose a novel methodology, _cfg2vec_, that uses a hierarchical _Graph Neural Network_ (GNN) to model control-flow and function-calling relations in binaries.
* We propose using a cross-architectural loss when training, allowing _cfg2vec_ to capture architecture-agnostic representations of binaries.
* We release _cfg2vec_ in a GitHub repository: [https://github.com/AICPS/mindsight_cfg2vec](https://github.com/AICPS/mindsight_cfg2vec).
* We integrate our _cfg2vec_ into an experimental Ghidra plugin, assisting in realistic scenarios of patching DARPA _Assured MicroPatching_ (AMP) challenge binaries.

The paper is structured as follows: Section II discusses related works and fundamentals to provide a better understanding of the paper. Section III describes _cfg2vec_, including the problem formulation, data preprocessing, and an introduction to our main pipeline. Section IV shows our experimental results. Lastly, we conclude the paper in Section V.

## II Related Work
This section introduces software reverse engineering backgrounds, discusses the related works using machine learning to improve reverse engineering, and ultimately covers graph learning for binary analysis.

### _Software Reverse Engineering_
_Software Reverse Engineering_ (SRE) aims at understanding the behavior of a program without having access to its source code and is often used in many applications such as detecting malware [25, 26], discovering vulnerabilities, and patching bugs in _legacy software_ [27, 28]. One primary tool that _Reverse Engineers_ (REs) use to inspect programs is the _disassembler_, which translates a binary into low-level assembly code. Examples of such tools include _GNU Binutils' objdump_ [29], _IDA_ [15], Binary Ninja [30], and Hopper [31]. However, even with these tools, reasoning at the assembly level still requires considerable cognitive effort from RE experts. More recently, REs use _decompilers_ such as _Hex-Rays_ [32] or _Ghidra_ [14] to reverse the compiling process by further translating the output of disassemblers into code that resembles high-level programming languages such as C or C++, reducing the burden of understanding assembly code. From assembly instructions, these decompilers can use program analysis and heuristics to reconstruct variables, types, functions, and the control-flow structure of a binary. However, the decompilation is incomplete even if these decompilers generate a higher-level output for better code understanding.
The reason is that the compilation process discards the source-level information and lowers its abstraction level in exchange for a smaller footprint, faster execution time, or even security considerations. Source-level information such as comments, variable names, function names, and idiomatic structure can be essential for understanding a program but is typically unavailable in the output of these decompilers. As Figure 2 demonstrates, REs use disassemblers or decompilers to generate high-level source code. Besides, [33] indicates that REs take notes and grant names to the critical functions related to the vulnerabilities, creating annotated source code on top of the high-level machine-generated source code. While annotating the source code, REs also analyze the significant parts related to the vulnerability and ignore general instructions or unrelated code. At the same time, understanding the logic flow among functions is another major task they must focus on to resolve their tasks. After classification, annotation, and understanding, REs experiment with several viable remedies to find the correct patch that fixes the vulnerability.

### _Machine Learning for Reverse Engineering_
Software binary analysis is a straightforward first step to enhance security, as developers usually deploy software as binaries [34]. Usually, experts conduct the patching process or vulnerability analysis by understanding the compilation source, function signatures, and variable information. However, after compilation, such information is usually stripped or deliberately obfuscated. Software binary analysis becomes more challenging in this case, as analysts have to recover the source-level information based on their experience and expertise. Early recovery work for binaries relied on manual effort but suffered from low efficiency, high cost, and the error-prone nature of reverse engineering. As _Machine Learning_ (ML) has significantly advanced in its reasoning capability, applying ML to reconstruct higher-level source code information as an alternative to manual approaches has attracted considerable research attention. For example, [35] was the first approach that used neural network-based and graph-based models, predicting function types to assist the reverse engineer in understanding the binary. [36] also predicted function names with neural networks, aggregating the related features of sections of binary vectors. It then analyzes the connections between each function in the source code (e.g., Java) and the corresponding function names for function name prediction. [18], on the other hand, did not use a neural network. It combined a decision-tree-based classification algorithm and structured prediction with a probabilistic graphical model, then matched the function name by analyzing symbol names, types, and locations. However, [18] can only predict from a predetermined closed set and is incapable of generalizing to new names. As the language for naming functions is similar to natural language, recent research has leaned toward the use of _Natural Language Processing_ (NLP) [20, 21, 37]. Precisely, these models predict semantic tokens based on the function names in the library, composing the function name during inference. The underlying premise is that each token corresponds in some way to the attributes and functionality of the function. [20] uses the _Control-Flow Graph_ (CFG) to predict function names.
It combined static analysis with an LSTM and a transformer neural model to recover function names. However, its dataset, consisting of unbalanced data and insufficient features, was limited and hindered full performance. [37] was designed to address this dataset limitation. It provided the _UbuntuDataset_, which contains more than 9 million functions from 22K software packages. [21] demonstrated its framework's effectiveness by building a large dataset. It considers the fine-grained sequence and structure information of assembly code when modeling and realizing function name prediction. Meanwhile, [21] reduced the diversity of the data (instructions or words) while keeping the basic semantics unchanged, similar to word stemming and semantics in NLP. However, these works have low precision scores on prediction tasks; [21], for example, only achieves around 41% accuracy in correctly predicting function name subtokens. Moreover, the metrics for the inference of unknown functions are substantially lower [21], making it difficult for REs to find these tools helpful in practice. Although many existing works can reconstruct source-level information, none of them supports reconstructing cross-platform debug information.

Cross-compilation is becoming more popular in software development. Hardware manufacturers, for instance, often reuse the same firmware code base across several devices running on various architectures [38]. A tool that performs cross-architecture function name prediction/matching would be beneficial if we have a stripped binary compiled for one architecture and a binary of a comparable program, with debug symbols, compiled for another architecture. We may use the binary with the debug symbols to predict the names of functions in the stripped binary, which significantly aids debugging.

Fig. 2: The RE flow to solve security issues.

A tool that could capture the architecture-agnostic characteristics of binaries would also help in malware detection, as the source code of malware can be compiled for different architectures [38, 39]. Comparing two binaries of different architectures is complicated because they will have different instruction sets, calling conventions, register sets, etc. Furthermore, assembly instructions from different architectures often cannot be compared directly due to the slightly different behaviors of the architectures [40]. Cross-architecture function name prediction will assist in finding a malicious function in a program compiled for different architectures by learning its features from a binary compiled for just one architecture. The tools mentioned above are not architecture-agnostic; thus, we cannot utilize them for such applications. To address the flaws mentioned above, aid in creating more efficient decompilers, and make reverse engineering more accessible, we propose _cfg2vec_. Incorporating a cross-architectural siamese network architecture, our _cfg2vec_ can learn to extract robust, platform-independent features, enhancing the state-of-the-art by achieving function name reconstruction across cross-architectural binaries.

### _Graph Learning for Binary Analysis_
Graph learning has become a practical approach across fields [41, 42, 43, 44]. Although conventional ML can effectively capture the features hidden in Euclidean data, such as images, text, or videos, our work focuses on applications where the core data is graph-structured.
Graphs can be irregular: a graph may contain a variable number of unordered nodes, and nodes can have varying numbers of neighbors, making standard deep learning operations (e.g., 2D convolution) challenging to apply. The operations in conventional ML methods can only be applied by projecting non-Euclidean data into a low-dimensional embedding space. In graph learning, _Graph Embeddings_ (GE) can transform a graph into a vector (embedding of a graph) or a set of vectors (embeddings of nodes or edges) while preserving the relevant structural information about the graph [41]. A _Graph Neural Network_ (GNN) is a model aiming at addressing graph-related tasks in an end-to-end manner, where the main idea is to generate a node's representation by aggregating its own representation and the representations of its neighbors [42]. A GNN stacks multiple graph convolution layers, graph pooling layers, and a graph readout to generate a low-dimensional graph embedding from high-dimensional graph-structured data.

In software binary analysis, many approaches use _Control-Flow Graphs_ (CFGs) as the primary representations. For example, _Genius_ forms an _Attributed Control-Flow Graph_ (ACFG) representation for each binary function by extracting the raw attributes from each _Basic Block_ (BB), a straight-line code sequence with no branching in or out except at the entry and exit, in an ACFG [22]. _Genius_ measures the similarity of a pair of ACFGs through a bipartite graph matching algorithm, and the ACFGs are then clustered based on similarity. _Genius_ leverages a codebook for retrieving the embedding of an ACFG based on similarity. Another approach, _Gemini_, proposes a deep neural network-based model along with a siamese architecture for modeling binary similarities with greater efficiency and accuracy than other state-of-the-art models of the time [23]. _Gemini_ takes in a pair of ACFGs extracted from raw binary functions generated from known vulnerabilities in code and then embeds them with a shared _Structure2vec_ model in its network architecture. Once embedded, _Gemini_ trains its model with a loss function that calculates the cosine similarity between the two embedded representations. _Gemini_ outperforms models like _Genius_ and other approaches such as bipartite graph matching. In the literature, there exist other works that consider the _Function Call Graph_ (FCG) as the primary data structure in binary analysis for malware detection [45]. Our _cfg2vec_ extracts relevant platform-independent features by combining the usage of CFGs and FCGs, resulting in a _Graph-of-Graph_ (GoG) representation for cross-architectural high-level information reconstruction tasks (e.g., function names).

## III CFG2VEC Architecture
This section begins with the problem formulation. Next, as Figure 4 shows, we depict how our _cfg2vec_ extracts the _Graph-of-Graph_ (GoG) representation from each software binary. Lastly, we describe the network architecture of _cfg2vec_.

### _Problem Formulation_
In our work, given a binary, denoted as \(p\), compiled for different CPU architectures, we extract a graph-of-graph (GoG) representation \(\mathcal{G}=(\mathcal{V},\mathcal{A})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{A}\) is the adjacency matrix (as Figure 3 shows). The nodes in \(\mathcal{V}\) represent functions, and the edges in \(\mathcal{A}\) indicate their cross-referencing relationships.
That is, each node \(f_{i}\in\mathcal{V}\) is a CFG, and we denote it as \(f_{i}=(B,A,\phi)\), where the nodes in \(B\) represent the basic blocks and the edges in \(A\) denote their dependency relationships. \(\phi\) is a mapping function that maps each basic block in assembly form to its corresponding extracted attributes, \(\phi(v_{i})=C^{k}\), where \(C\) is a numeric value and \(k\) is the number of attributes for the basic block (BB). Whereas the CFG structure is meant to provide more information at the lower BB level, the GoG structure is intended for recovering information at the overarching function level between the CFGs. Figure 3 is an example of a partial GoG structure with a closer inspection of one of its CFG nodes and of a single CFG BB node, showing the set of features corresponding to that BB node. The goal is to design an efficient and effective graph embedding technique that can be used for reconstructing the function name of each function \(f_{i}\in\mathcal{V}\).

### _Ghidra Data Toolkit for Graph Extraction_
To extract the structured representation required for _cfg2vec_, we leverage the state-of-the-art decompiler _Ghidra_ [14] and the _Ghidra Headless Analyzer_1. The _headless analyzer_ is a command-line version of _Ghidra_ allowing users to perform many tasks (such as analyzing a binary file) supported by _Ghidra_ via a command-line interface. For extracting a GoG from a binary, we developed our _Ghidra Data Toolkit_ (GDT); GDT is a set of Java-based metadata extraction scripts used for instrumenting the _Ghidra Headless Analyzer_. First, GDT programmatically analyzes the given executable file and stores the extracted information in the internal Ghidra database. Ghidra provides a set of APIs to access the database and retrieve the information about the analyzed binary. GDT uses these APIs to export information such as Ghidra's PCode and the call graph of each function. Specifically, the _FunctionManager_ API allows us to manipulate the information of each decompiled function in the binary and acquire the cross-calling dependencies between functions. For each function, we utilized another Ghidra API, _DecompInterface_2, to extract 12 attributes associated with each basic block in a function. These attributes correspond to the numbers of arithmetic, logic, transfer, call, data-transfer, SSA, compare, and pointer instructions, the number of other instructions not falling within those categories, the total number of instructions, and the total numbers of constants and strings within that BB. Lastly, by integrating all of the information, we form a GoG representation \(\mathcal{G}\) for each binary \(p\). We repeat this process until all binaries are converted to the GoG structure. We feed the resulting GoG representations to our model in batches, with the batch size denoted as \(B\).

Footnote 2: Documentation of _Ghidra API DecompInterface_: [https://ghidra.re/ghidra_docs/api/ghidra/app/decompiler/DecompInterface.html](https://ghidra.re/ghidra_docs/api/ghidra/app/decompiler/DecompInterface.html)
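To make the extracted structure concrete, the GoG can be thought of as a nested container: a list of CFGs (one per function), each holding 12-dimensional basic-block attribute vectors, plus call edges between the CFGs. The following is a minimal sketch of such a container in Python; the class and field names are our own illustration, not part of the released GDT.

```python
# A minimal sketch (hypothetical names) of the GoG container described above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BasicBlock:
    # 12 numeric attributes per BB: counts of arithmetic, logic, transfer,
    # call, data-transfer, SSA, compare, pointer, and other instructions,
    # plus total instructions, constants, and strings.
    attrs: List[float]  # len(attrs) == 12

@dataclass
class CFG:
    name: str                     # ground-truth function name (when not stripped)
    blocks: List[BasicBlock]
    edges: List[Tuple[int, int]]  # control-flow edges between BB indices

@dataclass
class GoG:
    functions: List[CFG]
    calls: List[Tuple[int, int]]  # function-call edges between CFG indices
```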
### _Hierarchical Graph Neural Network_
Once \(\mathcal{G}\) is extracted by the GDT, we feed it to our hierarchical network architecture (inspired by [46]), which contains both a _CFG Graph Embedding Layer_ and a _GoG Graph Embedding Layer_, as Figure 4 shows. For each GoG structure, we denote it as \(\mathcal{G}=(\mathcal{V},\mathcal{A})\), where \(\mathcal{V}\) is the set of functions associated with \(\mathcal{G}\) and \(\mathcal{A}\) indicates the calling relationships between the functions in \(\mathcal{V}\). Each function in \(\mathcal{V}\) is in the form of a CFG \(f_{i}=(B,A,\phi)\), where each node \(b\in B\) is a BB represented as a fixed-length attributed vector \(b\in\mathbb{R}^{d}\), with \(d\) the attribute dimension mentioned earlier. \(A\) encodes the pairwise control-flow dependency relationships between these BBs.

#### III-C1 CFG Graph Embedding Layer
Our network architecture first feeds all functions in a batch of GoGs to the _CFG Graph Embedding Layer_, consisting of multiple graph convolutional layers and a graph readout operation. The input to this layer is a function \(f_{i}=(B,A,\phi)\), and the output is a fixed-dimensional vector representing the function. For each BB \(b_{k}\), we let \(b_{k}^{0}=b_{k}\), and we update \(b_{k}^{t}\) to \(b_{k}^{t+1}\) with the graph convolution operation shown as follows:

\[b_{k}^{t+1}=f_{G}\Big(Wb_{k}^{t}+\sum_{b_{m}\in A_{k}}Mb_{m}^{t}\Big)\]

where \(f_{G}\) is a non-linear activation function such as ReLU, \(A_{k}\) is the list of adjacent BBs of \(b_{k}\), and \(W\in\mathbb{R}^{d\times d}\) and \(M\in\mathbb{R}^{d\times d}\) are weights learned during training. We run \(T\) iterations of such a convolution, where \(T\) is a tunable hyperparameter of our model. During the updates, each BB gradually aggregates the global information of the control-flow dependency relations into its representation, utilizing the representations of its neighbors. We obtain the final representation of each BB as \(b_{k}^{T}\). To acquire the representation of the function \(f_{i}\), we apply a graph readout operation such as _sum-readout_, described as follows:

\[g^{(T)}=\sum_{b_{k}\in B}b_{k}^{T} \tag{1}\]

We assign the value of \(g^{(T)}\) (a.k.a. the CFG embedding) to \(f_{i}\). The graph readout operation can be replaced with _mean-readout_ or _max-readout_.

#### III-C2 GoG Graph Embedding Layer
Once all the functions have been converted to fixed-length graph embeddings, we feed \(\mathcal{G}\) to the second layer of _cfg2vec_, the _GoG Embedding Layer_. Here, for each function \(f_{k}\), we apply another \(L\) iterations of graph convolution over the function-call structure. The updates can be illustrated as follows:

\[f_{k}^{(l+1)}=f_{GoG}\Big(Uf_{k}^{(l)}+\sum_{f_{m}\in C_{k}}Vf_{m}^{(l)}\Big) \tag{2}\]

where \(f_{GoG}\) is a non-linear activation function, \(C_{k}\) is the list of adjacent (calling) functions of the function \(f_{k}\), and \(U\in\mathbb{R}^{d\times d}\) and \(V\in\mathbb{R}^{d\times d}\) are weights learned during training. Lastly, we take \(f_{k}^{(L)}\) as the representation that considers both the CFG and GoG graph structures. We use these updated representations to perform cross-architecture function similarity learning.

Fig. 3: An example of a _Graph-of-Graph_ (GoG) of a binary compiled from the package Freccell for the amd64 CPU architecture.
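The two layers above map directly onto a few lines of PyTorch. The following is a minimal sketch of the convolution update and sum-readout under the stated equations, using dense adjacency matrices for clarity; the actual implementation uses GCN and GAT layers, so this illustrates the equations rather than the released code.

```python
# A minimal PyTorch sketch of the two embedding layers (eqs. 1-2); dense
# adjacency matrices are assumed for clarity, not efficiency.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One update step: h_k <- ReLU(W h_k + sum_{m in N(k)} M h_m)."""
    def __init__(self, d):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)   # self weight
        self.M = nn.Linear(d, d, bias=False)   # neighbor weight
    def forward(self, H, A):                   # H: [n, d], A: [n, n] adjacency
        return torch.relu(self.W(H) + A @ self.M(H))

class CFGEmbedding(nn.Module):
    """T graph convolutions over basic blocks, then a sum-readout (eq. 1)."""
    def __init__(self, d, T=3):
        super().__init__()
        self.layers = nn.ModuleList(GraphConv(d) for _ in range(T))
    def forward(self, B, A):                   # B: [num_bb, d] BB attributes
        for conv in self.layers:
            B = conv(B, A)
        return B.sum(dim=0)                    # CFG embedding g^(T): [d]

class GoGEmbedding(nn.Module):
    """L graph convolutions over the function-call graph (eq. 2)."""
    def __init__(self, d, L=1):
        super().__init__()
        self.layers = nn.ModuleList(GraphConv(d) for _ in range(L))
    def forward(self, F, C):                   # F: [num_fn, d], C: call adjacency
        for conv in self.layers:
            F = conv(F, C)
        return F                               # per-function embeddings f^(L)
```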
#### III-C3 Siamese-based Cross-Architectural Function Similarity
Given a batch of GoGs \(\mathcal{B}=\{GoG_{1},GoG_{2},...,GoG_{B}\}\), we apply the hierarchical graph neural network to acquire the set of updated function embeddings, denoted as \(B_{F}=\{f_{1}^{(L)},f_{2}^{(L)},...,f_{K}^{(L)}\}\). We calculate the function similarity for each function pair with cosine similarity, denoted as \(\hat{y}\in[-1,1]\). The loss function \(J\) between a prediction \(\hat{y}\) and a ground-truth label \(y\), which indicates whether a pair of functions is the same function or not, is calculated as follows:

\[J(\hat{y},y)=\left\{\begin{array}{ll}1-\hat{y},&\text{if }y=1,\\ \max(0,\hat{y}-m),&\text{if }y=-1,\end{array}\right. \tag{3}\]

The final loss \(L\) is then calculated as follows:

\[L=H(Y,\hat{Y})=\sum_{i}J(\hat{y_{i}},y_{i}), \tag{4}\]

where \(Y\) stands for the ground-truth labels (similarity or dissimilarity) and \(\hat{Y}\) represents the corresponding predictions. More specifically, we label a pair of functions as similar if they are the same function compiled for different CPU architectures. The constant \(m\) prevents the learned embeddings from becoming distorted (by default, \(0.5\)). To maintain the balance between positive and negative training samples, we developed a custom batching algorithm. When a binary of some package is added to a given batch, the algorithm finds and adds a binary of the same package, built for a different architecture, to that batch as a positive sample. It also includes a binary from another package as a negative sample. This gives every batch a balanced proportion of positive and negative samples. Finally, we use the loss \(L\) to update all the associated weights in our neural networks with an _Adam_ optimizer. Once trained, we use the model to perform function name reconstruction tasks.
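As a concrete reference, a minimal PyTorch sketch of the pairwise loss in eqs. (3)-(4) follows, with labels \(y\in\{+1,-1\}\) and \(\hat{y}\) the cosine similarity of a pair of function embeddings; the margin \(m\) defaults to 0.5 as stated above.

```python
# A minimal sketch of the siamese loss in eqs. (3)-(4).
import torch

def siamese_loss(y_hat: torch.Tensor, y: torch.Tensor, m: float = 0.5) -> torch.Tensor:
    pos = 1.0 - y_hat                         # pull similar pairs toward 1
    neg = torch.clamp(y_hat - m, min=0.0)     # push dissimilar pairs below margin m
    per_pair = torch.where(y == 1, pos, neg)  # eq. (3), applied element-wise
    return per_pair.sum()                     # eq. (4)

# Usage: e1, e2 are [K, d] embeddings of paired functions, labels in {+1, -1}.
# y_hat = torch.nn.functional.cosine_similarity(e1, e2, dim=-1)
# loss = siamese_loss(y_hat, labels)
```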
## IV Experimental Results
In this section, we evaluate _cfg2vec_'s capability in predicting function names. We first describe the dataset preparation and training setup processes. Then, we present the comparison of _cfg2vec_ against the baseline in predicting function names. Although many baseline candidates tackle the same problem [18, 20, 37, 21], some require purchasing a paid version of IDA Pro to preprocess datasets, and some do not open-source their implementations. Therefore, [18] was the only feasible choice, as running the other models on our datasets was almost impossible. Next, we show the results of an ablation study of _cfg2vec_. Besides, we exhibit that our _cfg2vec_ can perform architecture-agnostic prediction better than the baseline. Lastly, we illustrate a real-world use case where our _cfg2vec_ is integrated as a _Ghidra_ plugin application to assist in resolving challenging reverse engineering tasks. We conducted all experiments on a server equipped with an Intel Core i7-7820X CPU @ 3.60GHz, 16GB RAM, and two NVIDIA GeForce GTX Titan Xp GPUs.

### _Dataset Preparation_
Our evaluation data source is the ALLSTAR (_Assembled Labeled Library for Static Analysis Research_) dataset, hosted by the _Applied Physics Laboratory_ (APL) [47]. It has over 30,000 Debian Jessie packages pre-built for the i386, amd64, ARM, MIPS, PPC, and s390x CPU architectures for software reverse engineering research. The authors used a modified _Dockcross_ script in Docker to build each package for each supported architecture. Then, they saved each resulting ELF with its symbols, the corresponding source code, header files, intermediate files (.o, .class, .gkd, .gimple), system headers, and system libraries altogether. To form our datasets, we selected the packages that have ELF binaries built for the amd64, armel, i386, and mipsel CPU architectures. i386 and amd64 are widely used in general-purpose computers, especially in Intel and AMD products, respectively. MIPS and ARM are crucial in embedded systems, smartphones, and other portable electronic devices [48]. In practice, we excluded the packages with only one CPU architecture in the ALLSTAR dataset. Additionally, due to our limited local computing resources, we eliminated packages that were too large to handle. We checked whether the ground-truth symbol information exists for each selected binary using the _Ghidra_ decompiler and the Linux file command and removed the ones that lack it. Lastly, we assembled our primary dataset, called the _AS-4cpu-30k-bin_ dataset, which consists of 27,572 pre-built binaries from 1,117 packages and 4 CPU architectures, as illustrated in Table I.

Our preliminary experiment revealed that the evaluation had a data leakage issue when splitting the dataset randomly. Therefore, we performed a non-random variant of the train-test split with a 4-to-1 ratio on the _AS-4cpu-30k-bin_ dataset, selecting roughly 80% of the binaries for the training dataset and leaving the rest for the testing dataset. We referenced [23] for the splitting method, aiming to ensure that binaries belonging to the same packages stay in the same set, either training or testing. Such a variant splitting method allows us to evaluate _cfg2vec_ fairly.

Fig. 4: The architecture of _cfg2vec_ with a supervised hierarchical graph neural network approach.

Next, we converted the binaries in the _AS-4cpu-30k-bin_ dataset into their _Graph-of-Graph_ (GoG) representations, leveraging the GDT mentioned previously in Section III-B. Notably, we processed a batch of binaries related to one package at a time, as developers might define user functions in different modules of the same package while putting prototype declarations in that package's main module. In this case, _Ghidra_ recognizes two function instances, where one contains only the function declaration and the other its actual content. As these two instances correspond to the same function name and one contains only dummy instructions, they can create noise in our datasets, affecting our model's learning. To cope with this, our GDT also searches the other binaries of the same package for the function bodies. If found, our GDT associates that user function with the function graph node containing the actual content. Besides user functions, library function calls may exist, and searching for their function bodies in the same package would fail for dynamically loaded binaries. Under such circumstances, _Ghidra_ recognizes these functions as _ThunkFunctions_3, which contain only one dummy instruction. As a workaround, we removed these _ThunkFunctions_ from our data as they might mislead the model's learning. Applying this workaround means that our model predicts function names for user functions and statically linked functions.

Footnote 3: ThunkFunction Manual: [https://ghidra.re/ghidra_docs/api/ghidra/program/model/listing/ThunkFunction.html](https://ghidra.re/ghidra_docs/api/ghidra/program/model/listing/ThunkFunction.html)

We experimented with [18] on our datasets, referencing their implementation4. As [18] used a dataset with 3,000 binaries for its experiments, we followed accordingly, preparing datasets of smaller but similar sizes. We achieved this by downsampling from our primary _AS-4cpu-30k-bin_ dataset, creating the _AS-3cpu-9k-bin_ dataset, which has 9,000 binaries for the i386, amd64, and armel CPU architectures.
Furthermore, as [18] supports only one CPU architecture at a time, we separated the _AS-3cpu-9k-bin_ dataset by CPU architecture, generating three training datasets for testing [18]: _AS-i386-3k-bin_, _AS-amd64-3k-bin_, and _AS-armel-3k-bin_. For training, we utilized the strip Linux command, converting our original data into three forms: the original binaries (_debug_), stripped binaries with debug information (_stripped_), and stripped binaries without debug information (_stripped_wo_symtab_), following [18]'s required data format. For evaluation, we sampled 100 binaries from our primary dataset for each CPU architecture, labeled _AS-amd-100-bin_, _AS-i386-100-bin_, _AS-armel-100-bin_, and _AS-mipsel-100-bin_. We also have another evaluation dataset called _AS-noMipsel-300-bin_, which contains roughly 300 binaries produced for the amd64, i386, and armel platforms. Table I summarizes the statistics for all these datasets, including the numbers of packages and binaries and the average numbers of function nodes, edges, and BB nodes. The following sections detail how we utilized these datasets in our experiments.

Footnote 4: Debin's [18] repository: [https://github.com/eth-sri/debin](https://github.com/eth-sri/debin)

### _Evaluation: Function Name Prediction_
Table II demonstrates the results of _cfg2vec_ in predicting function names. For the baseline, we followed [18]'s best setting, where the feature dimensions of the register and stack offset are both 100, to train on our prepared datasets. For _cfg2vec_, we used three GCN layers and one GAT convolution layer in both graph embedding layers. For evaluation, we calculated the p@k (precision at k) metric, which refers to the average hit ratio over the top-k list of predicted function names. Specifically, we feed each binary, represented as a GoG, into our trained model, converting each function \(f\in F\) into its function embedding \(h_{f}\). Then, we calculate the pairwise cosine similarities between \(h_{f}\) and all the other function embeddings, forming a top-k list by selecting the k names whose embeddings are the most similar to \(h_{f}\). If the ground-truth function name is among the top-k list of function name predictions, we regard that as a hit; otherwise, it is a miss. During the experiments, we set the top-k value to 5, so our model can recommend the best five possible names for each function in a binary.
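The p@k computation reduces to a nearest-neighbor lookup in the embedding space. A minimal sketch, assuming `emb` holds the [N, d] function embeddings and `names` their ground-truth names:

```python
# A minimal sketch of the p@k evaluation: a prediction hits if the ground-truth
# name appears among the names of the k nearest functions by cosine similarity.
import torch
import torch.nn.functional as F

def precision_at_k(emb: torch.Tensor, names: list, k: int = 5) -> float:
    emb = F.normalize(emb, dim=-1)            # [N, d] function embeddings
    sim = emb @ emb.t()                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude the query itself
    topk = sim.topk(k, dim=-1).indices        # [N, k] nearest neighbors
    hits = sum(
        names[i] in {names[j] for j in topk[i].tolist()}
        for i in range(len(names))
    )
    return hits / len(names)
```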
As shown in Table II, _cfg2vec_, trained with the _AS-3cpu-9k-bin_ dataset, achieves a 69.75% prediction accuracy (i.e., p@1) in inferring function names. For [18], we had to train its models for each CPU architecture separately, as it cannot be trained in a cross-architectural manner. Even so, for amd64 binaries, [18] achieves only 29.32% precision, while for i386 and armel it achieves 52.64% and 53.65%, respectively. This result indicates that our _cfg2vec_ outperforms [18] in every case. Besides, while [18] yields only one prediction, our _cfg2vec_ suggests five choices, making it flexible for our users (e.g., REs) to select the name they believe best fits the function among the top-k predictions. The p@2 to p@5 results in Table II demonstrate that our _cfg2vec_ can provide useful hints about function names. For example, the p@5 of _cfg2vec_ trained with our _AS-3cpu-9k-bin_ dataset achieves 70.50% precision across binaries of all CPU architectures. We also experimented with our _cfg2vec_ on larger datasets. From Table II, we observe that _cfg2vec_ gains 5.04% in correctly predicting function names (i.e., p@1). Moreover, the gain increases to 28% when training _cfg2vec_ with the _AS-4cpu-30k-bin_ dataset. We believe training on a larger dataset implies training on a more diversified set of binaries. This allows our model to acquire more knowledge and thus extract more robust features for binary functions. In summary, this result indicates that, compared to the baseline, our model can effectively provide contextually relevant names for functions in decompiled code.

We also experimented with various ablated network setups to study how each component of _cfg2vec_ contributes to its performance. First, we simplified _cfg2vec_ by stripping one GCN layer from the original experimental setup. As shown in Table III, this setup, which we call _2GCN-GAT_, slightly decreased performance by 0.75%. Then, from the _2GCN-GAT_ setup, we further removed the GAT layer, calling the result _2GCN_. We again observed a marginal performance decrease (<1%). Next, we eliminated another GCN layer from _2GCN-GAT_, constructing the _GCN-GAT_ setup. For _GCN-GAT_, we saw a drastic drop (4.2%), which highlights that the number of GCN layers can be an essential factor in performance. Specifically, we found that going from 1 to 2 GCN layers improves prediction accuracy by more than 4%. However, we did not observe a significant performance gain when increasing the number of GCN layers to more than three. Therefore, we retained the original _cfg2vec_ model with its three GCN layers. All in all, as shown in Table III, all these ablated models still outperform [18], which we attribute to the GoG representation we built for each binary in the dataset.

### _Evaluation: Architecture-agnostic Prediction_
Table IV demonstrates our _cfg2vec_'s capability in terms of cross-architecture support. As [18] supports training on one CPU architecture at a time, we had to train it multiple times during the experiments. Specifically, we trained [18] on three datasets: _AS-amd64-3k-bin_, _AS-i386-3k-bin_, and _AS-armel-3k-bin_, calling the resulting trained models [18]-amd64, [18]-i386, and [18]-armel, respectively. For these baseline models, we observe that they perform well when tested on binaries built for the same CPU architecture but poorly on binaries built for different CPU architectures. For instance, [18]-amd64 achieves 29.3% accuracy on amd64 binaries but performs worse on i386 and armel binaries (13.8% and 7.1%). Similarly, [18]-i386 achieves 52.6% accuracy on i386 binaries but performs worse on amd64 and armel binaries (6.2% and 1.1%). Lastly, [18]-armel achieves 53.6% accuracy on armel binaries but performs worse on amd64 and i386 binaries (11.8% and 8.9%). We used the top-1 prediction generated by _cfg2vec_ (i.e., p@1) as the comparison metric, as [18] produces only one prediction per function. From the results, we observe that _cfg2vec_ outperforms [18] across all three tested CPU architectures. The fact that _cfg2vec_ performs consistently well across all CPU architectures indicates that it supports cross-architecture prediction. To evaluate its capability of generalizing the learned knowledge, we tested all models on the _AS-mipsel-100-bin_ dataset, which has binaries built for another widely used CPU architecture, mipsel, that our _cfg2vec_ was not trained on.
[18] performs worse when tested on binaries built for CPU architectures it was not trained on; for example, its highest cross-architecture accuracy is 13.84%, obtained when trained on amd64 binaries and evaluated on i386 binaries. In our work, as Table IV shows, our _cfg2vec_ achieves 36.69% accuracy when trained on amd64, i386, and armel binaries but tested on mipsel binaries. [18] does not even support analyzing mipsel binaries. In short, these results demonstrate that our _cfg2vec_ outperforms the baseline in the function name prediction task on cross-architectural binaries and generalizes better to binaries built for unseen CPU architectures. To further investigate _cfg2vec_'s cross-architecture performance, we trained it on three datasets, each consisting of binaries built for two different architectures. We then gave the resulting trained models names indicating the architectures of the binaries they were trained on: _cfg2vec_-armel-i386, _cfg2vec_-amd64-i386, and _cfg2vec_-armel-amd64. The results show that our model performs well on the function name prediction task across all of these scenarios, including when tested on binaries compiled for unseen CPU architectures.

### _The Practical Usage of CFG2VEC_
In this section, we demonstrate how _cfg2vec_ assists REs in dealing with _Defense Advanced Research Projects Agency_ (DARPA) _Assured MicroPatching_ (AMP) challenge binaries. The AMP program aims at enabling fast patching of legacy mission-critical system binaries, enhancing decompilation and guiding it toward the particular goal of a _Reverse Engineer_ (RE) by integrating existing source code samples, the original build process information, and historical software artifacts.

#### IV-D1 The MINDSIGHT project
Our multi-industry-academia initiative between Siemens, JHU/APL, BAE, and UCI jointly developed a project, _Making Intelligible Decompiled Source by Imposing Homomorphic Transforms_ (MINDSIGHT). Our team focused on building an automated toolchain integrated with _Ghidra_, aiming to enhance the decompilation process with (1) a less granular identification of modular units, (2) an accurate reconstruction of symbol names, (3) the lifting of binaries to stylized C code, (4) a principled and scalable approach to reasoning about code similarity, and (5) the benchmarking of new decompilation techniques using state-of-the-art embedded software binary datasets. To date, our team has developed an open-source tool, _CodeCut_5, to improve the accuracy and completeness of _Ghidra_'s module identification, providing an automated script-based decompilation analysis toolchain to ease the RE's expert interpretation. Besides, we developed a _Homomorphic Transform Language_ (HTL) to describe transformations on _Abstract Syntax Tree_ (AST) languages and the rules of their composition. This tool, integrated with _Ghidra_, allows developers to transform the decompiled code syntactically while keeping it semantically equivalent. The key idea is to use the HTL to morph a _Ghidra_ AST into a GCC AST, lifting the decompiled binary to a high-level C representation. This process can make it easier for REs to comprehend the binary code. _cfg2vec_ is another tool developed in the MINDSIGHT project, enabling the reconstruction of function names and saving REs manual guesswork.
Footnote 5: _CodeCut_'s repository: [https://github.com/DARPAMINDSIGHT/CodeCut](https://github.com/DARPAMINDSIGHT/CodeCut)

#### IV-D2 The cfg2vec plugin
In the _MINDSIGHT_ project, we incorporated _cfg2vec_ into the _Ghidra_ decompiler as a plugin application. Our _cfg2vec_ plugin assists REs in comprehending binaries by providing a list of potential function names for each function missing its name. Technically, like all _Ghidra_ plugins, our _cfg2vec_ plugin is based on Java, with its core inference modules implemented as a REST API in Python 3.8. Once the metadata of a stripped binary is extracted by the _Ghidra_ decompiler, it is sent to the _cfg2vec_ endpoint, which calculates and returns the inferred mappings for all the functions. Figure 5 demonstrates the user interface of our _cfg2vec_ plugin. In this scenario, the user must provide the vulnerable binary and a reference binary with extra debug information, such as function names. The "Match Functions" button triggers the _cfg2vec_ functionality and displays the function mapping results in three tables:

* _Matched Table_: displays the mapping of similar functions.
* _Mismatched Table_: displays the mapping of _dissimilar_ functions, which are therefore candidates for patching.
* _Orphan Table_: displays the mapping of functions with a low confidence score.

These groupings reduce REs' workload. Rather than inspecting all functions, they can focus on the patching candidates (mismatched functions) and the orphans. The "Explore Functions" button invokes Ghidra's function explorer, where two functions can be compared side by side, as shown in Figure 5. This utility allows the user to switch between C and assembly language, thus assisting in confirming or modifying the mappings from the three tables.

Fig. 5: The plugin screenshot integrated into Ghidra.

Regarding _cfg2vec_'s function prediction, the "Rename Function" button takes the selected row from the tables and imposes the name from the patched binary onto the vulnerable binary. When the "Match Functions" button fires, we invoke the FCG and CFG generators for the two programs (vulnerable and patched).

#### IV-D3 The use case for AMP challenge binaries
One DARPA AMP challenge asks REs to patch a vulnerability caused by a weak encryption algorithm, where the encryption of communication traffic was accomplished with a deprecated cipher suite, Triple DES or 3DES [49]. For this challenge, REs have to analyze the vulnerable binary, identify the functions and instructions to be patched (the _3DES cipher suite_ in this case), and patch the 3DES-related function calls and instructions with the ones for AES [50]. All these steps happen at the decompiled binary level, and the vulnerable binaries are optimized by a compiler and stripped of debugging information and function names. Furthermore, these binaries are sometimes statically linked against libraries such as the GNU C Library [51] or OpenSSL, which introduce many extra functions into the binary (some of which will never be called or used). Given these complications, it becomes a non-trivial task for an RE to make sense of all these functions, find the problem, and successfully patch it. The direct usage of our _cfg2vec_ plugin is to pick a function of interest with stripped information and see predictions of potential function names or matching functions from the available reference binary, confirming whether this function is on the critical path during the RE's problem solving.
As Figure 5 shows, our plugin allows users to see possible matches between functions from a stripped vulnerable binary and functions from a patched (reference) binary with extra information. REs may then leverage such information and make appropriate notes for each function, allowing them to complete their jobs more efficiently. The main feedback we received from REs who used the tool was that this is functionality they would like to have; however, the accuracy and usability of the tool were not yet high enough to truly utilize its potential.

## V Conclusion
This paper presents _cfg2vec_, a hierarchical Graph Neural Network-based approach for software reverse engineering. Building on top of _Ghidra_, our _cfg2vec_ plugin can extract a _Graph-of-Graph_ (GoG) representation for a binary, combining the information from Control-Flow Graphs (CFGs) and Function-Call Graphs (FCGs). _cfg2vec_ utilizes a hierarchical graph embedding framework to learn the representation of each function in binaries compiled for various architectures. Lastly, our _cfg2vec_ utilizes the learned function embeddings for function name prediction, outperforming the state-of-the-art [18] by an average of 24.54% across all tested binaries. With more training data, our model achieved a 51.84% improvement. While [18] requires training once per CPU architecture, our _cfg2vec_ outperforms it consistently across all architectures with only one training. Besides, our model generalizes the learning better than [18] to binaries built for untrained CPU architectures. Lastly, we demonstrate that our _cfg2vec_ can assist real-world REs in resolving DARPA _Assured MicroPatching_ (AMP) challenges.

## Acknowledgment
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) and Naval Information Warfare Center Pacific (NIWC Pacific) under Contract Number N66001-20-C-4024. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2310.14166
Ensemble Learning for Graph Neural Networks
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data. This paper investigates the application of ensemble learning techniques to improve the performance and robustness of Graph Neural Networks (GNNs). By training multiple GNN models with diverse initializations or architectures, we create an ensemble model named ELGNN that captures various aspects of the data and uses the Tree-Structured Parzen Estimator algorithm to determine the ensemble weights. Combining the predictions of these models enhances overall accuracy, reduces bias and variance, and mitigates the impact of noisy data. Our findings demonstrate the efficacy of ensemble learning in enhancing GNN capabilities for analyzing complex graph-structured data. The code is public at https://github.com/wongzhenhao/ELGNN.
Zhen Hao Wong, Ling Yue, Quanming Yao
2023-10-22T03:55:13Z
http://arxiv.org/abs/2310.14166v1
# Ensemble Learning for Graph Neural Networks

###### Abstract
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data. This paper investigates the application of ensemble learning techniques to improve the performance and robustness of Graph Neural Networks (GNNs). By training multiple GNN models with diverse initializations or architectures, we create an ensemble model named ELGNN that captures various aspects of the data and uses the Tree-Structured Parzen Estimator algorithm to determine the ensemble weights. Combining the predictions of these models enhances overall accuracy, reduces bias and variance, and mitigates the impact of noisy data. Our findings demonstrate the efficacy of ensemble learning in enhancing GNN capabilities for analyzing complex graph-structured data. The code is public at [https://github.com/wongzhenhao/ELGNN](https://github.com/wongzhenhao/ELGNN).

## 1 Introduction
Graphs are ubiquitous in a wide range of real-world relationships, ranging from social and rating networks Newman et al. (2002) to biology networks Gilmer et al. (2017). Graph Neural Networks (GNNs) have been proposed for learning representations over graph-structured data. GNNs capture local graph structure and feature information via iterative aggregation of features from neighbors using non-linear transformations. In this manner, GNNs have shown promising performance in various applications on graph data. While GNNs can achieve fairly accurate results in numerous graph-based learning tasks, their enhanced ability to represent information comes at the cost of increased model complexity. This results in overfitting, which diminishes the model's capacity for generalization. The typical approach to addressing overfitting is Dropout Srivastava et al. (2014), a widely used regularization technique, but it may not be sufficient to prevent overfitting on its own.

Given the _ogbl-ddi_ dataset, which is part of the Open Graph Benchmark Hu et al. (2020) and represents a homogeneous, unweighted, undirected graph that models the network of drug-drug interactions, the primary goal is to prioritize true drug-drug interactions over false ones. On the OGB leaderboard, existing methods predominantly revolve around Graph Neural Networks (GNNs) that capture the characteristics of neighboring nodes. Some of these methods also incorporate edge features, like distance encoding, into their existing GNN frameworks. Additionally, a novel approach, PLNLP Wang et al. (2021), builds upon GraphSAGE Hamilton et al. (2017) and introduces a pairwise loss function, a common technique in siamese networks for ranking problems, which leads to improved performance. Furthermore, models based on attention mechanisms, such as AGDN Sun et al. (2020), have also demonstrated outstanding performance.

Ensemble learning is a widely recognized approach that enhances the effectiveness of machine learning tasks by combining and adjusting the predictions of multiple models Yue et al. (2023); Dietterich (2000). A primary reason for the effectiveness of the ensemble method is that the training dataset may lack sufficient information to identify the best single learner. Additionally, the hypotheses being explored and the scoring functions used to represent various characteristics may not encompass the actual target function.
To address the limitation that no single existing GNN accurately represents all characteristics of the data, we propose an innovative method called Ensemble Learning for Graph Neural Networks (ELGNN). In our work, we demonstrate how different models approach and prioritize structural information and node information. We explore various models to find an optimal and balanced weight ratio and observe its effects. Additionally, we employ the Tree-Structured Parzen Estimator (TPE) for the weight search. With the integration of TPE, our model consistently outperforms the latest GNN models and heuristic methods and has secured the top position on the _ogbl-ddi_ leaderboard for link prediction. This highlights the effectiveness of our approach in addressing the limitations of existing GNNs and optimizing model performance.

## 2 Proposed Methods

### Preliminaries
Our framework, ELGNN, aims to optimize the weight allocation among a set of individual machine learning models, resulting in enhanced performance for link prediction. We begin by introducing fundamental concepts of graph neural networks for link prediction and reviewing the leading model techniques on the _ogbl-ddi_ dataset. Following that, we present our framework, ELGNN.

#### 2.1.1 Notations
Denote an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(N\) nodes and \(E\) edges, where \(\mathcal{V}=\{v_{1},v_{2},v_{3},...,v_{N}\}\) and \(\mathcal{E}=\{e_{ij}|v_{i},v_{j}\in\mathcal{V}\}\). Within the graph, there are two types of information: structural information, which outlines how nodes are connected, and node feature information, which characterizes the attributes of the nodes. The connections between edges can be denoted using an adjacency matrix \(A\in\{0,1\}^{N\times N}\). Let the initial node feature matrix be \(\mathcal{X}\in\mathbb{R}^{N\times d^{(0)}}\), \(\mathcal{X}=\{x_{1},x_{2},x_{3},...,x_{N}\}\), where every \(x_{i}\in\mathcal{X}\) represents the \(d\)-dimensional node feature of \(v_{i}\).

#### 2.1.2 Graph Neural Networks for Link Prediction
Formally, the \(l\)-th layer of a GNN is defined as follows:

\[H^{(l)}=\sigma(A_{GNN}H^{(l-1)}W^{(l-1)}), \tag{1}\]

where \(A_{GNN}\in\mathbb{R}^{N\times N}\) is the normalized adjacency matrix, which depends on the GNN architecture. Widely used GNN architectures include GCN (Kipf and Welling, 2016), GAT (Velickovic et al., 2017), SEAL (Zhang et al., 2020), and GraphSAGE (Hamilton et al., 2017). \(W^{(l)}\in\mathbb{R}^{d^{(l)}\times d^{(l+1)}}\) is a trainable weight matrix, and \(\sigma(\cdot)\) is an activation function, such as Softmax, ReLU, or Sigmoid. After completing \(L\) layers of message passing, the node representations \(H^{(L)}\) are employed to forecast the presence of a link between node \(i\) and node \(j\):

\[\hat{y}_{i,j}=\sigma(s(h_{i}^{(L)},h_{j}^{(L)})), \tag{2}\]

where \(s(\cdot,\cdot)\) represents a link scoring function, usually an MLP, and \(h_{i}^{(L)}\) is the representation of node \(i\) from \(H^{(L)}\).
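As a concrete illustration of eqs. (1)-(2), the following minimal PyTorch sketch stacks GCN-style layers and scores node pairs with an MLP over the element-wise product of their embeddings; the element-wise product is one common choice of \(s(\cdot,\cdot)\) (as in the OGB examples), not necessarily the one used by each leaderboard model.

```python
# A minimal sketch of a GNN link predictor (eqs. 1-2); A_hat is assumed to be
# the normalized adjacency matrix, X the initial node feature matrix.
import torch
import torch.nn as nn

class LinkPredictor(nn.Module):
    def __init__(self, d, layers=2):
        super().__init__()
        self.convs = nn.ModuleList(nn.Linear(d, d, bias=False) for _ in range(layers))
        self.scorer = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))
    def forward(self, X, A_hat, edges):
        H = X
        for W in self.convs:                  # eq. (1): H <- sigma(A_hat H W)
            H = torch.relu(A_hat @ W(H))
        src, dst = edges                      # eq. (2): score a batch of node pairs
        return torch.sigmoid(self.scorer(H[src] * H[dst])).squeeze(-1)
```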
#### 2.1.3 Embedding
The rapid adoption of embedding techniques following the introduction of word2vec has highlighted the critical importance of efficient embedding operations in influencing model performance. Embedding is the process of transforming complex graph data into low-dimensional vectors. It overcomes the limitations of machine learning directly on the original graph and provides a more flexible and efficient computational approach. Furthermore, embedding effectively compresses large-scale graph data, avoiding the need to handle massive adjacency matrices. However, embedding must satisfy requirements such as attribute selection, scalability, and dimensionality selection to ensure an accurate description of graph topology and attributes and to be suitable for large-scale networks. In this way, embedding enhances the performance and feasibility of applications on graph data. In general, embeddings are categorized into node embeddings and graph embeddings. Classic methods for node embedding include DeepWalk (Perozzi et al., 2014), Node2vec (Grover and Leskovec, 2016), and SDNE (Wang et al., 2016). Graph embedding condenses an entire graph into a single vector, as seen in methods like Graph2vec (Narayanan et al., 2017), which employs the skip-gram concept to convert the entire graph into a vector space.

### Representative Methods on OGB
While it may appear that employing a greater number of learner members would result in enhanced performance, Zhou et al. established the "many could be better than all" theorem, challenging this notion (Zhou et al., 2002). Assembling an ensemble from a few effective models rather than utilizing all of them is generally the more favorable choice. In this subsection, we provide a concise overview of some top strategies on the _ogbl-ddi_ dataset.

#### 2.2.1 Adaptive Graph Diffusion Network
Sun et al. (Sun et al., 2020) proposed the Adaptive Graph Diffusion Network (AGDN), which executes multi-layer generalized graph diffusion across various feature spaces with reasonable computational complexity and runtime. Typical graph diffusion techniques utilize large powers of the transition matrix along with predefined weighting coefficients. In contrast, AGDNs merge smaller multi-hop node representations with adaptable and generalized weighting coefficients. The authors propose two scalable mechanisms for the weighting coefficients, Hop-wise Attention (HA) and Hop-wise Convolution (HC), to capture multi-hop information in a graph. They argue that AGDN can outperform methods based on structural features because the high density of the _ogbl-ddi_ dataset renders structural patterns almost irrelevant.

#### 2.2.2 Graph Inception Diffusion Network
The Graph Inception Diffusion Network (GIDN) (Wang et al., 2022) extends graph diffusion across diverse feature spaces and employs the inception module to mitigate the substantial computational load arising from intricate network architectures. "Inception" refers to a neural network architecture module popularized by Google's Inception models (Szegedy et al., 2015). It is designed to capture a wide range of features at various levels of abstraction within the network, which can be particularly useful for image recognition tasks. The module uses a combination of different filter sizes and pooling operations to process input data more efficiently, making it suitable for deep neural networks. Notably, GIDN achieved the top-1 performance before the introduction of our model.

#### 2.2.3 Path-aware Siamese GNN
Different from AGDN, the Path-aware Siamese Graph Neural Network (PSG) captures information related to both node and edge features for a pair of nodes.
Specifically, it focuses on preserving the structural information of k-neighborhoods and gathering relay-path information for the nodes (Lv et al., 2022). Moreover, it uses an innovative multi-task GNN framework that incorporates self-supervised contrastive learning to distinguish between positive and negative links while concurrently capturing both the content and behavior of nodes.

### ELGNN
To amalgamate the member learners into a more effective and robust learner, we propose ELGNN. We selected several top OGB models not only for their outstanding performance but also because they are designed based on different approaches. We believe that relying on either structural information or node information alone results in information loss. Therefore, we construct an ensemble of graph neural networks that utilizes both input features and graph structures to find the optimal balance ratio for _ogbl-ddi_. Our goal is to have the model rank true drug interactions higher than non-interacting drug pairs. More specifically, we rank each true drug interaction within a set of sampled negative drug interactions. In this regard, we focus solely on finding the optimal weights based on the positive samples. Furthermore, to preserve the distinctive characteristics of each model, we perform ensemble learning at the prediction layer of each model, thus learning enhanced prediction parameters, as shown in Figure 1. Our method adopts weighted voting, where the ultimate decision is determined by a weighted combination of individual model decisions. We assign weights to each model based on their prediction scores. For a given positive node pair (\(i\), \(j\)), the objective function is as follows:

\[\hat{y}_{i,j}=\max_{\{\alpha^{k}\}}\Big(\sum_{k}\alpha^{k}\hat{y}_{ij}^{k}\Big),\qquad s.t.\quad\sum_{k}\alpha^{k}=1, \tag{3}\]

where \(k\) indexes the models we selected for ensemble learning, namely AGDN, GIDN, and PSG. By harnessing the scores indicating link presence from these models, the objective function acquires the optimal score for link presence associated with a given input, considering both node information and structural information. For hyperparameter optimization, our method employs the Tree-Structured Parzen Estimator (TPE) for an efficient, adaptive, and automated exploration of the hyperparameter space. The Tree-Structured Parzen Estimator (TPE) is a Bayesian optimization technique commonly used for hyperparameter tuning and optimizing machine learning models (Bergstra et al., 2011). TPE builds a probabilistic model of the objective function and uses it to efficiently explore the hyperparameter space. It balances exploration and exploitation, making it particularly useful for finding the best hyperparameters for a given model. Therefore, ELGNN is an effective method for improving model performance.
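A minimal sketch of this weight search follows, assuming Optuna's TPE sampler as the TPE implementation (the released code may differ); the simplex constraint \(\sum_{k}\alpha^{k}=1\) is enforced by normalizing the sampled weights, and `evaluate` is a hypothetical callback returning validation Hits@20 for the blended scores.

```python
# A minimal sketch of the ensemble-weight search in eq. (3) with TPE.
import optuna
import numpy as np

def make_objective(scores, evaluate):         # scores: list of [E] score arrays
    def objective(trial):
        raw = np.array([trial.suggest_float(f"w{k}", 0.0, 1.0)
                        for k in range(len(scores))])
        alpha = raw / (raw.sum() + 1e-12)     # enforce sum(alpha) = 1
        blended = sum(a * s for a, s in zip(alpha, scores))
        return evaluate(blended)              # e.g., Hits@20 on validation edges
    return objective

# study = optuna.create_study(direction="maximize",
#                             sampler=optuna.samplers.TPESampler())
# study.optimize(make_objective(scores, evaluate), n_trials=200)
```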
The edges symbolize interactions between these drugs, indicating situations where the combined effect of taking two drugs together significantly deviates from the expected effect when the drugs act independently of each other. The prediction task involves forecasting drug-drug interactions based on available data regarding previously known drug-drug interactions. Statistics of the dataset are provided in Table 1. #### 3.1.2 Evaluation Metric OGB provides standardized dataset splits and evaluators that allow for easy and reliable comparison of different models in a unified manner. For the _ogbl-ddi_ dataset, the evaluation metric is a ranking metric. More specifically, we rank each true drug interaction within a set of around 100,000 randomly sampled negative drug interactions and calculate the ratio of positive interactions that are ranked at position K or higher. This metric is referred to as Hits@K, and for the _ogbl-ddi_ dataset, K is defined as 20. #### 3.1.3 Settings The experiments are conducted on an NVIDIA A100 GPU (80 GB RAM). In order to demonstrate the effectiveness of our approach, we directly used the code for these models that has been officially made public on OGB, together with the corresponding hyperparameters; these settings are summarized in Table 2. \begin{table} \begin{tabular}{c|c c c} \hline \hline Data Split & Nodes & Positive Edges & Negative Edges \\ \hline Train & 4,267 & 1,067,911 & – \\ Validation & – & 133,489 & 101,882 \\ Testing & – & 133,489 & 95,599 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics Figure 1: An overview of the ELGNN architecture. For the transductive setting, the input training graph and input test graph form the full graph. K represents the number of ensembled models; here, K=3. ### Accuracy Improvement In this section, we demonstrate how our method incorporates various feature extraction and aggregation methods to enhance link prediction results. As shown in Table 3, ELGNN beats GIDN, the state-of-the-art algorithm on the leaderboard at the time of submission, and achieves a 2.3% performance improvement in terms of test Hits@20. This indicates that our ensemble learning method can significantly enhance the graph representation learning of GNNs, and hence the prediction effectiveness of link prediction-based GNN models on the _ogbl-ddi_ dataset. ### Ablation Study We additionally perform an ablation study to illustrate the efficacy of ELGNN when utilizing TPE for link prediction tasks. We present the outcomes for both the baseline approach (with weight averaging) and ELGNN (optimized with TPE) under identical settings on the _ogbl-ddi_ dataset. As indicated in Table 4, TPE notably attains a 2.2% increase in performance compared to the baseline, thereby enhancing the link prediction performance to state-of-the-art levels. This underscores that our method is effective for link prediction tasks. ## 4 Conclusion In this paper, we introduce ELGNN, an ensemble learning technique designed to create a combination of random decision GNNs. ELGNN focuses on consolidating and refining predictions from multiple models, with its primary goal being to optimize the prediction score for link presence in a given input, accommodating a variety of feature extraction and aggregation approaches. What makes ELGNN unique is its efficient parallelization, allowing each base model to be independently trained and make predictions. Furthermore, this method enables the use of a diverse set of GNN models as the foundation for the ensemble.
Extensive experimental results on real-world benchmark datasets consistently show that ELGNN surpasses all baseline methods, setting a new standard for link prediction performance on the _ogbl-ddi_ dataset. As part of future research, we intend to further enhance ELGNN by incorporating additional heuristic methods for link prediction and extending its applicability to various datasets. Limitations. The importance of each base model cannot be directly explained by its ensemble weight, due to the differences in prediction score scales. However, this does not affect the final result when TPE is used to search the ensemble weights. ### Author Contributions Statement Z. Wong contributes to idea development, algorithm implementation, experimental design, result analysis, and paper writing. L. Yue contributes to idea development, experimental design, result analysis, and paper writing. Q. Yao contributes to idea development. All authors read, edited, and approved the paper. \begin{table} \begin{tabular}{c|c c} \hline \hline Method & Valid. & Test \\ \hline Averaging & 0.8549 & 0.9574 \\ TPE & 0.9095 & 0.9796 \\ \hline \hline \end{tabular} \end{table} Table 4: Averaging vs. TPE \begin{table} \begin{tabular}{c|c c c} \hline \hline Methods & AGDN & GIDN & PSG \\ \hline Epochs & 1000 & 1000 & 500 \\ Parameters & 3,506,691 & 3,506,691 & 3,499,009 \\ Num. of GNN layers & 2 & 2 & 2 \\ Num. of MLP layers & 2 & 2 & 2 \\ Num. of neg. sampling & 3 & 3 & 3 \\ Learning rate & 0.001 & 0.003 & 0.001 \\ Dim. of node emb. & 512 & 512 & 512 \\ Encoder & GAT & GAT & GraphSAGE \\ Loss function & AUC loss & AUC loss & AUC loss \\ \hline \hline \end{tabular} \end{table} Table 2: Settings of models \begin{table} \begin{tabular}{c|c c} \hline \hline Methods & Valid. Hits@20 & Test Hits@20 \\ \hline GCN & \(0.5550\pm 0.0208\) & \(0.3707\pm 0.0507\) \\ GraphSAGE & \(0.6262\pm 0.0037\) & \(0.5390\pm 0.0474\) \\ GCN+JKNet & \(0.6776\pm 0.0095\) & \(0.6056\pm 0.0869\) \\ GraphSAGE + Edge Attr & \(0.8044\pm 0.0404\) & \(0.8781\pm 0.0474\) \\ PLNLP & \(0.8242\pm 0.0253\) & \(0.9088\pm 0.0313\) \\ PSG & \(0.8306\pm 0.0134\) & \(0.9284\pm 0.0047\) \\ AGDN & \(0.8943\pm 0.0281\) & \(0.9538\pm 0.0094\) \\ GIDN & \(0.8258\pm 0.0000\) & \(0.9542\pm 0.0000\) \\ \hline ELGNN (Ours) & \(\mathbf{0.8965\pm 0.0021}\) & \(\mathbf{0.9777\pm 0.0037}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Link prediction performance of our method and other GNN models on the _ogbl-ddi_ dataset. **Bold** indicates the best performance.
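To make the overall pipeline concrete, the sketch below ties together the weighted-voting objective of Equation (3), the Hits@20 metric of Section 3.1.2, and the TPE weight search of Section 2.3. It is a minimal illustration rather than the released ELGNN code: the choice of Optuna's TPE sampler, the simplex re-normalization of the raw weights, and all variable names are assumptions made for this example.

```python
import numpy as np
import optuna  # one concrete TPE implementation; any TPE library would do

def hits_at_k(pos_scores, neg_scores, k=20):
    # OGB-style Hits@K: fraction of positive pairs scored strictly above
    # the k-th highest score among the shared negative samples.
    threshold = np.sort(neg_scores)[-k]
    return float((pos_scores > threshold).mean())

def ensemble(alphas, per_model_scores):
    # Equation (3): weighted sum of the base models' prediction scores.
    return sum(a * s for a, s in zip(alphas, per_model_scores))

def make_objective(pos_scores, neg_scores):
    # pos_scores / neg_scores: one score array per base model
    # (here K = 3: AGDN, GIDN, PSG), computed on the validation split.
    def objective(trial):
        raw = [trial.suggest_float(f"alpha_{m}", 0.0, 1.0)
               for m in ("agdn", "gidn", "psg")]
        alphas = np.asarray(raw) / np.sum(raw)  # enforce sum(alpha) = 1
        return hits_at_k(ensemble(alphas, pos_scores),
                         ensemble(alphas, neg_scores), k=20)
    return objective

# Hypothetical usage, given validation scores from the three models:
# study = optuna.create_study(direction="maximize",
#                             sampler=optuna.samplers.TPESampler(seed=0))
# study.optimize(make_objective(val_pos, val_neg), n_trials=200)
# print(study.best_params)
```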
2306.05955
Path Neural Networks: Expressive and Accurate Graph Neural Networks
Graph neural networks (GNNs) have recently become the standard approach for learning with graph-structured data. Prior work has shed light into their potential, but also their limitations. Unfortunately, it was shown that standard GNNs are limited in their expressive power. These models are no more powerful than the 1-dimensional Weisfeiler-Leman (1-WL) algorithm in terms of distinguishing non-isomorphic graphs. In this paper, we propose Path Neural Networks (PathNNs), a model that updates node representations by aggregating paths emanating from nodes. We derive three different variants of the PathNN model that aggregate single shortest paths, all shortest paths and all simple paths of length up to K. We prove that two of these variants are strictly more powerful than the 1-WL algorithm, and we experimentally validate our theoretical results. We find that PathNNs can distinguish pairs of non-isomorphic graphs that are indistinguishable by 1-WL, while our most expressive PathNN variant can even distinguish between 3-WL indistinguishable graphs. The different PathNN variants are also evaluated on graph classification and graph regression datasets, where in most cases, they outperform the baseline methods.
Gaspard Michel, Giannis Nikolentzos, Johannes Lutzeyer, Michalis Vazirgiannis
2023-06-09T15:11:49Z
http://arxiv.org/abs/2306.05955v1
# Path Neural Networks: Expressive and Accurate Graph Neural Networks ###### Abstract Graph neural networks (GNNs) have recently become the standard approach for learning with graph-structured data. Prior work has shed light into their potential, but also their limitations. Unfortunately, it was shown that standard GNNs are limited in their expressive power. These models are no more powerful than the \(1\)-dimensional Weisfeiler-Leman (\(1\)-WL) algorithm in terms of distinguishing non-isomorphic graphs. In this paper, we propose Path Neural Networks (PathNNs), a model that updates node representations by aggregating paths emanating from nodes. We derive three different variants of the PathNN model that aggregate single shortest paths, all shortest paths and all simple paths of length up to \(K\). We prove that two of these variants are strictly more powerful than the \(1\)-WL algorithm, and we experimentally validate our theoretical results. We find that PathNNs can distinguish pairs of non-isomorphic graphs that are indistinguishable by \(1\)-WL, while our most expressive PathNN variant can even distinguish between \(3\)-WL indistinguishable graphs. The different PathNN variants are also evaluated on graph classification and graph regression datasets, where in most cases, they outperform the baseline methods. Machine Learning, Graph Neural Networks, Graph Neural Networks, Graph Neural Networks ## 1 Introduction Graphs have emerged in recent years as a powerful tool for representing irregular data. Among many other applications, graphs have been used to model the relationships between individuals within a social network (Easley and Kleinberg, 2010) and the interactions between the atoms of a molecule (Stokes et al., 2020). The large availability of graph-structured data has motivated the development of machine learning algorithms that are designed for this kind of data. Notably, most of those algorithms belong either to the family of graph kernels (Kriege et al., 2020; Borgwardt et al., 2020; Nikolentzos et al., 2021) or to that of graph neural networks (GNNs) (Wu et al., 2020; Zhou et al., 2020). Due to some limitations that are inherent to kernels (e. g., they do not scale to large datasets, they struggle with continuous features, etc.), GNNs have become the most common approach for dealing with graph learning problems. GNNs have been studied extensively in the past years. So far, research in the field has mainly focused on message passing architectures (Gilmer et al., 2017). These models follow a recursive neighborhood aggregation scheme where each node aggregates the representations of its neighbors along with its own representation to compute new updated representations. It has been shown that there is a connection between the neighborhood aggregation scheme of these models and the relabeling procedure of the Weisfeiler-Leman (WL) algorithm, a well-known heuristic for testing graph isomorphism (Xu et al., 2019; Morris et al., 2019). More importantly, it was shown that the standard message passing architectures are at most as powerful as the WL algorithm in terms of distinguishing non-isomorphic graphs. This has led to the development of more complex models that focus on subgraphs (Maron et al., 2019; 20; Cotta et al., 2021). Some of these models are inspired by higher-order variants of the WL algorithm (Morris et al., 2019, 2020). Besides subgraphs, there are also other structures of graphs that can improve a model's expressive power. 
For instance, paths can distinguish connected from disconnected graphs, while the WL algorithm fails to distinguish this property. Unfortunately, finding all paths in a graph is NP-hard. However, some subsets of paths can be computed in polynomial time. For instance, computing shortest paths in a graph is a problem solvable in polynomial time (Dijkstra, 1959). It has been shown that models that use shortest path distances between nodes as features can provide more expressive power than the WL algorithm (Li et al., 2020). Furthermore, if we restrict the length of the paths to some small integer \(k\), the parameterized complexity of computing all the bounded length paths is tractable. In this paper, we propose Path Neural Networks (PathNNs), a GNN that generates node representations that are based on paths emanating from the nodes of graphs. By computing different subsets of paths, we derive different variants of the proposed model. We focus on subsets whose computation is possible in polynomial time, namely shortest paths and simple paths of bounded length. For each path length, the proposed model uses a recurrent layer to encode paths into vectors and then, the representations of all paths emanating from a node are aggregated to produce the node's new representation. We show that two of the three PathNN variants are strictly more powerful than the WL algorithm in terms of distinguishing non-isomorphic graphs. Our theoretical results are confirmed by experiments on synthetic datasets that measure the models' expressive power. Furthermore, we evaluate the PathNNs on real-world graph classification and regression datasets1. Our results demonstrate that the different PathNN variants achieve high levels of performance and outperform the baseline methods in most cases. Our main contributions are summarized as follows: Footnote 1: Code available at [https://github.com/gasmichel/PathNNs_expressive](https://github.com/gasmichel/PathNNs_expressive) * We develop PathNN, a neural network that computes node representations by aggregating paths of increasing length. We derive three different variants of the model that focus on single shortest paths, all shortest paths and all simple paths, respectively. * We prove that two of the variants are strictly more powerful than the WL algorithm and we empirically measure their expressive power. * We evaluate the proposed model on several graph classification and regression datasets where it achieves performance comparable to state-of-the-art GNNs. ## 2 Related Work **GNNs and kernels that process paths.** Shortest path distances between nodes have been incorporated as structural features into several GNN architectures. For instance, Graphormer encodes shortest path distances between two nodes as a bias term in the softmax attention (Ying et al., 2021), while other models annotate nodes with features that emerge from shortest paths (e. g., shortest path distances) (Li et al., 2020; You et al., 2019). On the other hand, PEGN uses shortest path distances to create persistence diagrams based on which messages between nodes are re-weighted (Zhao et al., 2020). Instead of aggregating the representations of each node's direct neighbors, some models such as SP-MPNN (Abboud et al., 2022) also consider the nodes at shortest path distance exactly \(k\) from the node. 
A recently proposed GNN framework, so-called Geodesic GNN (Kong et al., 2022), generates representations for pairs of nodes by aggregating the representations of the nodes on a single shortest path between the two nodes and also the direct neighbors of the two nodes that are on any of their shortest paths. Perhaps the work most related to our approach is the PathNet model (Sun et al., 2022), which also aggregates path information. However, there are major differences between PathNet and our PathNNs. PathNet samples paths instead of enumerating all of them, it is only evaluated on node classification datasets, and lastly, the authors do not provide an extensive study of the expressive power of the model. There exist also models that instead of paths simulate walks which are then either aggregated (Jin et al., 2022) or processed by a convolution neural network (Toenshoff et al., 2021). Our proposed PathNNs are also related to graph kernels which compare paths of two graphs to each other. Such kernels include the shortest path kernel (Borgwardt and Kriegel, 2005) and the GraphHopper kernel (Feragen et al., 2013). The former just compares shortest path distances to each other, while the latter is more similar to the proposed model since it also takes into account the attributes of the nodes that appear on a shortest path. **Expressive power of GNNs.** Since we will study the expressive power of our PathNNs, our work is furthermore related to the extensive literature exploring the expressive power of GNNs. Several of those studies have investigated the connections between GNNs and the WL test of isomorphism (Barcelo et al., 2020; Geerts and Reutter, 2022). For instance, standard GNNs were shown to be at most as powerful as the WL algorithm in terms of distinguishing non-isomorphic graphs (Morris et al., 2019; Xu et al., 2019). Other studies capitalized on high-order variants of the WL algorithm to derive new models that are more powerful than standard GNNs (Morris et al., 2019, 2020). Another line of research focused on \(k\)-order graph networks (Maron et al., 2019). Importantly, \(k\)-order graph networks were found to be at least as powerful as the \(k\)-WL graph isomorphism test in terms of distinguishing non-isomorphic graphs, while a reduced \(2\)-order network containing just a scaled identity operator, augmented with a single quadratic operation was shown to have \(3\)-WL discrimination power (Maron et al., 2019). Furthermore, it was shown that a GNN with a maximum tensor order \(2\) defined on the ring of equivariant functions can distinguish some pairs of non-isomorphic regular graphs with the same degree (Chen et al., 2019). Graph isomorphism testing and invariant function approximation, the two main perspectives for studying the expressive power of GNNs, have been shown to be equivalent to each other (Chen et al., 2019). Recently, a large body of work focused on making GNNs more powerful than WL, e. g., by encoding vertex identifiers (Vignac et al., 2020), taking into account all possible node permutations (Murphy et al., 2019; Dasoulas et al., 2020), using random features (Sato et al., 2021; Abboud et al., 2021), using node features (You et al., 2021), spectral information (Balcilar et al., 2021), simplicial and cellular complexes (Bodnar et al., 2021b;a) and directional information (Beaini et al., 2021). Furthermore, several recent studies extract and process subgraphs to make GNNs more expressive (Nikolentzos et al., 2020; Bevilacqua et al., 2022). 
For example, the expressive power of GNNs can be increased by aggregating the representations of subgraphs (produced by standard GNNs) that arise from removing one or more vertices from a given graph (Cotta et al., 2021; Papp et al., 2021). Finally, it was recently shown that models that process each node's \(k\)-hop neighborhood and which aggregate nodes at shortest path distance exactly \(k\) from that node (such as the SP-MPNN model (Abboud et al., 2022)) are more powerful than standard GNNs (Feng et al., 2022), while the expressive power of those models can be further improved by taking into account the edges that connect the nodes at shortest path distance exactly \(k\) from the considered node. ## 3 Path Neural Networks In this section, we begin by introducing some key notation for graphs which will be used later, and then we describe Path Neural Networks (PathNNs), a message-passing GNN that learns representations of paths of increasing length which are aggregated to update node representations. We first introduce the theoretical framework of PathNNs and then detail their architecture. ### Notation Let \(\{\!\!\{\,\cdot\,\}\!\!\}\) denote a multiset, i.e., a generalized concept of a set that allows multiple instances for its elements. Let \(G=(V,E)\) be an undirected graph consisting of a set of nodes \(V\) and a set of edges \(E\subseteq V\times V\). We denote by \(n\) the number of nodes in \(G\) and by \(m\) its number of edges. The set \(\mathcal{N}(v)\) represents the neighbors of vertex \(v\). When considering attributed graphs, each vertex \(v\in V\) is endowed with an initial node feature vector denoted by \(\mathbf{x}_{v}\) that can contain categorical or real-valued properties of \(v\). A path from source node \(v\) to target node \(u\) is denoted by \(\pi=[v_{1},v_{2},\ldots,v_{k}]\) such that \(v_{1}=v\), \(v_{k}=u\) and \((v_{i},v_{i+1})\in E\) for \(i\in\{1,\ldots,k-1\}\). Note that paths only contain distinct vertices. We denote by \(\pi(j)=v_{j}\) the \(j\)-th node encountered when hopping along the path, and by \(|\pi|=k-1\) its length, defined by the number of edges it contains. In this work, we consider various collections of paths. The first collection is the set of shortest paths, denoted as \(\mathcal{SP}\), containing a single shortest path for every node pair combination. Multiple shortest paths between a source node and a target node might exist. We thus also consider the collection of all shortest paths, which we denote as \(\mathcal{SP}^{+}\), containing all possible shortest paths between every node pair combination. A simple path can be any path, not necessarily the shortest, from a source node to a target node. We denote by \(\mathcal{AP}\) the collection of all simple paths between node pair combinations. Figure 1 provides an example of these three collections of paths. Note that we have \(\mathcal{SP}\subseteq\mathcal{SP}^{+}\subseteq\mathcal{AP}\). In practice, we only consider paths up to a fixed length \(k\). We use the notation \(\mathcal{P}^{k}_{v}=\{\pi\in\mathcal{P}:\;\pi(1)=v,\;|\pi|=k\}\) for \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\) to denote all paths of length \(k\) contained in \(\mathcal{P}\) starting from node \(v\). Similarly, \(\mathcal{P}_{v}\) denotes the set of all paths in \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\) starting from node \(v\) of any length.
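Since we only consider paths up to a fixed length \(k\), all three collections can be enumerated explicitly. The sketch below shows one way to do so; note that the paper's implementation uses depth-first search for both cases (see the complexity discussion below), whereas we use a level-wise expansion for the shortest-path collections here because it makes the shortest-path property explicit. The conventions (adjacency as a dict of neighbor sets, paths as node tuples grouped by length) are our own illustrative choices, not the reference code.

```python
from collections import defaultdict

def all_simple_paths(adj, source, k):
    """AP: all simple paths of length 1..k starting at `source` (DFS)."""
    paths = defaultdict(list)            # length -> list of node tuples
    stack = [(source,)]
    while stack:
        path = stack.pop()
        for u in adj[path[-1]]:
            if u in path:                # paths contain distinct vertices
                continue
            new = path + (u,)
            paths[len(new) - 1].append(new)
            if len(new) - 1 < k:
                stack.append(new)
    return paths

def shortest_paths(adj, source, k, keep_all=True):
    """SP+ (keep_all=True) or SP (keep_all=False): (all) shortest paths of
    length 1..k from `source`, built by extending shortest paths level-wise."""
    dist = {source: 0}
    paths = defaultdict(list)
    frontier = [(source,)]
    for length in range(1, k + 1):
        nxt, seen_this_level = [], set()
        for path in frontier:
            for u in adj[path[-1]]:
                if dist.get(u, length) < length:
                    continue             # u is reachable by a shorter route
                if not keep_all and u in seen_this_level:
                    continue             # SP: keep one shortest path per target
                dist[u] = length
                seen_this_level.add(u)
                paths[length].append(path + (u,))
                nxt.append(path + (u,))
        frontier = nxt
    return paths
```

The \(\mathcal{AP}\) enumeration also makes the exponential growth discussed in the complexity analysis explicit: every neighbor extension of a simple path spawns a new stack entry.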
### Theoretical Framework In this section, we assume that for any graph \(G\), we have the collections of all paths of maximum length \(K\) at our disposal for all nodes in \(G\). We furthermore consider any collection of paths \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\) that is induced by a graph \(G\). All proofs can be found in Appendix A. We begin by introducing the concept of _WL-Trees_ and _Path-Trees_. Path-Trees are an intuitive path-based, rather than walk-based, analogue to WL-Trees. We will then show that Path-Trees are not able to distinguish graphs at a node-level for all collections of paths, which will motivate us to instead define a model operating on annotated sets of paths, which we will show to disambiguate graphs at least as well as WL-Trees. Figure 1: (a) A graph G with \(n=6\) vertices. (b) Shortest paths of length up to \(2\) starting from vertex \(v_{1}\), (c) all shortest paths of length up to \(2\) starting from vertex \(v_{1}\) and (d) all paths of length up to \(3\) starting from vertex \(v_{1}\). WL-Trees of a given root node are constructed one level at a time. At each iteration, level \(k+1\) of a WL-Tree is constructed by setting the children of any node at level \(k\) to its direct neighbors. In the context of the WL algorithm, the color of a node \(v\) at iteration \(k\) of the 1-WL algorithm represents a tree structure of height \(k\) rooted at \(v\). The WL-Tree of the graph in Figure 1(a) is provided in Figure 2(a). Similarly, we define Path-Trees, where we make use of the notation \(\mathcal{T}\) to denote all Path-Trees, and \(\mathcal{T}^{k}\) to denote Path-Trees of height \(k\). **Definition 3.1**.: (Path-Tree rooted at \(v\)) Let \(G=(V,E)\) and \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\) be a collection of any type-specific paths in \(G\). A Path-Tree \(P_{v}\subseteq\mathcal{T}\), induced by \(\mathcal{P}_{v}\) for \(v\in V,\) is a tree such that the node set at level \(k\) of the tree is equal to the multiset of nodes that occur at position \(k\) in the paths in \(\mathcal{P}_{v}^{k}\), i.e., \(\{\!\!\{u:\pi(k)=u\text{ for }\pi\in\mathcal{P}_{v}^{k}\}\!\!\}\). Nodes at level \(k\) and level \(k+1\) in the tree are connected if and only if they occur in adjacent positions \(k\) and \(k+1\) in a given path in the set of paths \(\mathcal{P}_{v}\), i.e., \((x,y)\in P_{v}\) if \(\pi(k)=x\) and \(\pi(k+1)=y\) for any \(\pi\in\mathcal{P}_{v}\) such that each node at level \(k+1\) is connected only to a single node at level \(k\). The height \(k\) Path-Tree \(P_{v}^{k}\subseteq\mathcal{T}^{k}\) rooted at \(v\) corresponds to the Path-Tree of \(v\) pruned from all levels \(l>k\). Different collections of paths lead to different Path-Trees, as shown in Figure 2. To give the reader a more practical understanding of Path-Trees, we now explain how a height \(k\) Path-Tree can be constructed from any collection of paths \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\). We iteratively build the Path-Tree for a given source node \(v\) one level \(k\in\{1,\ldots,K\}\) at a time, where we make use of the subset \(\mathcal{P}_{v}^{k}\) of \(\mathcal{P}_{v}\subseteq\mathcal{P}\) to construct level \(k\) of the Path-Tree. In the first iteration we add the root node \(v\) to the Path-Tree. Then, at subsequent iterations we add the multiset of nodes at position \(k\) in the paths in \(\mathcal{P}_{v}^{k}\) to the Path-Tree.
We then iteratively connect these added nodes to the Path-Tree via edges, where a given node \(u\) of the added nodes is connected to a single leaf \(w\) in the existing Path-Tree such that the ordered set of ancestors of \(w\) in the Path-Tree is identical to the ordered set of nodes preceding \(u\) in its path of length \(k\) from which the addition of \(u\) resulted. Intuitively, for a given graph \(G\) and a given node \(v\), the \(\mathcal{SP}\)-Tree rooted at \(v\) is a subgraph of the \(\mathcal{SP}^{+}\)-Tree, which is itself a subgraph of the \(\mathcal{AP}\)-Tree. Another interesting property of Path-Trees is provided in the following lemma. **Lemma 3.2**.: _Let \(G=(V,E)\) and \(\mathcal{P}\in\{\mathcal{SP},\mathcal{SP}^{+},\mathcal{AP}\}\) be any collection of paths in \(G\). Let \(P_{v}^{k}\) be the Path-Tree rooted at \(v\) of height \(k\) and \(W_{v}^{k}\) be the WL-Tree rooted at \(v\) of height \(k\) for \(v\in V\). Then \(P_{v}^{k}\) is a subgraph of \(W_{v}^{k}\)._ Lemma 3.2 can be easily proved by noting that a WL-Tree rooted at node \(v\) of height \(k\) is equivalent to an enumeration of all walks that start at \(v\) of length up to \(k\). Since paths are a special case of walks, Path-Trees are subgraphs of WL-Trees, for all collections of paths. Each type of Path-Tree contains a different type of structural information over the input graph. \(\mathcal{SP}\) and \(\mathcal{SP}^{+}\) Trees contain the most compressed information over shortest paths. In particular, they can only answer the question of _what is the shortest path distance between a given node and any other node in \(G\)?_ The \(\mathcal{AP}\)-Trees encode the largest amount of structural information over the input graph. Besides, they are not height-limited by the graph's diameter. However, they usually grow exponentially with the height \(k\) and the density of the input graph. We now state our main theorem on the relation between \(\mathcal{AP}\)-Trees and WL-Trees. **Theorem 3.3**.: _Let \(G=(V,E)\) and \(\mathcal{AP}\) be the collection of all simple paths in \(G\). Let \(\{AP_{v}^{k},AP_{u}^{k}\}\subseteq\mathcal{T}^{k}\) be the \(\mathcal{AP}\)-Trees of height \(k\) rooted at \(v\) and \(u\), respectively, and \(W_{v}^{k}\), \(W_{u}^{k}\) be the WL-Trees of height \(k\) rooted at \(v\) and \(u\), respectively, for \(v,u\in V\). If \(W_{v}^{k}\) is structurally different (i.e., not isomorphic) from \(W_{u}^{k}\), then \(AP_{v}^{k}\) is structurally different from \(AP_{u}^{k}\). Additionally, \(AP_{v}^{k}\) and \(AP_{u}^{k}\) can be different even if \(W_{v}^{k}\) and \(W_{u}^{k}\) are identical._ Theorem 3.3 states that, if at iteration \(k\) 1-WL decides that two nodes have different colors, then their \(\mathcal{AP}\)-Trees are structurally different. Additionally, it states that \(\mathcal{AP}\)-Trees are also able to disambiguate nodes that the 1-WL algorithm would determine to be structurally similar. Hence, a model \(f:\mathcal{AP}\rightarrow\mathbb{R}^{d}\) that embeds \(\mathcal{AP}\)-Trees into \(d\)-dimensional
vectors, such that non-isomorphic trees are embedded into different vectors, and that is equipped with an injective readout function, is more expressive than the 1-WL algorithm. Figure 2: (a) WL-Tree rooted at node \(v_{1}\) for the graph of Figure 1(a). (b) One of the two \(\mathcal{SP}\)-Trees rooted at \(v_{1}\) of height \(2\). (c) \(\mathcal{SP}^{+}\)-Tree rooted at \(v_{1}\) of height \(2\) and (d) \(\mathcal{AP}\)-Tree rooted at \(v_{1}\) of height \(3\). Note that we consider only \(\mathcal{SP}\) and \(\mathcal{SP}^{+}\)-Trees of height up to \(2\) because there is no shortest path of length higher than \(2\) that starts at \(v_{1}\) in the input graph. Note that only considering \(\mathcal{AP}\)-Trees up to a fixed height \(K\) is sufficient to distinguish a broad class of graphs. Now that we have considered the case of all paths \(\mathcal{AP}\), we turn our attention to the case of the shortest paths in \(\mathcal{SP}\) and \(\mathcal{SP}^{+}\). In Proposition 3.4 we show that \(\mathcal{SP}\)-Trees and \(\mathcal{SP}^{+}\)-Trees are unable to capture all differences in WL-Trees. **Proposition 3.4**.: _Let \(G=(V,E)\) and \(\mathcal{P}\) be the collection of either single shortest paths \(\mathcal{SP}\) or all shortest paths \(\mathcal{SP}^{+}\) in \(G\). Let \(\{P^{k}_{v},P^{k}_{u}\}\subseteq\mathcal{T}^{k}\) be the \(\mathcal{P}\)-Trees of height \(k\) rooted at \(v\) and \(u\), respectively, and \(W^{k}_{v}\), \(W^{k}_{u}\) be the WL-Trees of height \(k\) rooted at \(v\) and \(u\), respectively, for \(v,u\in V\). If \(W^{k}_{v}\) is structurally different than \(W^{k}_{u}\), then \(P^{k}_{v}\) is not necessarily structurally different from \(P^{k}_{u}\)._ Hence, Path-Trees based on the collections of shortest paths do not necessarily allow us to disambiguate individual nodes structurally even when the WL test does so. We therefore propose to operate on annotated sets of paths instead of Path-Trees. We use \(\tilde{\mathcal{SP}},\tilde{\mathcal{SP}}^{+}\) and \(\tilde{\mathcal{AP}}\) to denote annotated paths in the single shortest path, all shortest path and all simple path collections, respectively. The annotations of nodes in \(\tilde{\mathcal{P}}\in\{\tilde{\mathcal{SP}},\tilde{\mathcal{SP}}^{+},\tilde{\mathcal{AP}}\}\) depend on the length of the considered path. For paths of length 1, i.e., \(\tilde{P}^{1}\in\tilde{\mathcal{P}}\), all nodes have equal annotations. For paths of length 2, i.e., \(\tilde{P}^{2}\in\tilde{\mathcal{P}}\), all nodes \(v\) in these paths are annotated with hashes of their respective annotated path sets of length 1, i.e., \(\tilde{P}^{1}_{v}\). In general, for paths of length \(k\), i.e., \(\tilde{P}^{k}\in\tilde{\mathcal{P}}\), all nodes \(v\) in these paths are annotated with hashes of their respective annotated path sets of length \(k-1\), i.e., \(\tilde{P}^{k-1}_{v}\). Each annotation of paths of length \(k>2\) is therefore a multiset of sequences of annotations. In Theorem 3.5 we demonstrate that any annotated path set \(\tilde{\mathcal{P}}\in\{\tilde{\mathcal{SP}},\tilde{\mathcal{SP}}^{+},\tilde{\mathcal{AP}}\}\) allows us to disambiguate individual nodes structurally at least as well as the WL test. Consequently, any model composed of injective functions that operates on annotated sets of paths, which notably includes the PathNNs that we will define in Section 3.3, is able to disambiguate graphs at least as well as the WL algorithm.
**Theorem 3.5**.: _Let \(G=(V,E)\) and \(\tilde{\mathcal{P}}\in\{\tilde{\mathcal{SP}},\tilde{\mathcal{SP}}^{+},\tilde{\mathcal{AP}}\}\) denote an annotated set of paths, in which every node is annotated according to a recursive annotation scheme, in which a node \(v\) occurring in a path of length \(k\) is annotated by the hash of \(v\)'s annotated path sets of length \(k-1\), i.e., \(\tilde{P}^{k-1}_{v}\), and the initial annotations of all nodes are equal. Let \(\tilde{P}^{k}_{v}\) and \(\tilde{P}^{k}_{u}\) be the annotated sets of paths of length \(k\) emanating from nodes \(v,u\in V\), respectively, and \(W^{k}_{v}\), \(W^{k}_{u}\) be the WL-Trees of height \(k\) rooted at \(v\) and \(u\), respectively, for \(v,u\in V\)._ _Then, if \(W^{k}_{v}\) and \(W^{k}_{u}\) are unequal, then \(\tilde{P}^{k}_{v}\) and \(\tilde{P}^{k}_{u}\) are unequal. Additionally, \(\tilde{P}^{k}_{v}\) and \(\tilde{P}^{k}_{u}\) can be different even if \(W^{k}_{v}\) and \(W^{k}_{u}\) are identical._ We want to remark that, depending on the particular shortest path that is sampled in the single shortest path collection \(\tilde{\mathcal{SP}}\), it is possible that even isomorphic nodes have different annotated sets of paths. For \(\tilde{\mathcal{SP}}^{+}\) and \(\tilde{\mathcal{AP}}\), isomorphic nodes always have corresponding identical annotated sets of paths. Therefore, models operating on \(\tilde{\mathcal{SP}}^{+}\) and \(\tilde{\mathcal{AP}}\) are strictly more powerful than the WL algorithm for graph isomorphism, while models operating on \(\tilde{\mathcal{SP}}\) can only disambiguate graphs at least as well as the WL algorithm and are not strictly more powerful, since isomorphic graphs could also be mapped to different representations. ### Model Architecture We adopt a message passing scheme that iteratively updates node representations using paths of increasing length. This message passing scheme allows us to refine sets of paths with additional structural information included in paths of length smaller than \(K\). Noting that paths are ordered sequences of nodes, PathNNs learn to embed paths into \(d\)-dimensional feature vectors, using a function operating on sequences \(f:\mathbb{R}^{k\times d}\rightarrow\mathbb{R}^{d},\) where \(k\) is the path length. PathNNs employ a message-passing scheme that aggregates path embeddings to form updated node representations. Specifically, at each layer \(k\), paths of length \(k\) are embedded by \(f\) and aggregated. Let \(\pi\) be any path of length \(k\). Then, PathNNs compute the embedding of \(\pi\) by using the node representations of layer \(k-1\) \[\mathbf{h}^{(k)}_{\pi}=f\left(\left[\mathbf{h}^{(k-1)}_{\pi(k+1)},\ldots,\mathbf{h}^{(k-1)}_{\pi(1)}\right]\right). \tag{1}\] PathNNs start to encode paths of length \(1\) in the first layer (i.e., sequences of a node and its neighbors' representations), then iteratively encode paths of increasing length. Path representations form the message in the message passing architecture and are used to update node feature vectors in a similar way to traditional MPNNs \[\mathbf{a}^{(k)}_{v}=\text{AGGREGATE}^{(k)}\left(\{\!\!\{\mathbf{h}^{(k)}_{\pi}\,|\,\pi\in\mathcal{P}^{k}_{v}\}\!\!\}\right),\] \[\mathbf{h}^{(k)}_{v}=\text{COMBINE}^{(k)}\left(\mathbf{h}^{(k-1)}_{v},\,\mathbf{a}^{(k)}_{v}\right).\] Paths of length up to \(K\) are given as input to PathNNs by stacking \(K\) layers. Figure 3 illustrates the functioning of PathNNs.
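To make this scheme concrete, the following sketch implements one PathNN layer in PyTorch, with a sum aggregator, a residual combine step, and an LSTM over reversed paths as the sequence function \(f\) (matching the instantiation detailed below). It is a simplified illustration under our own assumptions — all length-\(k\) paths are batched into a single index tensor and the update function \(\phi\) is taken to be the identity — not the reference implementation.

```python
import torch
import torch.nn as nn

class PathNNLayer(nn.Module):
    """One PathNN layer: embed every length-k path with an LSTM (Eq. 1),
    sum the path embeddings per starting node (AGGREGATE), then apply a
    residual update with batch normalization (COMBINE)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.norm = nn.BatchNorm1d(hidden_dim)

    def forward(self, h, paths):
        # h: (n_nodes, d) node representations from layer k-1.
        # paths: (n_paths, k+1) long tensor of node indices; column 0 holds
        # the starting node pi(1) of each path.
        seq = h[paths.flip(dims=[1])]         # reversed paths: pi(k+1)..pi(1)
        _, (last_hidden, _) = self.lstm(seq)  # f embeds each path (Eq. 1)
        path_emb = last_hidden.squeeze(0)     # (n_paths, d)
        agg = torch.zeros_like(h)             # sum-AGGREGATE per start node
        agg.index_add_(0, paths[:, 0], path_emb)
        return self.norm(h + agg)             # residual COMBINE + BN
```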
Graph representations are obtained using a permutation-invariant readout function \[\mathbf{h}_{G}=\text{READOUT}\left(\left\{\mathbbm{h}^{(K)}_{v}|v\in V \right\}\right).\] Without loss of generality, we propose to use a simple instance of PathNNs in this paper. The function \(f\) operating on sequences is modeled and learned by a Long-Short-Term-Memory (LSTM) cell (Hochreiter and Schmidhuber, 1997). LSTMs are a valid choice for the function \(f\) thanks to the universal approximation results for Recurrent Neural Network (Hammer, 2000): they can approximate any injective sequence function arbitrarily well in probability, which is a desirable and necessary property for \(f\). Note that the last element in the sequence is the starting node representation. The LSTM operates on reversed paths as we hypothesize that the most important information in the sequence is the starting node's current representation. We project node initial feature vectors to match the LSTM's hidden dimension with a 2-layer Multilayer Perceptron (MLP). Using paths embeddings, PathNNs then update node representations using the following rule \[\mathbf{g}_{v}^{(k)} =\text{NORM}^{(k)}\left(\mathbf{h}_{v}^{(k-1)}+\sum_{\pi\in \mathcal{P}_{v}^{k}}\mathbf{h}_{\pi}^{(k)}\right), \tag{2}\] \[\mathbf{h}_{v}^{(k)} =\phi\left(\mathbf{g}_{v}^{(k)}\right). \tag{3}\] Noting that the collection of paths \(\mathcal{P}_{v}^{k}\) grows larger with graph density, the right hand side of Equation (2) can be of very high magnitude, leading to numerical instabilities. A Batch Normalization (BN) layer (Ioffe and Szegedy, 2015) is thus applied in Equation (2) to avoid such situations during training. After normalization, node representations are passed through a \(\phi\) function, which can be either the identity function or a 2-layer MLP. Both functions are a valid choice as sum and 2-layer MLPs are injective functions over multisets (Xu et al., 2019). Equipped with the identity aggregation function, the number of parameters of PathNNs only slightly increases with path length (caused by BN parameters), resulting in a low number of trainable parameters even for higher path lengths. Replacing the identity function with an MLP allows finer updated node representations but leads to higher time and memory complexity. Finally, PathNNs use the sum over the multiset of final node representations as a readout function to produce a vector representation of the entire graph \[\mathbf{h}_{G}=\sum_{v\in V}\mathbf{h}_{v}^{(K)}. \tag{4}\] ### Time and Space Complexity To enumerate all shortest paths and all simple paths of length up to \(k\) from a source vertex to all other vertices, we use the depth-first search (DFS) algorithm. The time complexity of the algorithm is at most \(\mathcal{O}(b^{k})\), where \(b\) is the branching factor of the graph which is upper bounded by the maximum of the nodes' degrees. Thus, for all the nodes of the graph, the time complexity is \(\mathcal{O}(nb^{k})\). The space complexity is \(\mathcal{O}(nbk)\) if duplicate nodes are not eliminated. Real-world graphs are often small (e. g., molecules) and/or sparse (e. g., social networks), i.e., \(n\) is often small and/or \(b\ll n\). Thus, for bounded \(k\), the time and space complexity for enumerating the paths is not prohibitive. The running time of the model depends on the number of paths which is \(\mathcal{O}(nb^{k})\). Typically, the larger the value of \(k\), the larger the number of paths and therefore, the complexity of our model increases as a function of \(k\). 
For \(k=1\), the complexity of our model is comparable to that of standard GNNs which aggregate \(2m\approx nb\) representations. However, for larger values of \(k\), the time complexity of our model is greater than that of standard GNNs, which only aggregate \(2m\) representations in all neighborhood aggregation layers. Figure 3: Aggregation process of PathNN with \(\mathcal{P}=\mathcal{SP}\) for two iterations. Node colors correspond to node feature vectors. We report the empirical running times of our PathNNs on two real-world datasets in Appendix B. ## 4 Experimental Evaluation We now evaluate the performance of our PathNNs in synthetic experiments specifically designed to exhibit the expressiveness of GNNs in Section 4.1 and on a range of real-world datasets in Section 4.2. ### Synthetic Datasets **Datasets.** We use \(3\) publicly available datasets: (1) the Circular Skip Link (CSL) dataset (Murphy et al., 2019); (2) the EXP dataset (Abboud et al., 2021); (3) the SR dataset. **Experimental setup.** For all experiments, the aggregation function \(\phi\) of Equation (3) is set to the identity function and the normalization layer of Equation (2) is removed. We set initial node features to be vectors of ones, and process them using a \(2\)-layer MLP. A 1-layer MLP is applied to the final graph representation to generate predictions. The benchmarks CSL and EXP-Class require the classification of graphs into isomorphism classes. To evaluate the model's performance, we used \(5\)-fold cross validation on CSL and \(4\)-fold cross validation on EXP-Class. Graphs contained in the CSL dataset have a maximum diameter of \(11\). We thus set \(K\) to \(11\) for PathNN-\(\mathcal{SP}\) and PathNN-\(\mathcal{SP}^{+}\). \(K\) is set to \(5\) for PathNN-\(\mathcal{AP}\). As EXP graphs are disconnected, we empirically set \(K\) to \(10\) for PathNN-\(\mathcal{SP}\) and PathNN-\(\mathcal{SP}^{+}\) and \(5\) for PathNN-\(\mathcal{AP}\) since these values allow the models to achieve perfect performance in this task. For both experiments and all models, we train for \(200\) epochs using the Adam optimizer with learning rate \(10^{-3}\). The hidden dimension size is set to \(64\). Batch sizes are set to \(32\) except for PathNN-\(\mathcal{AP}\) where we set it to \(8\) for CSL and \(16\) for CEXP to be able to fit all paths in memory. For PathNN-\(\mathcal{AP}\), we apply Euclidean normalization before feeding the representations to the LSTM. Euclidean normalization is used instead of BN because the latter results in training instabilities. All PathNN-\(\mathcal{AP}\) models are trained with distance encoding. For the SR and EXP-Iso datasets, we investigate whether the proposed models have the right inductive bias to distinguish these pairs of graphs. Similarly to Bodnar et al. (2021), we consider two graphs isomorphic if the Euclidean distance between their representation is below a fixed threshold \(\varepsilon\). Graph representations are computed by an untrained model variant where the prediction layer is removed. We remove the normalization layer of Equation (2) and instead apply Euclidean normalization to the LSTM's input. For EXP-Iso, we use the same \(K\) as in the experiments on EXP-Class. For the SR datasets, we set \(K\) to \(4\) as retrieving all paths of lengths higher than \(4\) is computationally challenging on these densely connected graphs. Hidden dimension size is set to \(16\) and \(\varepsilon\) to \(10^{-5}\). The experiment is repeated with \(5\) different seeds.
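The isomorphism check used for SR and EXP-Iso therefore reduces to thresholding the distance between untrained-model embeddings. A minimal sketch of this protocol, with function and variable names of our own choosing:

```python
import numpy as np

def distinguished(embed, graph_a, graph_b, eps=1e-5):
    """True if the (untrained, prediction-layer-free) model tells the two
    graphs apart, i.e. their representations differ by more than eps."""
    return np.linalg.norm(embed(graph_a) - embed(graph_b)) > eps
```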
**Results.** The results are given in Table 1 and in Figure 4. We observe that the two most expressive variants of the proposed model, namely Path-\(\mathcal{SP}^{+}\) and Path-\(\mathcal{AP}\), can distinguish all \(10\) isomorphism classes present in the CSL dataset. On the other hand, PathNN-\(\mathcal{SP}\) fails to distinguish one of the \(10\) classes, but still significantly outperforms the GIN model. All three variants of the proposed model achieve perfect accuracy on EXP-Class and never fail on the EXP-Iso dataset. Those datasets contain graphs that are not distinguishable by 1-WL, and thus, not surprisingly, GIN maps the graphs of each pair to the same vector. Finally, we evaluate the most expressive variant of our model (i. e., PathNN-\(\mathcal{AP}\)) on the SR dataset whose instances contain graphs not distinguishable by 3-WL. We can see in Figure 4 that in most cases, PathNN-\(\mathcal{AP}\) can successfully distinguish more than \(80\%\) of the graphs contained in each instance of the dataset. SR\((29,14,6,7)\) seems to be particularly hard for PathNN-\(\mathcal{AP}\) since its failure rate on this instance is high compared to its performance on other instances. Overall, our experiments confirm the high expressiveness of the proposed model in terms of distinguishing non-isomorphic graphs. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & **CSL \(\uparrow\)** & **EXP-Class \(\uparrow\)** & **EXP-Iso \(\downarrow\)** \\ \hline GIN (Xu et al., 2019) & \(10.0\pm 0.0\) & \(50.0\pm 0.0\) & 600 \\ 3WLGNN (Maron et al., 2019) & \(97.8\pm 10.9\) & \(\textbf{100.0}\pm 0.0\) & **0** \\ \hline PathNN-\(\mathcal{SP}\) & \(90.0\pm 0.0\) & \(\textbf{100.0}\pm 0.0\) & **0** \\ PathNN-\(\mathcal{SP}^{+}\) & \(\textbf{100.0}\pm 0.0\) & \(\textbf{100.0}\pm 0.0\) & **0** \\ PathNN-\(\mathcal{AP}\) & \(\textbf{100.0}\pm 0.0\) & \(\textbf{100.0}\pm 0.0\) & **0** \\ \hline \hline \end{tabular} \end{table} Table 1: Test set classification accuracy (CSL, EXP-Class) and number of undistinguished pairs of graphs (EXP-Iso). Best results are highlighted in bold. Figure 4: Failure rate on different instances of the SR dataset. ### Real-World Datasets **Datasets.** We evaluate the proposed model on \(6\) datasets contained in the TUDataset collection (Morris et al., 2020): DD, NCI1, PROTEINS, ENZYMES, IMDB-B and IMDB-M. We also evaluate the proposed model on ogbg-molhiv, a molecular property prediction dataset from the Open Graph Benchmark (OGB) (Hu et al., 2020). We conduct an experiment on the ZINC 12K dataset (Dwivedi et al., 2020). Finally, we experiment with Peptides-struct and Peptides-func (Dwivedi et al., 2022), two datasets that require long-range dependencies between nodes to be captured. **Experiment setup.** In all experiments, we use 2-layer MLPs with BN to map initial node representations to the desired dimension, and 2-layer MLPs without BN for prediction. All PathNN-\(\mathcal{AP}\) are trained with distance encoding. Following Errica et al. (2020), we evaluate TUDatasets using a \(10\)-fold cross validation using their provided data splits. We let \(\phi\) be a ReLU non-linearity and choose hidden dimension size from \(\{32,64\}\). We apply dropout to the input of the LSTM layer and between the first and second layer of the final MLP. The dropout rate is chosen from \(\{0,0.5\}\). We run the experiment for various values of \(K\in\{1,2,3\}\) to analyze the effect of increased path length on performance. 
The batch size is set to \(32\) for all datasets and all values of \(K\) except for simple paths on DD with \(K=2\) where we had to decrease batch size to \(16\) because of memory constraints. The Adam optimizer is used with fixed learning rate \(0.001\). The diameter of the graphs contained in IMDB-B and IMDB-M is at most equal to \(2\). Thus, for those two datasets, we do not provide results for paths of length up to \(K=3\). Similarly to Errica et al. (2020), we use one-hot encodings of given node attributes for all datasets except IMDB-B and IMDB-M, where we instead use one-hot encodings of node degrees. We also include the \(18\) continuous node feature vectors available for ENZYMES. We fit each model for \(500\) epochs and stop training if validation accuracy does not increase for \(250\) epochs. All other experiments are conducted using available data splits. We set \(\phi\) to a 2-layer MLP with BN for all experiments and set \(K\) and the layer's hidden size to respect the \(500K\) parameter budget for ZINC, Peptides-functional and Peptides-structural. Details about the hyperparameter configuration can be found in Appendix D. All of these datasets contain additional edge features. Similarly to the Distance aware LSTM cell described in Appendix C.1, we build an Edge LSTM cell that takes as input a sequence of node and edge representations. For \(\mathcal{AP}\), the LSTM cell encodes both edges in the path and distance from the central node. Further detail can be found in Appendix E. **Results.** Table 2 illustrates the classification accuracy achieved by the proposed PathNNs and the baselines on the six datasets from the TUDataset collection. We observe that our PathNNs outperform the baselines on \(5\) out of the \(6\) datasets. When \(K=1\), all three variants of our model are identical, since all path variants are identical when paths of length one are considered. We denote this variant by PathNN-\(\mathcal{P}\). On most datasets, PathNN-\(\mathcal{P}\) provides the highest accuracy. This is not surprising for IMDB-B and IMDB-M since the graphs contained in these datasets are ego-networks of radius \(2\). On the other hand, it appears that on DD and NCI1, more global information needs to be captured. On these two datasets, models that consider paths of length up to \(K=3\) achieve the highest accuracy. On some datasets, the proposed model significantly outperforms the baselines. 
Notably, on the ENZYMES, NCI1 and \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline & **DD** & **PROTEINS** & **NCI1** & **ENZYMES** & **IMDB-B** & **IMDB-M** \\ \hline DGCNN (Zhang et al., 2018) & \(76.6\pm 4.5\) & \(72.9\pm 3.5\) & \(76.4\pm 1.7\) & \(38.9\pm 5.7\) & \(69.2\pm 3.0\) & \(45.6\pm 3.4\) \\ DiffPool (Ying et al., 2018) & \(75.0\pm 4.3\) & \(73.7\pm 3.5\) & \(76.9\pm 1.9\) & \(59.5\pm 5.6\) & \(68.4\pm 3.3\) & \(45.6\pm 3.4\) \\ ECC (Simonovsky \& Komodakis, 2017) & \(72.6\pm 4.1\) & \(72.3\pm 3.4\) & \(76.2\pm 1.4\) & \(29.5\pm 8.2\) & \(67.7\pm 2.8\) & \(43.5\pm 3.1\) \\ GIN (Xu et al., 2019) & \(75.3\pm 2.9\) & \(73.3\pm 4.0\) & \(80.0\pm 1.4\) & \(59.6\pm 4.5\) & \(71.2\pm 3.9\) & \(48.5\pm 3.3\) \\ GraphsSAGE (Hamilton et al., 2017) & \(72.9\pm 2.0\) & \(73.0\pm 4.5\) & \(76.0\pm 1.8\) & \(58.2\pm 6.0\) & \(68.8\pm 4.5\) & \(47.6\pm 3.5\) \\ GAT (Velickovic et al., 2018) & \(73.9\pm 3.4\) & \(70.9\pm 2.7\) & \(77.3\pm 2.5\) & \(49.5\pm 8.9\) & \(69.2\pm 4.8\) & \(48.2\pm 4.9\) \\ SPN (\(K=1\)) (Abboud et al., 2022) & \(72.7\pm 2.6\) & \(71.0\pm 3.7\) & \(80.0\pm 1.5\) & \(67.5\pm 5.5\) & NA & NA \\ SPN (\(K=5\)) (Abboud et al., 2022) & \(77.4\pm 3.8\) & \(74.2\pm 2.7\) & \(78.6\pm 3.8\) & \(69.4\pm 6.2\) & NA & NA \\ PathNet (\(N=10\), \(K=2\)) (Sun et al., 2022) & \(\text{OM}\) & \(70.5\pm 3.9\) & \(64.1\pm 2.3\) & \(69.3\pm 5.4\) & \(70.4\pm 3.8\) & \(49.1\pm 3.6\) \\ Nested GNN (Zhang \& Li, 2021) & \(\text{\boldmath{77.8}}\pm 3.9\) & \(74.2\pm 3.7\) & NA & \(31.2\pm 6.7\) & NA & NA \\ \hline PathNN-\(\mathcal{P}\) (\(K=1\)) & \(76.9\pm 3.7\) & \(\text{\boldmath{75.2}}\pm 3.9\) & \(77.5\pm 1.6\) & \(\text{\boldmath{73.0}}\pm 5.2\) & \(\text{\boldmath{72.6}}\pm 3.3\) & \(\text{\boldmath{50.8}}\pm 4.5\) \\ PathNN-\(\mathcal{SP}\) (\(K=2\)) & \(75.3\pm 2.7\) & \(73.1\pm 3.1\) & \(82.0\pm 1.6\) & \(71.6\pm 6.4\) & \(70.8\pm 3.5\) & \(50.0\pm 4.1\) \\ PathNN-\(\mathcal{SP}\) (\(K=3\)) & \(77.0\pm 3.1\) & \(72.2\pm 2.7\) & \(82.2\pm 1.7\) & \(69.2\pm 4.7\) & - & - \\ PathNN-\(\mathcal{SP}^{+}\) (\(K=2\)) & \(74.7\pm 3.0\) & \(73.1\pm 3.7\) & \(81.0\pm 1.4\) & \(72.5\pm 5.3\) & \(70.5\pm 3.4\) & \(50.7\pm 4.5\) \\ PathNN-\(\mathcal{SP}^{+}\) (\(K=3\)) & \(76.5\pm 4.6\) & \(73.2\pm 3.3\) & \(\text{\boldmath{83.2}}\pm 1.9\) & \(70.4\pm 3.1\) & - & - \\ PathNN-\(\mathcal{AP}\) (\(K=2\)) & \(75.0\pm 4.4\) & \(73.1\pm 4.9\) & \(81.3\pm 1.8\) & \(71.8\pm 4.8\) & \(71.7\pm 3.6\) & \(49.8\pm 4.2\) \\ PathNN-\(\mathcal{AP}\) (\(K=3\)) & \(\text{OOM}\) & \(73.1\pm 4.0\) & \(82.3\pm 1.7\) & \(69.0\pm 5.3\) & OOM & OOM \\ \hline \hline \end{tabular} \end{table} Table 2: Classification accuracy (\(\pm\) standard deviation) of our PathNNs and the baselines on the datasets from the TUDataset collection. Best performance is highlighted in **bold**. OOM means out-of-memory and NA means not available. IMDB-M datasets, our PathNNs offer respective absolute improvements of \(3.6\%\), \(2.3\%\) and \(1.7\%\) in accuracy over the best competitor. Overall, our results indicate that PathNNs achieve high levels of performance on the TUDatasets. We next evaluate the proposed model on the two datasets that require long-range interaction reasoning to achieve strong performance. The results are shown in Table 3. We can see that the variants of our PathNNs outperform the baselines on both datasets. 
On Peptides-Functional, our model offers a respective absolute improvement of \(8.86\%\) in average precision over GCN, while on Peptides-Structural, it offers a respective absolute improvement of \(8.80\%\) in mean absolute error over GatedGCN. To summarize, our results suggest that PathNNs can better capture long-range interactions between nodes than the baseline models. Table 4 shows the ROC-AUC of the different methods on the ogbg-molhiv dataset. We observe that PathNN is ranked as the third best model on this dataset. Among the three variants of the proposed model, PathNN-\(\mathcal{SP}^{+}\) achieves the highest ROC-AUC score. The other two variants perform slightly worse than PathNN-\(\mathcal{SP}^{+}\). This experiment validates the effectiveness of the proposed model on large graph classification datasets. Finally, we evaluate the proposed model in a graph regression task on ZINC\(12\)K in Table 5. The three PathNN variants outperform most of the baselines on this dataset, despite, some of the baseline models, such as 3WLGNN, ESAN and GNNML3, being very expressive models considered to be state-of-the-art for many graph learning problems. With regards to the three PathNN variants, PathNN-\(\mathcal{AP}\) performs best, but all three models exhibit promising performance. ## 5 Conclusion We presented the PathNN model that aggregates path representations to generate node representations. We proposed three different variants that focus on single shortest paths, all shortest paths and all simple paths of length up to \(K\). We have shown some of our PathNNs to be strictly more powerful than the \(1\)-WL algorithm. Experimental results confirm our theoretical results. The different PathNN variants were also evaluated on graph classification and graph regression tasks. In most cases, our PathNNs outperform the baselines. ## Acknowledgements G.N. is supported by ANR via the AML-HELAS (ANR-19-CHIA-0020) project and by the funding Ecole Universitaire de Recherche (EUR) Bertip, plan France 2030. \begin{table} \begin{tabular}{l|c} \hline \hline & **ogbg-molhiv \(\uparrow\)** \\ \hline GCN (Kipf and Welling, 2017) & 76.06 \(\pm\) 0.97 \\ GIN (Xu et al., 2019) & 75.58 \(\pm\) 1.40 \\ GCN-FLAG (Kong et al., 2020) & 76.83 \(\pm\) 1.02 \\ GIN+FLAG (Kong et al., 2020) & 76.54 \(\pm\) 1.14 \\ GSN (Bourikans et al., 2022) & 77.99 \(\pm\) 1.00 \\ HIMP (Fey et al., 2020) & 78.80 \(\pm\) 0.82 \\ PNA (Corso et al., 2020) & 79.05 \(\pm\) 1.32 \\ DGN (Besini et al., 2021) & 79.70 \(\pm\) 0.97 \\ Graphcym (Ying et al., 2021) & 80.51 \(\pm\) 0.53 \\ CIN (Bodnar et al., 2021a) & **80.94**\(\pm\) 0.57 \\ ESAN (Bevilacqua et al., 2022) & 78.00 \(\pm\) 1.42 \\ E-SWN (Abboud et al., 2022) & 77.10 \(\pm\) 1.20 \\ GRWNN (Nikocntezos and Vizingiannis, 2023) & 78.38 \(\pm\) 0.99 \\ AgentNet (Marinkus et al., 2023) & 78.33 \(\pm\) 0.69 \\ \hline PathNN-\(\mathcal{SP}\)\((K=2)\) & 78.62 \(\pm\) 1.30 \\ PathNN-\(\mathcal{SP}^{+}\)\((K=2)\) & 79.17 \(\pm\) 1.09 \\ PathNN-\(\mathcal{AP}\)\((K=2)\) & 78.84 \(\pm\) 1.46 \\ \hline \hline \end{tabular} \end{table} Table 4: ROC-AUC score (\(\pm\) standard deviation) of the different methods on the ogbg-molhiv dataset. Results are averaged over \(10\) random seeds. Best performance is highlighted in **bold**. 
\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & \(K\) & **Peptides-Functional \(\uparrow\)** & \(K\) **Peptides-Structural \(\downarrow\)** \\ \hline GCN (Kipf and Welling, 2017) & 5 & 93.06 \(\pm\) 0.023 & 5 & 0.3496 \(\pm\) 0.0013 \\ GIN (Xu et al., 2019) & 5 & 54.98 \(\pm\) 0.79 & 5 & 0.3547 \(\pm\) 0.0045 \\ GatedGCN (Kipf and Welling, 2017) & 5 & \(86.64\) \(\pm\) 0.77 & 5 & 0.3420 \(\pm\) 0.0013 \\ (Breison and Laurent, 2017) & 5 & \(86.16\) \(\pm\) 0.26 & 4 & 0.2545 \(\pm\) 0.0032 \\ PathNN-\(\mathcal{SP}^{+}\) & 8 & \(67.84\) \(\pm\) 0.52 & 4 & **0.2540**\(\pm\) 0.0046 \\ PathNN-\(\mathcal{AP}\) & 7 & \(68.07\) \(\pm\) 0.72 & 4 & 0.2569 \(\pm\) 0.0030 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the Peptides-Functional and Peptides-Structural datasets (\(\pm\) standard deviation). Evaluation metrics are Average Precision and Mean Absolute Error, respectively. Best performance is highlighted in **bold**. Parameter budget is set to \(500\)K parameters. Results are averaged over \(4\) random seeds. \begin{table} \begin{tabular}{l|c} \hline \hline & \(K\) & **ZINC12K \(\downarrow\)** \\ \hline GCN (Kipf and Welling, 2017) & 16 & 0.278 \(\pm\) 0.003 \\ GraphSAGE (Hamilton et al., 2017) & 16 & 0.398 \(\pm\) 0.002 \\ Moket (Monti et al., 2017) & 16 & 0.292 \(\pm\) 0.006 \\ GAT (Velikovic et al., 2018) & 16 & 0.384 \(\pm\) 0.007 \\ GIN (Xu et al., 2019) & 5 & 0.387 \(\pm\) 0.015 \\ GatedGCN (Bresson and Laurent, 2017) & 4 & 0.435 \(\pm\) 0.011 \\ GatedGCN-E (Bresson and Laurent, 2017) & 4 & 0.282 \(\pm\) 0.015 \\ RingCNN (Chen et al., 2019) & 2 & 0.353 \(\pm\) 0.019 \\ 3WLGNN (Maron et al., 2019a) & 3 & 0.407 \(\pm\) 0.028 \\ 3WLGNN-E (Maron et al., 2019a) & 3 & 0.256 \(\pm\) 0.054 \\ GNNML3 (Balcilar et al., 2021) & NA & 0.161 \(\pm\) 0.006 \\ Graphcym (Ying et al., 2012) & NA & 0.122 \(\pm\) 0.006 \\ CIN (Bodnar et al., 2021a) & NA & **0.079**\(\pm\) 0.006 \\ ESAN (Bevilacqua et al., 2022) & NA & 0.102 \(\pm\) 0.003 \\ KP-GIN (Feng et al., 2022) & NA & 0.093 \(\pm\) 0.007 \\ AgentNet (Marinkus et al., 2023) & NA & 0.258 \(\pm\) 0.033 \\ \hline PathNN-\(\mathcal{SP}\) & 4 & 0.104 \(\pm\) 0.002 \\ PathNN-\(\mathcal{SP}^{+}\) & 4 & 0.131 \(\pm\) 0.008 \\ PathNN-\(\mathcal{AP}\) & 4 & 0.090 \(\pm\) 0.004 \\ \hline \hline \end{tabular} \end{table} Table 5: Mean absolute error (\(\pm\) standard deviation) of the different methods on the ZINC\(12\)K datasets. Results are averaged over \(10\) random seeds. Best performance is highlighted in **bold**. Parameter budget is set to \(500\)K parameters.
2308.03888
Deep neural networks from the perspective of ergodic theory
The design of deep neural networks remains somewhat of an art rather than a precise science. By tentatively adopting ergodic theory considerations on top of viewing the network as the time evolution of a dynamical system, with each layer corresponding to a temporal instance, we show that some rules of thumb, which might otherwise appear mysterious, can be given heuristic explanations.
Fan Zhang
2023-08-04T10:55:56Z
http://arxiv.org/abs/2308.03888v1
# Deep neural networks from the perspective of ergodic theory ###### Abstract The design of deep neural networks remains somewhat of an art rather than a precise science. By tentatively adopting ergodic theory considerations on top of viewing the network as the time evolution of a dynamical system, with each layer corresponding to a temporal instance, we show that some rules of thumb, which might otherwise appear mysterious, can be given heuristic explanations. ## I Introduction and motivation Artificial neural networks have demonstrated great potential in their ability to learn existing knowledge, and interpolate or even slightly extrapolate to new situations. They however lack the ability to understand causations and other logical relations that indicate general intelligence. No matter, much of human activity is experience-based, and so the present incarnation of artificial intelligence algorithms is sufficient to revolutionize society. In particular, one can highlight medicine as one area where vast amounts of experiential enigmas persist, and a doctor's effectiveness is largely restricted by their capacity to learn rather than the ability to understand. Furthermore, human bodies are complex systems whereby heterogeneity creates self-organization, which an artificial neural network, as also a complex dynamical system, may be able to (intentionally or unintentionally) emulate. Thereby, a network's learning power may well be tunable to arise out of emergent phenomena that share traits with the processes underlying human body functions. In this way, a network could serve as a crude simulation, and subsequently stands a better chance at accurately and efficiently learning and storing medical knowledge, providing better interpolations and extrapolations than the usually naive and linear manner with which humans try to carry out such tasks. Despite such great potential and practical successes, a theory of deep learning is still lacking however, so we are in want of general guidelines as to what architecture might perform well. In such a pursuit, it is beneficial to be able to examine deep neural networks from as many perspectives as possible, thereby acquiring a more complete picture. In this brief note, we advocate also adopting the ergodic theory approach by showing how it could offer simple intuitive (although arguably hand-wavy at its present state of deployment) explanations for some properties that we have observed in the behavior of deep neural networks. We begin by relating some understood aspects of the networks to ergodicity concepts, which could then serve as the conduits connecting the two disciplines. ### As fitting functions to training data We are just now beginning to understand some aspects of the behavior of deep neural networks. For example, one's experiences with lower dimensional non-convex optimization problems might suggest that there would be an enormous number of local minima, on the cost function surface in parameter space, that trap our optimization procedure and keep it away from the optimal solution. This is in fact not a problem [1], because the local minima are replaced with saddle points in very high dimensions. The Hessian at critical points has many eigenvalues in high parameter dimensions, and it is more likely that they take on both positive and negative values, giving us saddle points, so we are less likely to be trapped, and could instead slip out along the downwardly curving directions.
But more fundamental than not being trapped, is that a good-enough solution is at all on that surface to begin with. In other words, \(\mathcal{C}_{1}:\)_the network needs to be sufficiently flexible so the surface extends a large range of cost values._ Another aspect of neural networks that people had expected to be a problem is statistical sample complexity, i.e., having too many parameters, even more than the training data sets, should lead to overfitting, so while fitting to the (possibly noisy) training data may be perfect, interpolation, let alone extrapolation, would not work as the overfitted function may oscillate wildly in-between the training data points on which they are pinned. This behavior is not observed in reality. One explanation is proposed by [2], which rigorously proved that having many parameters being unimportant for the fitting quality is key to the effectiveness of overfitted linear regression. An intuitive understanding being offered in literature (see e.g., [3]) is that there are then many different solutions that work rather well, differing only in the unimportant parameters, thus it becomes much easier to find them. However, we note that for any fitting problem, one could always introduce completely spurious parameters that don't appear in the actual fitting procedure (we just vacuously append them to the formal parameter set to be fitted), thus are unimportant to the extreme, but this should not change at all the quality of output of that fitting procedure. So this explanation would make sense only if the original procedure is not overfitting to begin with, or in other words, so many of the fitting parameters in the linear regression are unimportant or spurious that the surviving important ones are so few in number that there is no overfitting in reality. This is thus a rather trivial explanation - the overfitting problem is not problematic if for the problem at hand the fitting architecture is nothing but superficially over-parameterized. This does not appear to be the case for deep neural networks though, for changing the network structure, especially the number of neurons, appears to alter the fitting quality. Therefore, an alternative explanation for why overfitting does not present a catastrophic snag may be needed. To this end, we first note that condition \(\mathcal{C}_{1}\) already helps, as overfitting often occurs in regression problems when the fitting curve is just sufficiently flexible to hit all the training data points if we strong-arm it, but not enough to do so smoothly if noise is present or the choice of basis functions is not appropriate. This rigidity means the fitting curves have to take a large detour to make the correct directional changes in order to hit the next training point needing to be fitted, much like how a fast aircraft needs to make a very large divagation in order to make a turn. When we do interpolation or extrapolation, we have to sample from within those large detours and thus yield terrible results. When the number of parameters grows much larger than the training point population however, the curve becomes extremely flexible, so it could possibly effortlessly (smoothly) transition between training data points, without having to take up some very contorted shape that shoots off to large extremes in the intervening values. This is the case with deep neural networks that are complex and flexible (see e.g., [4; 5] for neural networks as universal approximators). This is not enough though.
If the fitting quality to the training data is the only criteria in the cost function, then there is in principle no guarantee that the flexible fitting curve won't still whip around like crazy. That is, the smooth curves exist, but are not necessarily the ones we get if we do not deliberately look for them. To cure this, one typically introduces these so-called regularizers into the cost functions to penalize undesired parameters that could cause instabilities of the output, as one rather contrived approach to achieve \(\mathcal{C}_{2}:\)_the network needs to be sufficiently well-posed so the output doesn't depend so extremely sensitively on initial data that infinitesimally separated initial data points lead to wildly diverging outcomes._ ### As dynamical systems The conditions \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are qualitative and difficult to translate into some enforceable quantitative criteria on network design. To make further progress, we enlist the dynamical systems view of neural networks (see e.g., [6]), whereby each layer of neurons corresponds to a time instance of a dynamical system, so updates propagating through the network according to some given initial data behave like a time evolution of the system within a state space having the same dimension as the number of neurons in each layer, where the number of layers becomes the number of discrete1 time steps. This dynamical systems perspective had already yielded important insight, e.g., Lemma 1 of [8], which is a well-established result applicable to the numerical methods for solving differential equations, essentially imposes a necessary condition for \(\mathcal{C}_{2}\). On the other hand, the same paper also recognizes the importance for the forward propagation to not be too lossy (i.e., the Lyapunov exponents cannot all be too negative, or the state space volume shrinks very quickly, losing the ability to distinguish information in the initial data), essentially offering a caution on what might definitely violate \(\mathcal{C}_{1}\). Footnote 1: Previous literature would typically go to the continuous limit and adopt differential equation results in order to glean some insight into the behavior and properties of the neural networks (see in particular [7], where some criteria for neural ODEs to develop chaotic behavior are worked out). This strategy is not suitable for us however, since the chaotic nature, or lack thereof, differs quite drastically for discrete and continuous systems, not least because the requirement for no-self-intersection or uniqueness of trajectories is so very much relaxed for discrete evolutions. The (spatial) differentiability also effectively restricts \(K_{\alpha}^{[1]\beta}\) of Eq. (3) below to tridiagonal or other nearly diagonal (depending on the order of derivatives and the finite difference scheme adopted) forms that limit the dependence of a neuron’s time evolution to only its nearest neighbours. In this note, we further enlist the powerful mathematical tool of ergodic theory2 to help tighten the discussion, inching a little more towards necessary and sufficient conditions. In this language, \(\mathcal{C}_{1}\) translates to a desirability to have ergodicity3, while \(\mathcal{C}_{2}\) means we should avoid mixing4. This preference to wedge in-between ergodicity and mixing is easiest to understand in regard to classification tasks, where we need ergodicity to be able to move any initial point almost anywhere else in state space to achieve segregation (see e.g., Fig. 3 in [6]),
but not mixing, which causes neighbourhoods to all get mangled together so that there is no segregation possible, and classification ends up having to be done point-wise while interpolation becomes impossible. Footnote 2: The usual ergodic theory studies measure-invariant or state space volume preserving evolutions, or in other words, there is no dissipation in the system, which may not be the case for deep neural networks. But dissipation may not matter directly, because the sum of _all_ Lyapunov exponents determines whether we have dissipation, but chaos (\(\exists\) at least one positive exponent), ergodicity, weakly-mixing (only constant functions are the eigenfunctions corresponding to the exponents as eigenvalues) and entropy (sum of all positive exponents) are really more about the positive ones. Also, as discussed in the main text in the last paragraph, a well-designed neural network should not be too dissipative. Footnote 3: No invariant subsets that trap orbits and thus prevent effective migration in e.g., the classification problem. Beginning from any set of initial data, the bundle of trajectories will eventually cover almost all of the allowed state space (besides perhaps sets of zero measure; these don’t matter for statistical considerations, but if we are aiming to train the neural network to search for very special properties that nearly never occur in a big data set, these will be relevant, and so neural networks cannot produce masterpieces of exceptional quality; they are however efficient at churning out mediocrity). We can see this from the celebrated Birkhoff ergodic theorem that underlies statistical mechanics, which states that spatial average equals temporal average along the dynamical evolution, so any macrostate with non-vanishing measure must eventually be reached. Footnote 4: Stronger than ergodicity, requiring that memory of the initial data be completely lost as we take many steps, in the sense that the trajectories emerging from any initial data set will have to spread out over the entire state space completely randomly according to the probability distribution of the overall state space, with conditional probability conditioning on the initial data adding no further information. For mixing systems, trajectories from any two initially disjoint initial data sets get meshed together thoroughly and permanently after sufficiently long evolution. Note mere ergodicity could also mesh two bundles of trajectories out of two disjoint sets of initial data, but the meshing can be transitory, meaning the two bundles of trajectories can separate again, and then repeat in the meshing and separating cycle (cf., the claim that merely ergodic systems are not even apparently random) [9]. With mixing though, the meshing is permanent with no subsequent re-separation. In a way, one can visualize the two bundles as meeting (if they meet at all) transversely over and over again with an ergodic-but-not-mixing system, but they intersect and merge tangentially with a mixing system. We will hereafter refer to this delicately balanced wildness of the dynamical system's orbits as being on the edge of chaos, taking the view that strong mixing is a hallmark for chaotic systems5[9; 10], while merely ergodic systems are usually not chaotic. We caution that this relationship between chaos and ergodicity concepts is not rigorous, and is but a pragmatism that is useful for practical applications. We will adopt it in this note understanding this lack of rigor.
We will similarly be loose with terminologies in the interest of brevity, especially when migrating concepts defined for infinite time systems to finite ones, and those defined for measure-preserving systems to more general cases. ## II Network spectroscopy As always with dynamical systems, turning to spectral considerations tends to simplify computations. The relevant quantities for our considerations, in the case of deep neural networks of finite depth, are the finite time Lyapunov exponents, which measure the tendency of the dynamics to locally drive nearby trajectories apart (note the qualifying "finite time" does not mean they are gauges of the cumulative divergence across the entire network depth, they are still "per layer" quantities as the division by \([j]\) in Eq. 5 below shows), and can be seen as the average local Lyapunov exponents across the available depth of the network. Once we have these numbers, the various dynamical system qualities can be assessed, e.g., whether the system is dissipative (not volume preserving in the Liouville sense) depends on whether the sum of the exponents is negative, and particularly relevant for us, the system is chaotic and likely practically mixing if there exists at least one positive exponent (becomes hyperchaotic if there are more than one). To this end, we first recast the neural network into the dynamical systems language (see e.g., [11]). For definiteness, we assume a basic multi-layer perceptron style network. If we see any particular configuration of a layer (labelled by \([j]\in\{0,\cdots,N-1\}\)) in the deep neural network as being a point in a state variable space, with each neuron within occupying a dimension (labelled by \(\alpha\)), then the value being carried by that neuron \(y_{i}^{[j]\alpha}\), as corresponding to some input state \(y_{i}^{[0]\beta}\) (\(i\) indexes the training set), gives the \(\alpha\)th coordinate of that point in the state variable space. We can subsequently regard the layers as time steps in an evolution in this state space, where the discrete evolution is written as \[y_{i}^{[j+1]\alpha}= \tilde{f}^{[j]\alpha}\left(y_{i}^{[j]\beta}\right) \tag{1}\] \[= y_{i}^{[j]\alpha}+f^{\alpha}\left(y_{i}^{[j]\beta},\mathbf{u}^{[j]}\right)\Delta t\,, \tag{2}\] where usually \[f^{\alpha}\left(y_{i}^{[j]\beta},\mathbf{u}^{[j]}\right)=\sigma_{\beta}^{\alpha}\left(K_{\delta}^{[j]\beta}y_{i}^{[j]\delta}+\xi^{[j]\beta}\right)\,, \tag{3}\] with \(\mathbf{u}^{[j]}\) representing the weights \(K_{\delta}^{[j]\beta}\) and bias \(\xi^{[j]\alpha}\) connecting the \([j]\)th layer to the \([j+1]\)th (note these parameters become independent of the input once training is complete). The \(\sigma_{\beta}^{\alpha}\) in Eq. (3) is an activation function for the neurons that's usually of a diagonal form. The expression (1) is a more abstract representation of the dynamical process, which absorbs the layer dependent variations in the parameters \(\mathbf{u}^{[j]}\) into the \([j]\) label of \(\tilde{f}^{[j]\alpha}\). Assuming for simplicity that the width of the layers does not change, then one obtains, for each reference trajectory (e.g., starting from the input of a training set labelled by \(i\)), and an end time \(j\) (usually \(N-1\) but we keep the definition general here; we always fix the initial time at \(j=0\) however), a square matrix \[\mathbb{M}_{i\,\beta}^{[j]\alpha}\equiv\frac{\partial y_{i}^{[j]\alpha}}{\partial y_{i}^{[0]\beta}}\,. \tag{4}\]
This matrix, when acting on a perturbation vector of the initial data \(\delta y_{i}^{[0]\beta}\), yields the leading order changes in the state \(y_{i}^{[j]\alpha}\) at the \([j]\)th layer. Therefore the exponential rate of growth of the perturbation would be related to the logarithm of its eigenvalues, rescaled by the number of time steps \([j]\). However, there is no guarantee that \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) is diagonalizable or even square in the more general cases, so one instead goes to the singular values \(\mu_{i}^{[j]\alpha}\) of \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) in its singular value decomposition. The finite time Lyapunov exponents are then defined as6 Footnote 6: The singular values \(\mu_{i}^{[j]\alpha}\) can always be chosen to be positive, since multiplying a negative singular value and the corresponding left singular vector both by minus one will preserve the validity of the decomposition. \[\lambda_{i}^{[j]\alpha}\equiv\frac{\ln\mu_{i}^{[j]\alpha}}{[j]}\,. \tag{5}\] To compute such exponents, we begin with the most general expression (1), which after iterations yields \[y_{i}^{[j]\alpha}=\left(\tilde{f}^{[j-1]}\circ\cdots\circ\tilde{f}^{[0]}\right)^{\alpha}\left(y_{i}^{[0]\beta}\right)\,. \tag{6}\] So when the \(\tilde{f}^{[q]\alpha}\) are all differentiable (because of the need to back-propagate errors in neural networks, its derivatives usually exist, maybe aside from at isolated points like with the ReLU activation function), we can write \[\frac{\partial y_{i}^{[j]\alpha}}{\partial y_{i}^{[0]\beta}}=\delta^{\alpha}{}_{\gamma_{j}}\,\delta^{\gamma_{0}}{}_{\beta}\prod_{q=0}^{j-1}\tilde{f}_{\gamma_{q}}^{[q]\gamma_{q+1}}\,, \tag{7}\] where the local Jacobians are \[\tilde{f}_{\gamma_{q}}^{[q]\gamma_{q+1}}\equiv\frac{\partial\tilde{f}^{[q]\gamma_{q+1}}}{\partial y_{i}^{[q]\gamma_{q}}}\,. \tag{8}\] If each layer is identical so all the Jacobians are the same, then the method of powers for singular value decomposition suggests that \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) is essentially rank one, dominated by the largest singular value when the neural network is deep. This low rank scenario has minimal expressivity, so it is important that the Jacobians are varied. In this case, there is no simple relationship between the singular values of the individual Jacobians and the final \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\). At best, it is possible to approximately almost-diagonalize \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) (taking it to a bidiagonal form) with a Householder procedure that turns all the Jacobians upper triangular [12]. Then, the bidiagonal form is further fully diagonalized according to the "chasing the bulge" sweep of e.g. [13], using 2-D rotations that suppress the superdiagonal entries. This entire procedure is rather opaque and blends the entries of the Jacobians in nontrivial manners, therefore, without specializing to specific neural networks, we cannot make definitive statements regarding the finite time Lyapunov exponents. Nevertheless, we could perhaps make some vaguely probabilistic statements on how various aspects of the network architecture (e.g., the number of layers, the width of each layer, the rank of each Jacobian etc) would _likely_ affect the singular values of \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\).
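To make Eqs. (4)-(8) concrete, here is a small self-contained sketch of our own (not from the paper; the toy residual tanh network and all variable names are assumptions) that accumulates the layer Jacobians of Eq. (7) and extracts the finite time Lyapunov exponents of Eq. (5) from the singular values of \(\mathbb{M}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: N layers of width D with
# residual-style updates y <- y + tanh(K y + xi) * dt, as in Eqs. (1)-(3).
D, N, dt = 8, 12, 0.1
Ks = [rng.normal(scale=1.0 / np.sqrt(D), size=(D, D)) for _ in range(N)]
xis = [rng.normal(size=D) for _ in range(N)]

def forward_with_jacobians(y0):
    """Propagate y0 through all layers, accumulating M as the product of local Jacobians (Eq. 7)."""
    y, M = y0.copy(), np.eye(D)
    for K, xi in zip(Ks, xis):
        pre = K @ y + xi
        # Local Jacobian of y + tanh(K y + xi) * dt with respect to y (Eq. 8).
        J = np.eye(D) + (np.diag(1.0 - np.tanh(pre) ** 2) @ K) * dt
        M = J @ M
        y = y + np.tanh(pre) * dt
    return y, M

_, M = forward_with_jacobians(rng.normal(size=D))
mu = np.linalg.svd(M, compute_uv=False)   # singular values of M (Eq. 4)
ftle = np.log(mu) / N                     # finite time Lyapunov exponents (Eq. 5)
print("FTLEs:", np.sort(ftle)[::-1])
print("chaotic (some exponent > 0)?", np.any(ftle > 0))
```

For an actual trained network, the same accumulation could be done with automatic differentiation instead of the hand-written Jacobian used in this toy example.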
### Effect of network architectural traits #### iii.1.1 Depth of network vs. activation function A major hurdle in applying ergodic theory to deep neural networks is that the concepts of the former are defined in the infinite evolution time limit, or in other words asymptotically, while the actual networks tend to be of finite depth. For the same local spectral characteristics, the ergodicity (or transitivity in the language of topological dynamics) and mixing properties therefore manifest more fully in deeper networks. As a result, greater depth is desirable for networks whose individual layers don't tend to push nearby trajectories apart, i.e., for dynamics that are on the regular side, meaning barely ergodic and far from mixing. On the other hand, for networks whose individual-layer-driven local dynamics have a strong tendency to cause neighboring trajectories to diverge, or in other words, a system that is highly chaotic and deeply in the mixing regime asymptotically, shallower networks are beneficial, as transitivity may have had time to transpire while mixing hasn't (viewing these concepts through the lens of losing dependence on initial data, then mixing is a more complete amnesia that tends to happen further into the evolution than ergodicity, which is only partial loss of memory). With the prevailing network designs, because the activation functions (some of which are really just distributions) are more or less binary (switching between active or dormant states) by definition, the outputs of each neuron for nearby trajectories that happen to lay on either side of the threshold tend to be very different7. So we usually have the latter case, thus shallower networks may often be desirable (see e.g., [14]). However, the extent to which this is true is dependent on the activation functions, which are supposed to bring in nonlinearity, thus should naturally relate to how chaotic the network evolution is. More specifically: Footnote 7: This variability against initial data is a signature of the nonlinearity introduced by the activation functions. A truly linear activation function will lead to a constant \(\mathbb{M}\) matrix, and the entire network becomes a linear regression, which may not be able to fit the training data (e.g., when two inputs in the training set are related by a simple \(1/2\) rescaling, but their corresponding outputs do not differ by the same rescaling). On an intuitive level, such variability across finite and not infinitesimal shifts in initial data may also be conducive to generating the ergodic-but-not-mixing behavior discussed in footnote 4, because different bundles of trajectories experiencing different local Jacobians tend to drive the bundles, but not necessarily the individual trajectories within each bundle, to move apart. * One category consists of the binary step, sigmoid, or tanh activation functions, which all have small derivatives far from the transition region, but a large derivative within. As a result, their Jacobians contain a delta-function style spike that, on occasion, contributes a very large Frobenius norm to \(\bar{J}\) (we henceforth suppress unimportant indices for brevity) and subsequently \(\mathbb{M}\) (sans chance cancellations). Because the Frobenius norm of \(\mathbb{M}\) is the L2 norm of its singular values, this then implies large singular values, and thus large (more positive) finite time Lyapunov exponents, and consequently deeper protrusion into the mixing regime.
In summary, if one would like to adopt these \(S\)-shaped activation functions, shallower networks are needed if one sets the transition region in these functions to be very narrow. * The recently more popular ReLU, ELU or swish activation functions behave very differently, as there is no spike, but only an (actual or almost) discontinuity in the Jacobians, so there is no large Frobenius norm at the individual \(\bar{J}\) level, just jumps in their entries when the state of the relevant layer changes. Once the local Jacobians multiply into the overall \(\mathbb{M}\), the entries in that matrix end up jumping frequently (multiplication of many Heaviside functions jumping at different values). Therefore, when we scan across all possibilities (varying \(i\)), the \(\mathbb{M}_{i}\) matrix changes often and \(\lambda_{i}\) tends to explore large ranges, thus has a high probability of hitting large positive values. This effect is likely less pronounced than that of the previous item, where large \(\lambda_{i}\) values are more definitively hit (one can alternatively think of these call-option-payoff shaped functions as being less binary, so nearby trajectories don't elicit very different activation function outcomes, implying less chaotic propagations), so we predict that ReLU family functions would be more suitable for networks of greater depth. #### iii.1.2 Width of layers vs. connectivity The width \(D\) of the layers of a neural network (number of neurons in each layer, assumed to be constant for the present discussion for brevity) corresponds to the dimension of the state space of the dynamical system, and \(\mathbb{M}\) is a \(D\times D\) matrix. How the finite time Lyapunov exponents vary with \(D\) is heavily dependent on how the connection matrix \(K_{\beta}^{\alpha}\) in Eq. (3), between neurons in adjacent layers, changes when we scale up the state space dimension: * First consider increasing \(D\) without changing the connectivity between the layers (i.e., the percentage of neurons in the next layer that a particular neuron is connected to, or in other words the percentage of non-vanishing entries in each row of \(K_{\beta}^{\alpha}\)) or the coupling strength (the typical size of the entries in the weighting matrix \(K\)); then the sparsity of \(\mathbb{M}\) and the typical amplitude of entries in it won't change, resulting in the Frobenius norm scaling as \(D^{1}\) since the number of these entries grows as \(D^{2}\). The number of singular values on the other hand only grows as \(D^{1}\), so the average singular value would have to scale as \(\propto\sqrt{D}\). In other words, without changing other features like connectivity and weight ranges etc, a wider neural network tends to possess more positive finite time Lyapunov exponents, thus is more likely to stray into the undesirable deep mixing regime. Therefore, even without heeding limitations on computational resources, wider networks should be made shallower. * When we renormalize the coupling strengths so the weighting matrix becomes more like a probability with weights summing up to a constant (e.g., \(\sum_{\alpha}K_{\beta}^{\alpha}=1\)), the situation changes. The Frobenius norm now scales as \(D^{0}\), and the average singular value now must scale as \(1/\sqrt{D}\), so the dynamics become less chaotic as the width of the layers increases. With this approach, one could thus simultaneously increase both the width and the depth of the network, without degrading performance.
Although there doesn't seem to be a strong incentive to do so, given the higher drain on computational resources. * Another way to curtail chaos while increasing \(D\) is to make the neurons in the wider network more sparsely connected (setting a higher proportion of entries in each row of \(K\) to zero). A particularly interesting physical interpretation that is relevant for a subset of such a strategy is related to path dependence, which by definition has a tendency to help retain reliance on initial data, thereby reducing mixing. Given a dynamical system, path dependence can be folded into a particular type of higher dimensionality, with many new auxiliary state variables that only depend on one other variable, since they are supposed to just passively record the past states of that variable and carry them forward without modifications. For example, if \(y^{[j+1]\alpha}\) depend not only on \(y^{[j]\beta}\) but also on \(y^{[j-1]\gamma}\), then we can define an additional set of variables \(x^{[j]\beta}\) and simply let them be updated by \(x^{[j]\alpha}=y^{[j-1]\alpha}\) (note each \(x\) node only depends on one \(y\) node in the previous step). This way, the \(j+1\)th step of the \(2D\)-dimensional \(y\oplus x\) combined system depends only on its step \(j\) state, and path dependence is formally removed so a dimensionally expanded version of Eq. (1) remains valid. Mapping such a dynamical system into a neural network, we see that the width of the network can be increased in order to simulate the effect of path dependence, so long as the newly added dimensions don't link up with too many of the existing ones. This could explain why pruning networks sometimes helps when the network is otherwise mixing (suffers from e.g., overfitting problems). ## III Conclusion In this brief note, we advocated for the enlistment of ergodic theory to help intuit behaviors of deep neural networks. In particular, we argued that a highly effective deep neural network would likely operate on the edge of chaos. This is the easiest to intuit in the case of a classification problem. The corresponding dynamical system evolution of the neural network rearranges the data into orderly and well-segregated tiles (the specifics of the tiling depend on the hypothesis function choice), each representing a class, and so given any input data to be processed, the output will land in one of them. For this to work, we need the flow to be sufficiently flexible to contort any _contiguous_ (the problem needs to be interpolatable for training to be useful) but possibly exotic-looking initial shape representing a class in the initial data space, into a regular simple hypercubic tile (assuming the simplest threshold-based hypothesis function). This means we would need (quasi-)ergodicity, so any initial data point within that shape can eventually arrive at the desired tile site (tiles are larger than infinitesimal neighbourhoods of points, thus strictly speaking we don't need full ergodicity, hence the prefix "quasi"). On the other hand, we must also avoid the more strongly chaotic mixing behavior, otherwise the class shape being convected by the flow will be shredded and thoroughly mixed up with sibling shapes, making clean well-segregated tiles in output space impossible. In other words, we need moderation in the wildness of the dynamical system trajectories, and sitting just on the cusp of the chaotic regime seems ideal.
We then discussed some network architectural traits that may be advantageous in terms of finding such an ergodic-but-not-mixing niche. For our generic deliberation, we have stayed with qualitative properties and intuitive guidelines. However, after the implementation and training of an actual neural network, it should not be prohibitively difficult to compute the actual numerical values of the finite time Lyapunov exponents, as there exist mature and efficient numerical routines for computing singular values. The largest values of these could then serve as a quality control indicator that informs on whether the result of the training is suitable for interpolation and extrapolation. This is potentially an important application, as the tunable parameters of a neural network are so very numerous, giving it the ability to always fit to any training data we present to it, but real life applications require the trained network to respond to new situations in a moderate and controlled manner, rather than jerk around wildly (i.e., we want to avoid the overfitting problem). Previous approaches to ensuring that this is the case are largely empirical, by testing the trained network against additional datasets not included in the training stage. This wastes precious labelled data, and one can never be sure these tests properly cover all plausible new situations. The finite time Lyapunov exponents and their ergodic theory significances could possibly provide a valuable alternative. One could of course also compute the finite time Lyapunov exponents for the purpose of debugging. For example, if it turns out that the trained network lacks expressivity, then one can imagine that the problem might be that the sum \(\sum_{\alpha}\lambda_{i}^{[N-1]\alpha}\) is too negative, so the dynamics ends up being highly dissipative and collapses onto a fixed point or an attractor, occupying only a corner of the state space, thereby preventing the network, as an approximator function, from taking up a chunk of the codomain. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China grants 12073005, 12021003.
2303.16113
Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes
Due to mutual interference between users, power allocation problems in wireless networks are often non-convex and computationally challenging. Graph neural networks (GNNs) have recently emerged as a promising approach to tackling these problems and an approach that exploits the underlying topology of wireless networks. In this paper, we propose a novel graph representation method for wireless networks that include full-duplex (FD) nodes. We then design a corresponding FD Graph Neural Network (F-GNN) with the aim of allocating transmit powers to maximise the network throughput. Our results show that our F-GNN achieves state-of-the-art performance with significantly less computation time. Besides, F-GNN offers an excellent trade-off between performance and complexity compared to classical approaches. We further refine this trade-off by introducing a distance-based threshold for inclusion or exclusion of edges in the network. We show that an appropriately chosen threshold reduces required training time by roughly 20% with a relatively minor loss in performance.
Lili Chen, Jingge Zhu, Jamie Evans
2023-03-27T10:59:09Z
http://arxiv.org/abs/2303.16113v2
# Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes ###### Abstract Due to mutual interference between users, power allocation problems in wireless networks are often non-convex and computationally challenging. Graph neural networks (GNNs) have recently emerged as a promising approach to tackling these problems and an approach that exploits the underlying topology of wireless networks. In this paper, we propose a novel graph representation method for wireless networks that include full-duplex (FD) nodes. We then design a corresponding FD Graph Neural Network (F-GNN) with the aim of allocating transmit powers to maximise the network throughput. Our results show that our F-GNN achieves state-of-the-art performance with significantly less computation time. Besides, F-GNN offers an excellent trade-off between performance and complexity compared to classical approaches. We further refine this trade-off by introducing a distance-based threshold for inclusion or exclusion of edges in the network. We show that an appropriately chosen threshold reduces required training time by roughly \(20\%\) with a relatively minor loss in performance. Power allocation, Graph neural network, Full-duplex transmission, Wireless network ## I Introduction Power allocation is crucial to the performance of wireless communications networks, especially under time-varying channel conditions. However, power allocation is often a non-convex problem due to the interference between channels. For the sum rate maximisation problem, a number of classical approaches have been proposed in the literature (see for example [1, 2]). Nevertheless, they are computationally intensive in large wireless networks and thus inappropriate for practical implementation [3]. More recently, there has been significant interest in deep learning-based approaches for solving power allocation problems. For example, multi-layer perceptrons (MLPs), which were inherited from image identification tasks, are now widely used for power allocation in wireless communication [3]. However, as the network size increases, the performance of these algorithms degrades dramatically [4]. On the one hand, it is computationally expensive to train MLPs with high-dimensional data. On the other hand, this architecture may fail to exploit the graph structure of wireless networks. To remedy these drawbacks, several researchers have applied graph neural networks (GNNs) to the power allocation problem due to their ability to exploit the network structure. Besides, some researchers have also proved that GNNs may perform better than MLPs when it comes to graph-structured data [5]. Message-passing graph neural networks (MPGNNs) are proposed in [4] to find the optimal power allocation with unsupervised learning. Comprehensive simulations demonstrate that the MPGNNs have a similar performance to the weighted sum MSE minimization (WMMSE) algorithm [2] with less computational complexity, and that the model has good generalisation capacities. Heterogeneous GNNs with a novel graph representation are proposed in [6] and [7] to allocate power in device-to-device (D2D) and cell-free massive Multiple-Input-Multiple-Output (MIMO) networks, respectively. In most cases, the power allocation problem is formulated in the context of half-duplex (HD) transmission. However, as a promising technique with the potential of doubling the capacity compared to conventional HD transmission [8], full-duplex (FD) transmission has drawn much attention recently.
To the best of our knowledge, current GNN-based methods in power allocation only focus on HD transmission, and only a few papers focus on FD transmission, such as [7]. However, the authors preprocess the node features to guarantee the scalability of the GNN, which introduces additional non-negligible computational time. To address this issue, we propose a novel graph representation method for FD transmission in wireless networks. The node features can be directly fed into the GNN without loss of scalability. Then, we devise the corresponding full-duplex Graph Neural Network (F-GNN) to optimise the power allocation in D2D networks with FD nodes. We conduct extensive experiments to evaluate the effectiveness of the proposed graph representation and F-GNN. Simulation results show that F-GNN slightly outperforms the advanced optimisation-based WMMSE algorithm with much lower time complexity. Intuitively, unpaired transmitters and receivers in D2D networks are more likely to be far apart. Since the interference will decay with distance, we expect that removing edges between nodes that are distant should further reduce the computational complexity without impacting the performance significantly. In [4], the authors assume the channel state is used in GNNs when the distance is within a specific threshold to reduce the training overload. Similarly, the authors in [9] apply the distance-based threshold to alleviate the negligible interference between unpaired transmitters and receivers. However, the threshold is usually randomly chosen without further justification. In this paper, we investigate the impact of distance-based thresholds on both performance and time complexity. Lastly, we provide an expression for time complexity in terms of the threshold. ## II Preliminaries ### _System Model_ We consider the power allocation problem in a single-hop D2D communication network with \(K\) users, where multiple single-antenna devices share the same spectrum. Here, we assume the first \(T_{1}\) devices can use the wireless FD transmission with different frequency bands, while the other devices are on HD transmission. We denote the index set for FD communication devices by \(\mathcal{T}_{1}=\{1,2,...,T_{1}\}\) and the index set for HD communication devices as transmitter by \(\mathcal{T}_{2}=\{T_{1}+1,T_{1}+2,...,T_{1}+T_{2}\}\). For each \(k\) in \(\mathcal{T}_{1}\) or \(\mathcal{T}_{2}\), we define \(I(k)\) to be the index of the intended receiver. The received signal at the \(I(k)\)-th receiver for any \(k\in\mathcal{T}_{1}\) with FD transmission is given by \[y_{I(k)}=h_{k,I(k)}s_{k}+\sum_{j\in\mathcal{T}_{1}\cup\mathcal{T}_{2}\setminus\{k\}}h_{j,I(k)}s_{j}+n_{I(k)}+z_{I(k)},k\in\mathcal{T}_{1}, \tag{1}\] where \(h_{k,I(k)}\in\mathbb{C}\) represents the communication channel between the \(k\)-th transmitter and its intended \(I(k)\)-th receiver, and \(h_{j,I(k)}\in\mathbb{C}\) represents the interference channel between the \(j\)-th transmitter and the \(I(k)\)-th receiver. We denote \(s_{k}\in\mathbb{C}\) as the data symbol for the \(k\)-th transmitter, and \(n_{I(k)}\sim\mathcal{CN}\left(0,\sigma_{I(k)}^{2}\right)\) and \(z_{I(k)}\sim\mathcal{CN}\left(0,\gamma_{I(k)}^{2}\right)\) as the additive Gaussian noise and self-interference for the \(I(k)\)-th receiver, respectively.
The received signal at the \(I(k)\)-th receiver for any \(k\in\mathcal{T}_{2}\) with HD transmission is given by \[y_{I(k)}=h_{k,I(k)}s_{k}+\sum_{j\in\mathcal{T}_{1}\cup\mathcal{T}_{2}\setminus\{k\}}h_{j,I(k)}s_{j}+n_{I(k)},k\in\mathcal{T}_{2}, \tag{2}\] here, we assume each transmitter, whether in FD or HD mode, has only one intended receiver. The signal-to-interference-plus-noise ratio (SINR) for the \(I(k)\)-th receiver with FD transmission is given by \[\mathrm{SINR}_{I(k)}=\frac{\left|h_{k,I(k)}\right|^{2}p_{k}}{\sum_{j\in\mathcal{T}_{1}\cup\mathcal{T}_{2}\setminus\{k\}}\left|h_{j,I(k)}\right|^{2}p_{j}+\sigma_{I(k)}^{2}+\gamma_{I(k)}^{2}},k\in\mathcal{T}_{1}, \tag{3}\] where \(p_{k}\!=\!\mathbb{E}[|s_{k}|^{2}]\) is the power of the \(k\)-th transmitter, and we have the constraints \(0\!\leq\!p_{k}\!\leq\!P_{\max}\), where \(P_{\max}\) is the maximum power constraint for transmitters. The SINR for the \(I(k)\)-th receiver with HD transmission is given by \[\mathrm{SINR}_{I(k)}\!=\!\frac{\big{|}h_{k,I(k)}\big{|}^{2}p_{k}}{\sum_{j\in\mathcal{T}_{1}\cup\mathcal{T}_{2}\backslash\{k\}}\big{|}h_{j,I(k)}\big{|}^{2}p_{j}\!+\!\sigma_{I(k)}^{2}},k\!\in\!\mathcal{T}_{2}. \tag{4}\] We denote \(\mathbf{p}\!=\![p_{1},\cdots,p_{K}]\) as the power allocation vector. For a given power allocation vector \(\mathbf{p}\) and channel information \(\{h_{ij}\}_{i\in\mathcal{T}_{1}\cup\mathcal{T}_{2},j\in I(i)}\), the achievable rate \(\mathcal{R}_{I(k)}\) of the \(I(k)\)-th receiver is given by \[\mathcal{R}_{I(k)}(\mathbf{p})\!=\!\log_{2}\!\big{(}1\!+\!\mathrm{SINR}_{I(k)}\big{)},\quad k\!\in\!\mathcal{T}_{1}\cup\mathcal{T}_{2}. \tag{5}\] We assume that the channel coefficients are fixed in each time slot; thus the output of the power allocation algorithm is \(\mathbf{p}\), which is computed from the channel information. The objective is to maximise the performance under maximum power constraints, which is formulated as \[\begin{array}{ll}\underset{\mathbf{p}}{\mathrm{maximise}}&\sum_{k}w_{k}\mathcal{R}_{I(k)}(\mathbf{p}),\\ \text{subject to}&0\!\leq\!p_{k}\!\leq\!P_{\max},\forall k\!\in\!\mathcal{T}_{1}\cup\mathcal{T}_{2},\end{array} \tag{6}\] where \(w_{k}\) is the weight for the \(k\)-th transmitter. ### _Self-interference Model_ It is impossible to completely eliminate self-interference in practical situations, despite the advanced self-interference mitigation techniques applied. Therefore, we assume the self-interference is zero-mean Gaussian and model its variance, caused by imperfect mitigation, as [10] \[\gamma^{2}\!=\!\eta p^{\lambda}, \tag{7}\] where \(p\) is the transmitted power in FD transmission, and \(\eta\) and \(\lambda\!\in\![0,1]\) are parameters that indicate the attribute of the mitigation technique [11]. The smaller \(\eta\) and \(\lambda\) are, the better the self-interference mitigation is. In this paper, we consider three self-interference parameters \(\lambda\!\in\!\{0.1,\!0.5,\!1\}\) which represent different levels of mitigation. ## III Graph-Based Neural Network Design for Power Allocation In this section, we propose a general framework based on a novel graph representation for wireless networks with full duplex nodes to maximise the target function in (6). ### _Graph Representation_ In our problem, we can model transmitters and receivers in wireless networks as vertices and the interference between them as edges; see Fig. 1 for example.
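Before moving on to the graph construction, the following minimal sketch (our own, not the authors' code; the random channel draw, array shapes and all names are assumptions) illustrates the FD/HD SINR computations of Eqs. (3)-(4) and the weighted sum-rate objective of Eq. (6), whose negative also serves as the unsupervised training loss used later:

```python
import numpy as np

rng = np.random.default_rng(0)
K, T1 = 6, 3                      # total links; the first T1 links are full-duplex
Pmax, sigma2 = 1.0, 0.1
eta, lam = 0.01, 0.5              # self-interference model of Eq. (7)

H = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))  # H[j, k]: channel from Tx j to Rx of link k
w = np.ones(K)                    # per-link weights
p = np.full(K, Pmax)              # candidate power allocation

def weighted_sum_rate(p, H, w):
    """Weighted sum rate of Eq. (6), with SINRs computed per Eqs. (3)-(4)."""
    g = np.abs(H) ** 2
    rates = np.empty(K)
    for k in range(K):
        interference = g[:, k] @ p - g[k, k] * p[k]   # sum over j != k
        noise = sigma2
        if k < T1:                                    # FD link: add self-interference (Eq. 7)
            noise += eta * p[k] ** lam
        sinr = g[k, k] * p[k] / (interference + noise)
        rates[k] = np.log2(1.0 + sinr)
    return w @ rates

print("weighted sum rate:", weighted_sum_rate(p, H, w))
# Unsupervised training would minimise the negative of this quantity.
```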
Since there exists self-interference between transmitters and receivers within the FD transmission, we add a self-loop to these vertices to model such an effect. In the proposed framework, the vertices can operate in HD or FD transmission. In order to distinguish the working modes of the vertices, we propose a novel graph representation method as follows. In the first step, we adopt a similar idea as in [12] by duplicating the vertices working in the FD transmission, and we treat one vertex as the transmitter and the other as the receiver. By doing so, all vertices are equivalently converted to HD transmission mode. Then, we can model the self-interference channel as edges between these two vertices. In the weighted sum-rate maximisation problem, the channels between transmitters and receivers can be generally categorised into communication and interference channels. Since these two quantities play different roles in our objective function (6), it is better to distinguish them in graph modelling. To this end, in the second step, we construct a new graph as shown in Fig. 2 by aggregating the transmitter and receiver pair as a vertex, and the self-interference channel and the interference channels between two vertices as an SI edge and an edge, respectively. Quantities such as communication channel information, direct distance, self-interference model information and weights can be regarded as vertex features, while the interference channel information and interfering distance can be regarded as edge features. Fig. 1: D2D Communication Network with full-duplex transmission when \(K\!=\!6\). Fig. 2: Graphical model of Fig. 1. Let \(\mathcal{V}\) and \(\mathcal{E}\) denote the set of vertices and edges of a graph \(G\), respectively. The set of the neighbours of \(v\in\mathcal{V}\) is defined as \(\mathcal{N}(v)\). Let \(V_{v}\) and \(E_{v,u}\) represent vertex features of vertex \(v\) and edge features between vertex \(v\) and vertex \(u\), respectively. With these definitions in place, we define the vertex features of the vertices to be \[V_{v}=\begin{cases}\{h_{v,I(v)},\,w_{v},\,d_{S}\}&\text{if }v\in\mathcal{T}_{1},\\ \{h_{v,I(v)},\,w_{v},\,d_{v,I(v)}\}&\text{if }v\in\mathcal{T}_{2},\end{cases} \tag{8}\] where \(h_{v,u}\in\mathbb{C}\) and \(d_{v,u}\in\mathbb{R}\) are the channel coefficient and distance between the \(v\)-th transmitter and the \(u\)-th receiver, \(w_{v}\) is the weight for the \(v\)-th transmitter, and \(d_{S}\in\mathbb{R}\) is the distance between interfering antennas in full-duplex nodes. We define the edge features to be \[E_{v,u}=\begin{cases}\{z_{v},\,z_{u},\,d_{S},\,d_{S}\}&\text{if }v,u\in\mathcal{T}_{1},\,u=I(v),\\ \{h_{u,I(v)},\,h_{v,I(u)},\,d_{u,I(v)},\,d_{v,I(u)}\}&\text{otherwise},\end{cases} \tag{9}\] where \(z_{v}\in\mathbb{C}\) is the self-interference for the \(v\)-th receiver.
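As a concrete reading of Eqs. (8)-(9), here is a small sketch of our own (hypothetical names; channels and distances are random placeholders, and we take \(I(v)=v\) for simplicity) that assembles the vertex and edge feature arrays for a toy network:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T1 = 6, 3            # links; links 0..T1-1 operate in full duplex
d_S = 0.4               # antenna separation inside an FD node (40 cm)

# Placeholder geometry and channels standing in for the system model.
d = rng.uniform(2.0, 10.0, size=(K, K))   # d[v, u]: distance Tx v -> Rx u
h = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
w = np.ones(K)

# Vertex features, Eq. (8) (with I(v) = v): FD vertices carry d_S,
# HD vertices carry the direct Tx-Rx distance.
V = np.array([[abs(h[v, v]), w[v], d_S if v < T1 else d[v, v]] for v in range(K)])

# Edge features for unpaired links, the "otherwise" branch of Eq. (9);
# the SI branch would analogously attach (z_v, z_u, d_S, d_S) to FD pairs.
E = {(v, u): [abs(h[u, v]), abs(h[v, u]), d[u, v], d[v, u]]
     for v in range(K) for u in range(K) if v != u}

print("vertex features:", V.shape, "| edges:", len(E))
```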
### _Graph Neural Networks_ GNNs were first proposed to process graph-based data since they can exploit the graph structure of data to extract useful information. Specifically, a vertex updates its embedding feature vector by using the information from its previous layer and the aggregated information from its neighbours. Mathematically, the update rule of the \(l\)-th layer at vertex \(v\) in GNNs is given as follows [13]: \[\begin{split}&\alpha_{v}^{(l)}\!=\text{AGGREGATE}^{(l)}\big{(}\big{\{}m_{u}^{(l-1)}\!:\!u\!\in\!\mathcal{N}(v)\big{\}}\big{)},\\ & m_{v}^{(l)}\!=\text{COMBINE}^{(l)}\big{(}m_{v}^{(l-1)},\!\alpha_{v}^{(l)}\big{)},\end{split} \tag{10}\] where \(\alpha_{v}^{(l)}\) is the feature vector aggregated by vertex \(v\) from its neighbours and \(m_{v}^{(l)}\) is the embedding feature vector of vertex \(v\) at the \(l\)-th layer. AGGREGATE\({}^{(l)}\) and COMBINE\({}^{(l)}\) are two functions defined by the user of the GNN. Owing to the universal approximation capability of MLPs [14], we will adopt MLPs both for aggregating information from a local graph-structured neighbourhood and for combining a vertex's own features with the aggregated information. Besides, the aggregation step is expected to retain the permutation invariance property, i.e., the aggregated information is invariant to different input orders of neighbour vertices. We can achieve this by using a permutation-invariant function after the MLP, such as sum or max operations, to combine the set of aggregated information from the neighbours into a single vector. Here, we choose sum as it can preserve all the information. The updating rule of the proposed GNN at the \(l\)-th layer is given by \[\begin{split}&\alpha_{v}^{(l)}\!=\!\text{SUM}\big{(}\big{\{}f_{A}\big{(}m_{u}^{(l-1)},\!E_{vu}\big{)},\!\forall u\!\in\!\mathcal{N}(v)\big{\}}\big{)},\\ & p_{v}^{(l)}\!=\!f_{C}\big{(}\alpha_{v}^{(l)},\!m_{v}^{(l-1)}\big{)},\end{split} \tag{11}\] where \(\alpha_{v}^{(l)}\) represents the information aggregated by vertex \(v\) from its neighbours, \(m_{v}^{(l)}\!=\!\{V_{v},\!p_{v}^{(l)}\}\) represents the embedding feature vector of vertex \(v\), \(p_{v}^{(l)}\) represents the allocated power for vertex \(v\), and \(f_{A}\), \(f_{C}\) are two 3-layer fully connected neural networks with hidden sizes \(\{8,\!16,\!32\}\) and \(\{36,\!16,\!1\}\), respectively. Here, we initialise the power \(p_{v}^{(0)}\!=\!P_{\max}\). An illustration of the F-GNN structure is shown in Fig. 3. ## IV Experiments and Results In this section, we provide simulation results for three types of self-interference models to demonstrate the benefits of the proposed F-GNN architecture. ### _Simulation Setup_ We consider a channel with large-scale fading and Rayleigh fading as in [15]. For the system setup, the channel state information (CSI) is formulated as \(h_{v,u}\!=\!\sqrt{\frac{1}{1+d_{v,u}^{2}}}r_{v,u}\), where \(r_{v,u}\sim\mathcal{CN}(0,\!1)\) and \(d_{v,u}\) is the distance between the \(v\)-th transmitter and the \(u\)-th receiver. Here, we consider \(K\) users within a \(100\!\times\!100\)\(m^{2}\) area. The transmitters are placed in this area randomly while each receiver is placed randomly within 2\(m\) to 10\(m\) away from the corresponding transmitter. Due to space limitations, this paper mainly focuses on the case where the total number of users with FD transmission is \(T_{1}\!=\!0.5K\). However, our proposed algorithm can also be generalised to other scenarios. Since \(\eta\) in (7) has negligible influence on the normalised performance in our experiments, we set \(\eta\) to 0.01 and the distance between interfering antennas in an FD vertex to 40\(cm\) [11]. To achieve a good power allocation strategy, we choose the negative sum rate to be the objective loss function, which can be minimised by the back-propagation algorithm [15].
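The update rule (11) could be implemented along the following lines (a hedged PyTorch-style sketch of our own, not the authors' released code; the hidden sizes \(\{8,16,32\}\) and \(\{36,16,1\}\) follow the text, while the feature dimensions, the sigmoid used to enforce the power constraint, and our reading of the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no activation after the last layer

class FGNNLayer(nn.Module):
    """One layer of Eq. (11): sum-aggregate f_A over incoming edges, then combine with f_C."""
    def __init__(self, node_dim=3, edge_dim=4, p_dim=1):
        super().__init__()
        m_dim = node_dim + p_dim                     # m_v = {V_v, p_v}
        self.f_A = mlp([m_dim + edge_dim, 8, 16, 32])
        self.f_C = mlp([32 + m_dim, 36, 16, 1])

    def forward(self, V, p, edge_index, edge_attr, p_max=1.0):
        # V: [K, node_dim], p: [K, 1], edge_index: [2, E] (src, dst), edge_attr: [E, edge_dim]
        m = torch.cat([V, p], dim=-1)                # embedding feature vectors
        src, dst = edge_index
        msgs = self.f_A(torch.cat([m[src], edge_attr], dim=-1))               # f_A(m_u, E_vu)
        agg = torch.zeros(V.size(0), msgs.size(-1)).index_add_(0, dst, msgs)  # SUM over neighbours
        p_new = self.f_C(torch.cat([agg, m], dim=-1))
        return torch.sigmoid(p_new) * p_max          # our assumption: squash to enforce 0 <= p <= P_max

# Toy usage: K nodes on a fully connected directed graph, p initialised to P_max.
K = 6
edge_index = torch.tensor([[u, v] for u in range(K) for v in range(K) if u != v]).t()
layer = FGNNLayer()
p = layer(torch.randn(K, 3), torch.full((K, 1), 1.0), edge_index, torch.randn(edge_index.size(1), 4))
```

Stacking a few such layers and feeding the final \(p\) into the negative sum-rate loss would complete the unsupervised training loop described in the text.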
In particular, the loss function is expressed in (12), \[\ell(\theta)\!=\!-\hat{\mathbb{E}}\left[\sum_{k}w_{k}\mathcal{R}_{I(k)}\big{(}\mathbf{p}(\theta)\big{)}\right], \tag{12}\] where \(\theta\) represents the learnable parameters of the GNN, and \(p_{i}(\theta)\) denotes the power allocation generated by F-GNN. \(\boldsymbol{H}\!=\![\boldsymbol{h}_{1},\!\cdots\!,\!\boldsymbol{h}_{K}]^{T}\) is the channel matrix, where \(\boldsymbol{h}_{i}\!=\![h_{i1},\!\cdots\!,\!h_{iK}],i\!=\!1,\!\cdots,\!K\). We use \(\hat{\mathbb{E}}\) to denote the expectation with respect to the empirical distribution of the channel samples. In practice, we generate 10000 training samples for calculating the empirical loss, and we also generate 1000 testing samples for evaluation. We assume that full CSI is available to the algorithms. However, our proposed algorithm only needs partial CSI (a subset of full CSI) to achieve reasonably good performance, as will be discussed in Section V. We use the Adam optimiser [16] with a learning rate of 0.005 in training. Fig. 3: The structure of the proposed F-GNN. To validate the effectiveness of F-GNN, we compare it with the following two algorithms: * WMMSE [2]: This is the advanced optimisation-based algorithm for power allocation in wireless networks; see also [3, 4, 15] for references. * Baseline: We allocate the maximum power \(P_{\text{max}}\) to the \(L\) pairs which have the \(L\) largest communication channel gains among all pairs, while the rest are set to 0. Here we consider \(L\!=\!0.5K\). ### _Performance Comparison_ First, we set \(w_{k}\!=\!1\) in (6), so the objective function becomes the typical sum rate maximisation problem. The performance result under different self-interference models is shown in Table I. From this, we observe that the proposed F-GNN achieves similar performance under different \(\lambda\), which indicates that the model is robust to different levels of self-interference. This observation also holds even if we have a larger number of users. For simplicity, we only consider the case with \(\lambda\!=\!0.5\) for the rest of the experiments unless specified. Now, we let \(w_{k}\) be randomly drawn from \(\mathcal{U}(0,\!1)\) and the experimental result is shown in Fig. 4. It can be seen that F-GNN can achieve similar performance to WMMSE. We can also observe that the performance gap increases as \(K\) increases. This is possibly because F-GNN can extract the underlying graph structure better and thus will be more useful when dealing with complicated networks. Besides, this will also induce good generalisation capacities, as can be seen in Section IV-C.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(K\!=\!70\) & \(K\!=\!100\) & \(K\!=\!120\) & \(K\!=\!150\) \\ \hline GNN & \(100.8\%\) & \(100.8\%\) & \(100.7\%\) & \(100.6\%\) \\ \hline \end{tabular} \end{table} TABLE II: Average sum rate under the full-duplex network. The sum rate is normalised by the performance from WMMSE for each \(K\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(\lambda\!=\!0.1\) & \(\lambda\!=\!0.5\) & \(\lambda\!=\!1\) \\ \hline \(K\!=\!30\) & \(100.4\%\) & \(100.1\%\) & \(100.7\%\) \\ \hline \(K\!=\!50\) & \(100.8\%\) & \(100.8\%\) & \(100.7\%\) \\ \hline \(K\!=\!70\) & \(100.7\%\) & \(100.8\%\) & \(100.9\%\) \\ \hline \(K\!=\!120\) & \(100.9\%\) & \(100.9\%\) & \(101.1\%\) \\ \hline \(K\!=\!150\) & \(101.1\%\) & \(101.2\%\) & \(101.2\%\) \\ \hline \end{tabular} \end{table} TABLE I: Average sum rate that is normalised by the performance from WMMSE under different self-interference models. Fig. 4: Average weighted sum rate under the full-duplex network. We observed that F-GNN is notably faster than WMMSE. As \(K\) increases, F-GNN becomes more efficient compared to WMMSE. ## V Threshold In Section IV, we assumed the networks are fully connected. However, as the interference decays as the distance increases, we conduct similar experiments but exclude edges whose distances are greater than a specific threshold of \(t\) meters, and investigate the impact of the threshold on both sum-rate performance and time complexity. In particular, we derive the explicit relationship between the threshold and the expected time complexity. To provide a performance upper bound, we conduct the WMMSE with the untruncated networks. ### _Performance and Training Time_ We apply the same threshold to both the training and test set for edge truncation. Here we generate 14 different training and test sets, each with a different \(t\) ranging from 10 to 140\(m\). The performance of different thresholds under \(K\!=\!50\) is shown in Table IV. When the threshold increases, fewer edges will be truncated and the graph will maintain more information from the original graph. Training with a graph closer to the original one will likely give better performance but will have longer training time as more edges are included. Hence, there is a trade-off between performance and time complexity. We also noticed that both the performance and time complexity become stable after \(t\!=\!100m\). We found through experiments that only around \(3\%\) of the total edges lie between \(t\!=\!100m\) and \(t\!=\!140m\), and they have a negligible effect. Therefore, we only consider \(t\) between 10 and 100\(m\) in the next section. \[F(z)\!=\!\begin{cases}-\frac{8}{3a^{3}}z^{\frac{3}{2}}\!+\!\frac{\pi z}{a^{2}}\!+\!\frac{z^{2}}{2a^{4}}&\text{if }\,0\!<\!z\!\leqslant\!a^{2}\\ \frac{1}{3}\!-\!\frac{2+\pi}{a^{2}}z\!+\!\frac{4}{a^{2}}\big{(}\arcsin\Big{(}\frac{a}{\sqrt{z}}\Big{)}z\!+\!a\sqrt{z-a^{2}}\big{)}\!+\!\frac{8}{3a^{3}}(z-a^{2})^{\frac{3}{2}}\!-\!\frac{z^{2}}{2a^{4}}&\text{if }\,a^{2}\!<\!z\!\leqslant\!2a^{2}\end{cases} \tag{13}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(K\!=\!50\) & \(K\!=\!100\) & \(K\!=\!500\) & \(K\!=\!1000\) \\ \hline GNN & \(0.92\) & \(1.36\) & \(10.39\) & \(41.05\) \\ \hline WMMSE & \(65.63\) & \(260.13\) & \(5997.23\) & \(24430.73\) \\ \hline \end{tabular} \end{table} TABLE III: Average testing time in milliseconds for F-GNN and WMMSE under different settings. ### _Performance Comparison_ We further investigate the effect of the threshold by applying it to different \(K\).
The performance comparisons are shown in Fig. 5, where the results for each \(K\) are normalised over the sum rate achieved by WMMSE. Overall, our model can achieve reasonably high performance (e.g., 95\(\%\) of the sum rate achieved by WMMSE) when the threshold is greater than 30\(m\). Given a specific \(K\), we define the "ideal threshold" to be the smallest threshold that can achieve 95\(\%\) of the performance of WMMSE. The ideal threshold for different \(K\) is shown in Table V. We also observe that when \(K\) increases, the ideal threshold decreases. This is because the graph becomes denser as the number of users increases, and thus F-GNN has enough interference links to learn from even with a lower threshold. From Table IV, we observe that the training time increases as the threshold increases. Therefore, selecting an ideal threshold helps reduce the training time while simultaneously maintaining good performance. For example, selecting \(t\!=\!20m\) as the ideal threshold for \(K\!=\!50\) still achieves \(95\%\) of the optimal performance while reducing the required training time by roughly \(20\%\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline t & \(10\) & \(20\) & \(30\) & \(40\) & \(50\) & \(60\) & \(70\) & \(80\) & \(90\) & \(100\) & \(110\) & \(120\) & \(130\) & \(140\) \\ \hline GNN(\(\%\)) & \(91.05\) & \(96.0\) & \(97.9\) & \(99.3\) & \(99.6\) & \(99.8\) & \(100.1\) & \(100.3\) & \(100.5\) & \(100.6\) & \(100.7\) & \(100.7\) & \(100.7\) & \(100.8\) \\ \hline Time(\(s\)) & \(510\) & \(544\) & \(564\) & \(583\) & \(603\) & \(620\) & \(630\) & \(641\) & \(650\) & \(658\) & \(660\) & \(662\) & \(664\) & \(665\) \\ \hline \end{tabular}
\end{table} TABLE IV: Normalised performance and training time versus threshold.

Fig. 5: Performance versus threshold

### _Time Complexity_

As shown in Section IV-D, the time complexity hinges on the number of edges in the graph. We use the convention that capital letters denote random variables and lowercase letters their realizations. Since the number of edges decreases as the threshold decreases, the time complexity inevitably depends on the threshold. To approximately analyse this relation, note that the transmitter and receiver of each pair are close to each other, so we use the midpoint \(M_{i}\) between them to represent the \(i\)-th transceiver pair. We denote the squared distance between the \(i\)-th and \(j\)-th transceiver pairs by \(Z_{i,j}\!=\!\left(X_{i}\!-\!X_{j}\right)^{2}\!+\!\left(Y_{i}\!-\!Y_{j}\right)^{2}\), where \(X_{i}\),\(Y_{i}\) and \(X_{j}\),\(Y_{j}\) are the coordinates of \(M_{i}\) and \(M_{j}\), respectively. We assume that \(X_{i}\),\(X_{j}\),\(Y_{i}\) and \(Y_{j}\) are drawn i.i.d. from the uniform distribution \(\mathcal{U}(0,a)\); the cumulative distribution function of \(Z_{i,j}\) for any two transceiver pairs is then characterized in (13). We define \(N(t)\) as the number of edges left after truncation with the threshold \(t\). Suppose the total number of vertices is \(|\mathcal{V}|\); we have

\[N(t)\!=\!\sum_{i=1}^{|\mathcal{V}|-1}\sum_{j=i+1}^{|\mathcal{V}|}\mathbb{1}\big{(}Z_{i,j}\!<\!t^{2}\big{)}, \tag{14}\]

where \(\mathbb{1}\) is an indicator function. Since the probability \(P(\mathbb{1}\!\left(Z_{i,j}\!<\!t^{2}\right)\!=\!1)\!=\!P(Z_{i,j}\!<\!t^{2})\!=\!F(t^{2})\), the expectation is \(\mathbb{E}[\mathbb{1}\left(Z_{i,j}\!<\!t^{2}\right)]\!=\!1\!\cdot\!P(Z_{i,j}\!<\!t^{2})\!=\!F(t^{2})\).
Since \(Z_{i,j}\) has the same distribution for any two transceiver pairs, the expected number of edges left after truncation with the threshold \(t\) is given by

\[\mathbb{E}[N(t)]\!=\!\frac{|\mathcal{V}|(|\mathcal{V}|\!-\!1)}{2}\!\cdot\!F\big{(}t^{2}\big{)}\!=\!|\mathcal{E}|\!\cdot\!F\big{(}t^{2}\big{)}. \tag{15}\]

Since the time complexity for each layer is \(\mathcal{O}(|\mathcal{V}|\!+\!|\mathcal{E}|)\) and is dominated by \(|\mathcal{E}|\), the expected time complexity after truncation with the threshold \(t\) decreases by a factor of \(F(t^{2})\). Therefore, we can reduce the running time by introducing the threshold.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(K\!=\!20\) & \(K\!=\!50\) & \(K\!=\!100\) \\ \hline Threshold (\(m\)) & \(\approx\!30\) & \(\approx\!20\) & \(\approx\!15\) \\ \hline \end{tabular}
\end{table} TABLE V: Ideal threshold

## VI Conclusion

We proposed a novel graph representation for full-duplex transmission in power allocation problems and designed the corresponding graph neural network to find the optimal solution. The results showed that our algorithm achieves similar (and even slightly better) performance compared to traditional methods, with less computation time and strong generalisation capacities. Furthermore, we introduced an ideal threshold to reduce the time complexity while maintaining good performance, and derived an analytic expression for the expected time complexity to model this effect.
2302.01020
Meta Learning in Decentralized Neural Networks: Towards More General AI
Meta-learning usually refers to a learning algorithm that learns from other learning algorithms. The problem of uncertainty in the predictions of neural networks shows that the world is only partially predictable and a learned neural network cannot generalize to its ever-changing surrounding environments. Therefore, the question is how a predictive model can represent multiple predictions simultaneously. We aim to provide a fundamental understanding of learning to learn in the contents of Decentralized Neural Networks (Decentralized NNs) and we believe this is one of the most important questions and prerequisites to building an autonomous intelligence machine. To this end, we shall demonstrate several pieces of evidence for tackling the problems above with Meta Learning in Decentralized NNs. In particular, we will present three different approaches to building such a decentralized learning system: (1) learning from many replica neural networks, (2) building the hierarchy of neural networks for different functions, and (3) leveraging different modality experts to learn cross-modal representations.
Yuwei Sun
2023-02-02T11:15:07Z
http://arxiv.org/abs/2302.01020v2
# Meta Learning in Decentralized Neural Networks: Towards More General AI ###### Abstract Meta-learning usually refers to a learning algorithm that learns from other learning algorithms. The problem of uncertainty in the predictions of neural networks shows that the world is only partially predictable and a learned neural network cannot generalize to its ever-changing surrounding environments. Therefore, the question is how a predictive model can represent multiple predictions simultaneously. We aim to provide a fundamental understanding of learning to learn in the contents of Decentralized Neural Networks (Decentralized NNs) and we believe this is one of the most important questions and prerequisites to building an autonomous intelligence machine. To this end, we shall demonstrate several pieces of evidence for tackling the problems above with Meta Learning in Decentralized NNs. In particular, we will present three different approaches to building such a decentralized learning system: (1) learning from many replica neural networks, (2) building the hierarchy of neural networks for different functions, and (3) leveraging different modality experts to learn cross-modal representations.

## Progress to Date

Common sense is not just facts but a collection of models of the world. The global workspace theory [1] demonstrated that in the human brain, multiple neural network models cooperate and compete in solving problems via a shared feature space for common knowledge sharing, which is called the global workspace (GW). Within such a learning framework, using different kinds of metadata about individual neural networks, such as measured performance and learned representations, shows the potential to learn, select, or combine different learning algorithms to efficiently solve a new task. The learned knowledge or representations from different neural network areas are leveraged for reasoning and planning. Therefore, we term this research direction Meta Learning in Decentralized Neural Networks, which studies how a meta agent can solve novel tasks by observing and leveraging the world models built by these individual neural networks. We present three different approaches to building such a decentralized learning system: (1) learning from many replica neural networks, (2) building the hierarchy of neural networks, and (3) leveraging different modality experts.

## Learning from Many Replica Neural Networks

The proliferation of AI applications is reshaping the contours of the future knowledge graph of neural networks. Decentralized NNs is the study of knowledge transfer from different individual neural networks trained on separate local tasks to a global model. In a learning system comprising many replica neural networks with similar architectures and functions, the goal is to learn a global model that can generalize to unseen tasks without large-scale training [2]. In particular, we studied two practical problems in Decentralized NNs, i.e., learning with non-independent and identically distributed (non-iid) data and with multi-domain data. Notably, non-iid refers to the situation where data samples across local models are not from the same distribution, which hinders knowledge transfer between local models. To tackle the non-iid problem, we proposed Segmented-Federated Learning (Segmented-FL) [2], which employs periodic local model performance evaluation and learning group segmentation, bringing neural networks trained over similar data distributions together.
Then, for each group, we train a different global model by transferring knowledge from the local models in the group. The global model can only passively observe the local models' performance, without access to the local data. We showed that the proposed method achieved better performance in tackling non-iid data in intrusion detection tasks compared to traditional federated learning [10]. On the other hand, multi-domain refers to the situation where data samples across local models come from different domains with domain-specific features. For example, an autonomous vehicle that learns to drive in a new city might leverage the driving data of other cities learned by different vehicles. Since different cities have different street views and weather conditions, it would be difficult to directly learn a new model based on the knowledge of models trained on multi-domain data. This problem is closely related to multi-source domain adaptation, which studies the distribution shift in features inherent to specific domains that brings in negative transfer, degrading a model's generality to unseen tasks. To this end, we proposed a new domain adaptation method that reduces the feature discrepancy between local models and improves the global model's generality to unseen tasks [22]. We devised two components, embedding matching and a global feature disentangler, to align the learned features of different local models such that the global model can learn better-refined domain-invariant features. Moreover, we found that a simple voting strategy that produces multiple predictions and generates pseudo-labels based on the consensus of local models could further improve the global model's performance. The results on both image classification tasks and a natural language sentiment classification task showed that the proposed domain adaptation method could greatly improve the transfer learning of local models.

### Building the Hierarchy of Neural Networks

Hierarchical neural networks consist of multiple neural networks connected in the form of an acyclic graph. The global workspace theory (GWT) [1] describes multiple neural network models cooperating and competing in solving problems via a shared feature space for common knowledge sharing. Built upon the GWT, the consciousness prior theory [1] demonstrated sparse factor graphs in the space of high-level semantic variables and simple mappings between those variables. To study the hierarchy of neural networks, we proposed homogeneous learning for self-attention decentralized deep learning [22]. In particular, we devised a self-attention mechanism where a local model is selected as the meta for each training round and leverages reinforcement learning to recursively update a globally shared learning policy. The meta observes the states of the local models and their surrounding environment, computing the expected rewards for taking different actions based on the observation. As mentioned in [1], with a model of external reality and of an agent's possible actions, the agent can try out various alternatives and conclude which is the best action using its knowledge of past events. The goal is to learn an optimized learning policy such that the Decentralized NNs system can quickly solve a problem by planning and leveraging different local models' knowledge more efficiently. The results showed that learning a learning policy reduced the total training time for an image classification task by 50.8%.
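To make the group-wise knowledge transfer of Segmented-FL concrete, the following is a minimal PyTorch sketch of per-group weight averaging. The uniform FedAvg-style update and the precomputed `groups` mapping are simplifying assumptions; Segmented-FL's actual segmentation relies on periodic local performance evaluation:

```python
import torch

def groupwise_average(local_states, groups):
    """Average local model weights within each learning group.

    local_states: list of state_dicts, one per local model
    groups: dict mapping group_id -> indices of local models in that group
    returns: dict mapping group_id -> the group's global (averaged) state_dict
    """
    global_models = {}
    for gid, members in groups.items():
        avg = {}
        for key in local_states[members[0]]:
            stacked = torch.stack([local_states[i][key].float() for i in members])
            avg[key] = stacked.mean(dim=0)  # uniform weighting within the group
        global_models[gid] = avg
    return global_models
```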
### Leveraging Different Modality Experts

Information in the real world usually comes in different modalities. Degeneracy [23] in neural structure refers to the fact that any single function can be carried out by more than one configuration of neural signals and that different neural clusters participate in several different functions. Intelligence systems build models of the world with different modalities, where spatial concepts are generated via modality models. We demonstrate cross-modal learning in multimodal models [1]. Notably, we studied the Visual Question Answering (VQA) problem based on self-supervised learning [24]. By leveraging contrastive learning over different model components, we aimed to align the modality representations, encouraging the similarity of relevant component outputs while discouraging irrelevant ones, such that the learning framework learns better-refined cross-modal representations for unseen VQA tasks based on the knowledge learned from the different VQA tasks of local models.

## Anticipated Progress

The vast majority of current neural networks lack sophisticated logical reasoning and action-planning modules. We aim to study a neuro-symbolic approach to improving the explainability and robustness of knowledge sharing in the Global Workspace (GW) of Decentralized NNs. Furthermore, we consider several components necessary for building such a neuro-symbolic learning framework, i.e., causal models and probabilistic Bayesian neural networks [10], and associative memory like the Hopfield network [12]. In particular, we aim to tackle visual grounding tasks such as visual question answering and image captioning. In this regard, we will revisit and reintegrate classical symbolic methods into the decentralized neural networks theory to improve the hierarchical reasoning of the meta agent for leveraging different modality expert models. The anticipated contribution is establishing a new learning framework that performs efficient causal discovery and inference based on decentralized neural networks, improving generality in visual language modeling.
2303.07778
GANN: Graph Alignment Neural Network for Semi-Supervised Learning
Graph neural networks (GNNs) have been widely investigated in the field of semi-supervised graph machine learning. Most methods fail to exploit adequate graph information when labeled data is limited, leading to the problem of oversmoothing. To overcome this issue, we propose the Graph Alignment Neural Network (GANN), a simple and effective graph neural architecture. A unique learning algorithm with three alignment rules is proposed to thoroughly explore hidden information for insufficient labels. Firstly, to better investigate attribute specifics, we suggest the feature alignment rule to align the inner product of both the attribute and embedding matrices. Secondly, to properly utilize the higher-order neighbor information, we propose the cluster center alignment rule, which involves aligning the inner product of the cluster center matrix with the unit matrix. Finally, to get reliable prediction results with few labels, we establish the minimum entropy alignment rule by lining up the prediction probability matrix with its sharpened result. Extensive studies on graph benchmark datasets demonstrate that GANN can achieve considerable benefits in semi-supervised node classification and outperform state-of-the-art competitors.
Linxuan Song, Wenxuan Tu, Sihang Zhou, Xinwang Liu, En Zhu
2023-03-14T10:39:58Z
http://arxiv.org/abs/2303.07778v1
# GANN: Graph Alignment Neural Network for Semi-Supervised Learning ###### Abstract Graph neural networks (GNNs) have been widely investigated in the field of semi-supervised graph machine learning. Most methods fail to exploit adequate graph information when labeled data is limited, leading to the problem of over-smoothing. To overcome this issue, we propose the Graph Alignment Neural Network (GANN), a simple and effective graph neural architecture. A unique learning algorithm with three alignment rules is proposed to thoroughly explore hidden information for insufficient labels. Firstly, to better investigate attribute specifics, we suggest the feature alignment rule to align the inner product of both the attribute and embedding matrices. Secondly, to properly utilize the higher-order neighbor information, we propose the cluster center alignment rule, which involves aligning the inner product of the cluster center matrix with the unit matrix. Finally, to get reliable prediction results with few labels, we establish the minimum entropy alignment rule by lining up the prediction probability matrix with its sharpened result. Extensive studies on graph benchmark datasets demonstrate that GANN can achieve considerable benefits in semi-supervised node classification and outperform state-of-the-art competitors. 1College of Computer, National University of Defense Technology 2College of Intelligence Science and Technology, National University of Defense Technology

## 1 Introduction

Graph semi-supervised learning is an important research topic in the era when graph data rapidly accumulate while data labeling is unaffordably expensive and time-consuming. To provide good performance with little human guidance, graph semi-supervised algorithms make large efforts to exploit hidden information within the data. However, the lack of sufficient supervision further aggravates the over-smoothing problem of graph neural networks. As an example, graph convolutional networks (GCNs)[12] use the Laplacian matrix to extract features and excel at link prediction and clustering. When more than two GCN layers are employed, node features from different clusters may be fused and become hard to tell apart, i.e., over-smoothing. Existing methods usually exploit either structural information or attribute information to eliminate the over-smoothing problem. Inspired by the success of image restoration, Graph Completion[23] conceals specific nodes of the input graph by deleting their features. Additionally, AttributeMask[14, 15] aims to reconstruct the dense feature matrix processed by principal component analysis. By mining the attributes, each of the above methods can produce a less collapse-prone node representation. The following papers pertain to the extraction of structural information. APPNP[16] uses personalized PageRank with GCNs and comes up with a sophisticated global propagation scheme. Based on APPNP, ADAGCN[24] incorporates AdaBoost into graph neural networks for multilayer aggregation, which can employ higher-order neighbor knowledge for node representation construction. Although the over-smoothing problem has been largely eliminated by the mentioned methods, they fail to exploit two kinds of essential information for further performance improvement. Firstly, the majority of GNN models do not associate the representation with the original attributes following multiple convolutions.
They pay less attention to the relationship between the final node representation and the original data, resulting in the underutilization of characteristics. Secondly, the vast majority of the aforementioned articles can only aggregate first- or second-order neighbor information using the adjacency matrix. Nonetheless, a number of investigations have demonstrated that the adjacency matrix's node connections are not always meaningful, and nodes from different clusters may be connected. This necessitates models with the capacity to dig deep into the structural information in order to build more trustworthy node representations. To solve the mentioned problems, we propose a generalised graph alignment neural network (GANN) on the basis of the ADAGCN[24] learning framework. It aligns the intermediate network representations with three kinds of information landmarks for hidden supervision exploitation. To fully investigate the attributes, we first propose the feature alignment rule based on the notion of the graph autoencoder. We enrich the node representation by aligning the feature correlation matrix with the embedding correlation matrix. Then, to maximize the utilization of the multi-hop graph structure and labeling information, we suggest the cluster center alignment rule. We compute the inner product of the cluster-center embeddings and align it with the unit matrix. By minimizing intra-cluster distance and eliminating inter-cluster noise, we prevent the model from becoming too smooth during deep training. Finally, we present the minimum entropy alignment rule to generate more reliable outcomes in semi-supervised tasks. We force the model to generate predictions with low entropy by aligning the prediction probability matrix with its sharpened results[1]. In the experimental section, we demonstrate the effectiveness of GANN through comprehensive algorithm comparisons and ablation studies. Experimental results show that our proposed model, GANN, outperforms state-of-the-art approaches on the benchmark datasets. The contributions of this paper are as follows:

* We propose GANN, a model that can alleviate the over-smoothing phenomenon in graph neural networks by fully exploiting the essential information of graphs.
* We propose three alignment rules: the feature alignment rule, the cluster center alignment rule, and the minimum entropy alignment rule. From the perspectives of attribute, structure, and prediction-result optimization, they leverage graph information and labeled data to generate quality node representations.
* We perform node classification experiments on the benchmark datasets with varying labeling rates, and results improve on all datasets. In addition, the validity of the alignment rules is examined to verify that the proposed GANN can adequately use the essential information of graph data.

## 2 Related Work

### Graph Semi-Supervised Learning

Semi-supervised tasks[13, 14] require only a small quantity of labeling information, make full use of the vast amount of unlabeled information, and can offer substantial improvements in practice. Graph semi-supervised learning[15] has gained a lot of attention due to its unique structure and wide range of applications. There are several semi-supervised algorithms[16, 17, 18] that can partially address the over-smoothing phenomenon. GCN-Cheby[15] uses CNNs in spectral graph theory to create fast localized convolutional filters on graphs. GAT[14] uses masked self-attentional layers to improve graph convolution-based algorithms.
Jumping knowledge networks with concatenation (JK-Net)[11] introduce jump connections into the final aggregation mechanism to extract knowledge from different graph convolutional layers. GPRGNN[16] introduces a new Generalized PageRank GNN architecture that adaptively trains GPR weights to concurrently improve node feature and topological information extraction, independent of node label homophily or heterophily. MixupForGraph[17] utilizes the representations of each node's neighbors prior to Mixup for graph convolutions. SGC[19] simplifies models by reducing nonlinearities and collapsing weight matrices between layers. GWNN[11] uses the graph wavelet transform to improve on graph Fourier-based spectral graph CNN algorithms. PPNP and APPNP[14] rely mainly on the PageRank concept to exploit deeper adjacency information, which is difficult to obtain with graph neural networks. Experiments are conducted on several datasets with varying labeling rates to evaluate the efficacy of our proposed model in comparison to the aforementioned related methods.

### Graph Neural Networks

GCN[14] is the first method to take CNNs from Euclidean to graph domains. It can be explained in both the spectral and spatial fields, leading to derivative works in two directions. Common spatial-domain methods include GAT[14], TAAGCN, etc.; spectral-domain methods include FAGCN[1], NFCGCN[16], etc. However, none of the methods with GCN as the base component can exploit higher-order neighbor information, because of the collapse phenomenon after multilayer aggregation. Currently, multilayer perceptrons (MLPs) are widely utilized due to their adaptability and efficacy. Zhu[15] develops MLP-based networks which perform comparably to visual networks in the field of images. Experimental validation in Zhang[16] demonstrates that both GNNs and MLPs have sufficient expressiveness when the dimension of the representation is sufficiently large. We apply a simple MLP layer as the main structure in our model, demonstrating that good feature mining results can be achieved with MLPs alone. GAE[14] is one of the prevalent methods for generative tasks. It employs a GCN encoder to obtain an implicit representation, and its decoder uses inner products. To rebuild the original graph structure, the inner product of the implicit representation is aligned with the original adjacency matrix. Graph autoencoders have begun to be widely explored, with several models employing their ideas. VGAE[14] extends GAE with a variational autoencoder. Later, SIG-VAE[17] considers hierarchical variational inference to generate graph data. ARGA/ARVGA[17] regularize GAE/VGAE using GANs. We borrow the idea of adjacency matrix alignment from the traditional GAE structure for model design and apply it to feature data mining.

### Oversmoothing Phenomenon

Over-smoothing is generated by the convolution of many GCN layers and was first identified by Li[15]. The node representations of different clusters become too close, hurting model performance. Xu[11] notes that the over-smoothing speed varies among different types of nodes: more edge nodes slow down over-smoothing. The solutions in Kipf[14] and Xu[11] are based on ResNets[12], adding residual connections, but the resulting models are less effective.
Additionally, there are several other algorithms that can alleviate the over-smoothing problem, and a wide range of GNN applications has been introduced. There is also innovation on the frequency side, aggregating first-order neighbors at different frequencies. FAGCN(Bo et al., 2021) makes the central node more informative and increases its difference from other nodes by aggregating both high- and low-frequency information. ADAGNN(Dong et al., 2021) grades the frequencies and adaptively aggregates them. Another kind of graph feature extraction does not use convolution at all; instead, it simply applies ordinary nonlinear activation functions to the data, as in common MLPs and related model structures. Several studies attempt to tackle the challenge posed by Laplacian smoothing by transforming the filter form(Zheng et al., 2021). The structure of our model aims to mitigate the impact of the over-smoothing problem in node representation generation.
## 3 Our Proposed Model: GANN

In this section, we first discuss the notations used and the task definition, then describe our motivation and the details of the GANN components.

### Preliminaries

#### Symbols Definition

Given an undirected graph \(\mathcal{G}=(V,\mathcal{E},X)\) with adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), \(V\) is the set of nodes, which has \(N\) samples, \(\mathcal{E}\) is the set of edges, and \(\mathbf{X}\in\mathbb{R}^{N\times d}\) is the attribute matrix. The Laplacian matrix of the graph is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), where \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is a diagonal degree matrix with \(\mathbf{D}_{ii}=\sum_{j}\mathbf{A}_{ij}\). The symmetric normalized Laplacian matrix is defined as \(\mathbf{L}_{sym}=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\). We use \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\in\mathbb{R}^{N\times N}\) as a frequency filter, where \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), \(\tilde{\mathbf{D}}\) is the degree matrix of \(\tilde{\mathbf{A}}\), and \(\mathbf{I}\) denotes the identity matrix. We first normalize \(\mathbf{X}\) by calculating \(\hat{\mathbf{X}}=\mathrm{Normalize}(\mathbf{X})\) and then take the resultant matrix \(\hat{\mathbf{X}}\) as input.

#### Task Definition

Graph alignment neural network for semi-supervised learning. The model receives \(\mathbf{X}\) and \(\mathbf{A}\) as inputs, performs iterative training over several layers, and then averages the output \(\mathbf{Z}\) of each layer as the final result.

#### Motivation

**ADAGCN.** An efficient model that combines the data of nodes and their neighbors by integrating the traditional graph neural network with AdaBoost. Specifically, the single-layer model structure of ADAGCN(Sun, Zhu, and Lin, 2020) consists of MLPs, and training typically proceeds over multiple layers. The \(l\)-th layer's input is the \(l\)-hop adjacency matrix of the graph \(\mathcal{G}\) and the attribute matrix. Each layer of the network is structure-identical with shared parameters. In ADAGCN, a result \(\mathbf{Z}\) is produced at the end of each layer; after all training layers have been conducted, the \(\mathbf{Z}\)'s are weighted to generate the final result \(\mathbf{Z}^{\prime}\). The loss function of ADAGCN is formulated as:

\[\mathcal{L}_{semi}=-\frac{1}{n}\sum_{i}w_{i}[y_{i}\ln\hat{y}_{i}+(1-y_{i})\ln(1-\hat{y}_{i})], \tag{1}\]

where \(n\) represents the number of labeled samples, and \(y_{i}\) and \(\hat{y}_{i}\) denote the true label and the predicted label, respectively. \(w_{i}\) denotes the learnable weight. ADAGCN has been proven superior to GCN in addressing the issue of embedding collapse.

**Limitations.** Despite the effectiveness of ADAGCN, we observe that it has the following two limitations. i) Overlooking the underlying meaningful attributes. On the one hand, ADAGCN takes only the attribute matrix \(\mathbf{X}\) as input during the first layer, which, given its objective function, results in ignoring the attributes of unlabeled nodes. On the other hand, the adjacency matrix is multiplied with the attributes to correlate non-training samples. This only uses neighborhood-level knowledge from the adjacency matrix, which is limited for constructing node representations. ii) Ignoring significant higher-order neighbors. More precisely, after multiplication, the adjacency matrix becomes denser.
This causes more neighbor noise due to erroneous edges, hindering the model from mining higher-order information in depth. To solve the above issues, we propose GANN to explore the essential information of the graph via three alignment rules, as depicted in Figure 1.

### Alignment Rules

In this section, we introduce the three alignment rules of GANN in detail, i.e., the feature alignment rule, the cluster center alignment rule, and the minimum entropy alignment rule.

#### Feature Alignment Rule

As stated previously, existing models cannot fully leverage the original graph attributes. To tackle this problem, we introduce the feature alignment rule, where we align the feature correlation matrix with the embedding correlation matrix. Specifically, we first calculate the similarity matrix \(\mathbf{S}\) via \(\hat{\mathbf{X}}\) to obtain the \(0\)-\(1\) feature correlation matrix \(\mathbf{F}^{\prime}\).

\begin{table}
\begin{tabular}{l|l} \hline Notations & Meaning \\ \hline \(N\) & Sample Number \\ \(d\) & Feature Dimension \\ \(h^{\prime}\) & Embedding Dimension \\ \(L\) & Layer Number \\ \(C\) & Cluster Number \\ \(\mathbf{X}\in\mathbb{R}^{N\times d}\) & Feature Matrix \\ \(\mathbf{A}\in\mathbb{R}^{N\times N}\) & Adjacency Matrix \\ \(\mathbf{I}\in\mathbb{R}^{N\times N}\) & Identity Matrix \\ \(\hat{\mathbf{A}}\in\mathbb{R}^{N\times N}\) & Normalized Adjacency Matrix \\ \(\mathbf{D}\in\mathbb{R}^{N\times N}\) & Degree Matrix \\ \(\hat{\mathbf{X}}\in\mathbb{R}^{N\times d}\) & Normalized Feature Matrix \\ \(\mathbf{H}_{1}^{(l)}\in\mathbb{R}^{N\times h^{\prime}}\) & Node Embedding for the Layer \(l\) \\ \(\mathbf{F}^{\prime}\in\mathbb{R}^{N\times N}\) & Feature Correlation Matrix \\ \(\hat{\mathbf{F}}\in\mathbb{R}^{N\times N}\) & Embedding Correlation Matrix \\ \(\bar{\mathbf{E}}\in\mathbb{R}^{C\times h^{\prime}}\) & Cluster Center Matrix \\ \(\hat{\mathbf{E}}\in\mathbb{R}^{C\times C}\) & Cluster Center Correlation Matrix \\ \(\mathbf{Z}\in\mathbb{R}^{N\times C}\) & Prediction Probability Matrix \\ \hline \end{tabular}
\end{table} Table 1: Notation Summary

In the process of similarity estimation, we considered the Jaccard, cosine, and Gaussian kernel approaches; cosine similarity is finally employed according to the performance comparison, formulated as:

\[\mathbf{S}_{ij}=\frac{\hat{\mathbf{x}}_{i}}{\|\hat{\mathbf{x}}_{i}\|_{2}}\cdot\frac{\hat{\mathbf{x}}_{j}}{\|\hat{\mathbf{x}}_{j}\|_{2}}, \tag{2}\]

\[\mathbf{f}_{ij}=\begin{cases}\mathbf{S}_{ij},&\mathbf{S}_{ij}\geqslant\eta\\ 0,&\mathbf{S}_{ij}<\eta\end{cases}, \tag{3}\]

\[\mathbf{f}^{\prime}_{i}=topk(\mathbf{f}_{i}), \tag{4}\]

where \(\cdot\) denotes the dot product and \(\|\cdot\|_{2}\) the \(L_{2}\) norm. \(\mathbf{S}_{ij}\) is an element of the similarity matrix \(\mathbf{S}\) representing the similarity between node \(i\) and node \(j\). Since the underlying graph structure is sparse, we set the elements of \(\mathbf{S}\) that are smaller than a threshold \(\eta\) to \(0\). In addition, \(topk\) indicates that the \(k\) nodes with the highest similarity values for node \(i\) are selected, and the rest are assigned \(0\).
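A minimal PyTorch sketch of this construction, Eqs. (2)-(4), is given below; the threshold value \(\eta\) is an illustrative placeholder, while \(k=10\) matches the \(topk\) setting used in the experiments of Section 4:

```python
import torch
import torch.nn.functional as F

def feature_correlation(X_hat, eta=0.5, k=10):
    """Build the 0-1 feature correlation matrix F' of Eqs. (2)-(4):
    cosine similarity, thresholding at eta, then keeping the top-k
    entries per row (eta = 0.5 is an illustrative value)."""
    Xn = F.normalize(X_hat, p=2, dim=1)                 # row-wise L2 normalization
    S = Xn @ Xn.t()                                     # Eq. (2): cosine similarity
    S = torch.where(S >= eta, S, torch.zeros_like(S))   # Eq. (3): threshold eta
    vals, idx = S.topk(k, dim=1)                        # Eq. (4): top-k per node
    return torch.zeros_like(S).scatter_(1, idx, vals)
```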
Next, we obtain the embedding correlation matrix \(\hat{\mathbf{F}}\) by performing the inner product of the embedding matrix \(\mathbf{E}\). Specifically, \(\hat{\mathbf{X}}\) is the input to the first layer, while the smoothed adjacency matrix \(\hat{\mathbf{A}}\) multiplied with \(\hat{\mathbf{X}}\) forms the input to the second layer. A dropout layer is applied to the input to increase the model's robustness, and an MLP layer is then employed to generate the embedding matrix, as shown below:

\[\mathbf{H}^{(l)}_{1}=\sigma_{1}(\mathrm{Dropout}(\hat{\mathbf{X}}\hat{\mathbf{A}}^{(l-1)})\mathbf{W}_{1}), \tag{5}\]

\[\mathbf{e}^{(l)}_{i}=\frac{\mathbf{h}^{(l)}_{1i}}{\max(\|\mathbf{h}^{(l)}_{1i}\|_{2},\epsilon)}, \tag{6}\]

\[\hat{\mathbf{F}}^{(l)}=\sigma_{2}(\mathbf{E}^{(l)}\mathbf{E}^{(l)^{\top}}), \tag{7}\]

where \(l\) denotes the model's layer index, \(l=1,...,L\). \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h^{\prime}}\) is a trainable parameter and \(\mathbf{H}^{(l)}_{1}\in\mathbb{R}^{N\times h^{\prime}}\) is the hidden node representation for layer \(l\). Each row \(\mathbf{h}^{(l)}_{1i}\) of \(\mathbf{H}^{(l)}_{1}\) has \(h^{\prime}\) entries and is normalized to generate \(\mathbf{e}^{(l)}_{i}\), the final node embedding. \(\sigma_{1}\) and \(\sigma_{2}\) denote the ReLU and sigmoid activation functions, respectively, and \(\epsilon\) is a small value. Lastly, we calculate the Kullback-Leibler divergence between the feature correlation matrix \(\mathbf{F}^{\prime}\) and the embedding correlation matrix \(\hat{\mathbf{F}}\) to measure their difference. As \(\mathbf{F}^{\prime}\) is fixed, so is its entropy; consequently, the divergence can be written as a cross-entropy loss function:

\[\mathcal{L}_{f}^{(l)}=CE(\hat{\mathbf{F}},\mathbf{F}^{\prime}), \tag{8}\]

where \(CE(\cdot)\) denotes the cross-entropy loss (Murphy, 2012). By applying this rule and minimizing Eq.(8), the node characteristics necessary for subsequent tasks are fused into the node representation in order to enrich it.

Figure 1: Illustration of GANN. The model is trained using sequential iteration, as depicted on the left of the figure. We utilize one MLP layer for feature extraction and one MLP layer with softmax for probability calculation. The parameters of each model structure layer are shared. In addition, the model includes three alignment rules to optimize the results (figure right). The inputs to the model are the attribute matrix and the adjacency matrices of various hops; the outputs are the cluster probabilities.

#### Cluster Center Alignment Rule

Although the feature alignment rule can efficiently employ attribute information to generate node representations, it still suffers from the over-smoothing issue. Therefore, we develop the cluster center alignment rule to fully utilize structural information. In detail, we utilize a few labeled nodes to calculate the cluster-center embedding matrix \(\bar{\mathbf{E}}\) and align its inner product \(\hat{\mathbf{E}}\) with a unit matrix via the Kullback-Leibler divergence. For instance, assuming that Cora-ML has seven clusters and each cluster contains \(20\) labeled samples, the average embedding of the \(20\) samples in a cluster gives its cluster-center embedding, formulated as follows:

\[\bar{\mathbf{e}}^{(l)}_{i}=\frac{\sum_{j\in i}^{m}\mathbf{e}_{j}^{(l)}}{m}, \tag{9}\]
The loss function is formulated as: \[\mathcal{L}_{e}=\frac{\sum_{i}^{C}{(1-\frac{\hat{\textbf{E}}^{(l)}_{ij}}{C})^{ 2}}}{C}+\lambda\frac{\sum_{i}^{C}{\sum_{j\neq i}^{C}{(\frac{\textbf{E}^{(l)}_{ ij}}{C})^{2}}}}{C(C-1)}. \tag{11}\] In Eq.(11), \(\lambda\) is a hyper-parameter that is employed to coordinate the cluster-center embeddings with other noise, which alleviates the oversmoothing issue by using cluster centers with labeled data. In our view, deterministic criteria improve model accuracy. Hence, it gets rid of the problem that the node representation cannot make full use of the graph structure information. ### Minimum Entropy Alignment Rule The first two alignment rules optimize node representations by mining graph attribute and structure data. However, it still poses a great challenge to create an accurate prediction probability matrix with a small number of labels for semi-supervised tasks. To overcome this problem, we propose a minimum entropy alignment rule by aligning the prediction probability matrix and its sharpened(Berthelot et al., 2019) results. Specifically, we first calculate \(\textbf{H}_{2}^{(l)}\) using an MLP layer to ensure that the dimensions of the learned embeddings and clusters are constant. The prediction probability matrix \(\textbf{Z}^{(l)}\in\mathbb{R}^{N\times C}\) is then generated using softmax. Moreover, we calculate the cluster relationship probabilities of all samples through a sharpening operation to get the sharpened matrix \(\hat{\textbf{Z}}\). \[\textbf{H}_{2}^{(l)}=\sigma_{1}(\textbf{H}_{1}^{(l)}\textbf{W}_{2}), \tag{12}\] \[\textbf{z}_{i}^{(l)}=\frac{\textbf{e}^{h_{zi}^{(l)}}}{\sum_{j=1}^{N}{e^{h_{zi} ^{(l)}}}}, \tag{13}\] \[\hat{\textbf{Z}}_{ij}^{(l)}=\exp(\textbf{Z}_{ij}^{(l)})^{\frac{1}{k\epsilon m }})/\sum_{c=1}^{C}\exp(\textbf{Z}_{ic}^{(l)^{\frac{1}{k\epsilon m}}}). \tag{14}\] In Eq.(14), the cluster of node \(i\) indicates \(c\), ranging from \(1\) to \(C\). \(tem\) refers to a hyper-parameter, which controls the sharpness of the categorical distribution. Next, we decrease the distance between the prediction probability matrix \(\textbf{Z}\) and the sharpened result \(\hat{\textbf{Z}}\) to optimize the network. The rule loss function is formulated as: \[\mathcal{L}_{me}=\frac{1}{N}\sum_{i=1}^{N}||\textbf{z}_{i}^{(l)}-\hat{\textbf {z}}_{i}^{(l)}||_{2}^{2}. \tag{15}\] Grandvalet(Grandvalet and Bengio, 2004) has shown that unlabeled examples are mostly beneficial when clusters have small overlap. We optimize the results by reducing the entropy of the prediction, which increases the confidence of the classification results. ### Overview of Our Proposed GANN In this subsection, we formally define the whole architecture of GANN. The mathematical formulation of GANN is defined as: \[\left\{\begin{array}{l}\textbf{H}_{1}^{(l)}=f_{1}(\hat{\textbf{X}}\hat{ \textbf{A}}^{(l-1)}),\\ \textbf{H}_{2}^{(l)}=f_{2}(\textbf{H}_{1}^{(l)}),\\ \textbf{Z}^{(l)}=\log(Softmax(\textbf{H}_{2}^{(l)})),\\ \textbf{Z}=\frac{\textbf{Z}^{(1)}+\cdots+\textbf{Z}^{(l)}+\cdots+\textbf{Z}^{ (L)}}{L},\end{array}\right. \tag{16}\] where \(f_{1}\) denotes the nonlinear function part of the above Eq.(5), \(f_{2}\) denotes the function part in Eq.(12). \(l\) is the number of the current network layer, ranging from \(1\) to \(L\). The parameters of each model layer are shared and updated sequentially. Note that we adopt the optimal learned weight matrix of the \(l\)-th layer to initialize that of the \((l+1)\)-th layer. 
The final loss function is formulated as:

\[\mathcal{L}=\mathcal{L}_{semi}+\mathcal{L}_{e}+\beta*\mathcal{L}_{f}+\gamma*\mathcal{L}_{me}. \tag{17}\]

Specifically, we follow the weighted-sum strategy of ADAGCN to obtain \(\mathcal{L}_{semi}\). In addition, \(\beta\) and \(\gamma\) are coefficients used to balance the objectives. The detailed learning procedure of the proposed GANN is shown in Algorithm 1.

## 4 Experiments

### Experimental Setup

**Datasets.** To demonstrate the effectiveness of the proposed GANN, we conduct extensive experiments on eight datasets, including DBLP ([https://dblp.uni-trier.de](https://dblp.uni-trier.de)), ACM, AMAP(Tu et al., 2021), AMAC(Tu et al., 2021), Cora-ML(Abu-El-Haija et al., 2020), CiteSeer(Shchur et al., 2018), PubMed(Namata et al., 2012), and MS-Academic(Sen et al., 2008). DBLP is a network connecting authors from diverse domains, including machine learning, databases, etc.; an edge indicates that two authors are co-authors. ACM is a network of papers, where an edge is created when two publications share an author. AMAP is a graph of Amazon product purchases, where nodes represent products and edges indicate whether they are frequently purchased together. AMAC is comparable to AMAP and is likewise built from Amazon's co-purchase graph. The last four are text classification datasets. Cora-ML, CiteSeer and PubMed are citation graphs: each node represents an article, and an edge denotes a citation relationship. In the MS-Academic dataset, an edge between nodes indicates co-authorship. Table 2 summarizes the dataset statistics.

**Baselines.** We choose thirteen representative models from the relevant directions; they can be categorized into three types. Concretely, the first type works with graph neural networks: GCN (with early stopping), V.GCN(Kipf and Welling, 2016), GCN-Cheby(Defferrard et al., 2016) and GAT. The second type focuses on specific feature usage: FAGCN(Bo et al., 2021), SGC(Wu et al., 2019) and MixupForGraph(Wang et al., 2021). The third type focuses on the use of structure: JK-Net(Xu et al., 2018), PPNP, APPNP(Klicpera et al., 2018), GWNN(Xu et al., 2019), GPRGNN(Chien et al., 2020) and ADAGCN(Sun et al., 2020).

**Training.** In this part we present the data and parameter details used for the experiments. 1) For data usage, we employ three scales for splitting the training set. The first method utilizes \(20\) training samples per cluster across all datasets; the results are displayed in Table 3. The second strategy employs \(10\) for AMAC and \(15\) for the remaining datasets; Table 4 shows the results. The third method uses \(7\) samples per cluster for all datasets; Table 5 details the outcomes. The validation set has 500 samples, whereas the test set contains the remaining samples. 2) For parameters, a combination with a learning rate of \(0.01\), \(topk\) of \(10\), hidden size of 5000 and \(\lambda\) of \(1\) is used on all eight datasets. We set the number of layers for Cora-ML, CiteSeer and PubMed to \(12\), \(9\) and \(11\), respectively, and to \(5\) for the other datasets. For PubMed and MS-Academic, we set the dropout value to \(0.2\); for the other datasets, we set it to \(0\).
For PubMed and MS-Academic, we use 0.7 and 0.1, and 0.5 and 3, for \(\gamma\) and \(\beta\), respectively, and 0.5 and 0.5 for the other datasets. For AMAC, CiteSeer and MS-Academic, we set the weight decay to \(1e-6\), \(1e-3\) and \(1e-5\), respectively; for the other datasets, we set it to \(1e-4\). Throughout the experiments, we use PyTorch for code development, on an NVIDIA GeForce RTX 3080 card with 64GB of RAM and a 20-core CPU for all datasets. We compare model outcomes by adapting the baselines' source code to best fit their original parameters. The model that performs best on the validation set is used to predict the test set. We select classification accuracy as the evaluation metric, and 10 random seeds are used for all experimental results.

### Performance Comparison

As shown in Table 3, Table 4 and Table 5, we compare GANN with thirteen baselines at different labeling rates. From these results, we can draw the following conclusions. 1) First, in terms of the overall effect, our model GANN maintains a stable accuracy and does not suffer a large drop as the number of labels decreases, whereas the accuracy of other strong models, such as ADAGCN and GPRGNN, decreases by \(5\%\) to \(13\%\). 2) GANN outperforms the majority of models on the eight datasets, which indicates that it can mine essential graph information and utilize graph attribute and structure information effectively. To further compare the models, we select a subset of models and datasets for visualization, as shown in Figure 12 and Figure 13. The illustrations demonstrate that our proposed GANN model can generate high-quality node representations.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Nodes** & **Edges** & **Features** & **Classes** & **Label Rates** \\ \hline DBLP & 4507 & 7056 & 334 & 4 & 0.018 \\ ACM & 3025 & 26 256 & 1870 & 3 & 0.019 \\ AMAP & 7650 & 287 326 & 745 & 8 & 0.020 \\ AMAC & 13 752 & 491 722 & 767 & 10 & 0.014 \\ Cora-ML & 2810 & 7981 & 2879 & 7 & 0.047 \\ CiteSeer & 2110 & 3668 & 3703 & 6 & 0.036 \\ PubMed & 19 717 & 44 324 & 500 & 3 & 0.003 \\ MS-Academic & 18 333 & 81 894 & 6805 & 15 & 0.016 \\ \hline \hline \end{tabular}
\end{table} Table 2: Dataset Statistics

Figure 2: For these experiments, only the attribute matrix is utilized. All-20 and All-10 indicate all input samples when the train size is 20 and 10; Train-20 and Train-10 indicate only 20 and 10 input samples. We give the average accuracy for ADAGCN run with 10 random seeds.

### Ablation Study

In this section, we conduct verification experiments on the previously introduced feature alignment rule, cluster center alignment rule, and objective function.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline **Model** & **DBLP** & **ACM** & **AMAP** & **AMAC** & **Cora-ML** & **CiteSeer** & **PubMed** & **MS-Academic** \\ \hline
GCN & 76.39\(\pm\)1.69 & 89.38\(\pm\)1.53 & 79.65\(\pm\)0.97 & 54.32\(\pm\)0.98 & 79.96\(\pm\)0.86 & 69.34\(\pm\)0.76 & 71.83\(\pm\)1.46 & 89.30\(\pm\)1.12 \\
V.GCN & 75.95\(\pm\)1.08 & 90.12\(\pm\)1.24 & 78.72\(\pm\)1.24 & 54.38\(\pm\)0.76 & 79.75\(\pm\)0.62 & 69.36\(\pm\)0.65 & 71.57\(\pm\)1.46 & 90.04\(\pm\)0.83 \\
GCN-Cheby & 76.04\(\pm\)1.10 & 89.62\(\pm\)0.63 & 89.88\(\pm\)0.79 & 74.16\(\pm\)0.82 & 75.99\(\pm\)0.65 & 68.31\(\pm\)1.14 & 68.10\(\pm\)0.30 & OOM \\
GAT & 77.33\(\pm\)0.31 & 89.66\(\pm\)0.78 & 90.77\(\pm\)0.75 & 77.21\(\pm\)0.56 & 78.67\(\pm\)0.45 & 70.86\(\pm\)0.97 & 68.94\(\pm\)0.28 & 87.86\(\pm\)0.66 \\
FAGCN & 77.59\(\pm\)1.25 & 89.75\(\pm\)1.45 & 87.56\(\pm\)0.63 & 79.41\(\pm\)1.13 & 81.37\(\pm\)1.09 & 67.46\(\pm\)1.51 & 74.30\(\pm\)1.56 & 89.11\(\pm\)1.18 \\
SGC & 77.30\(\pm\)1.79 & 90.46\(\pm\)0.52 & 88.33\(\pm\)1.69 & 73.21\(\pm\)1.68 & 79.18\(\pm\)1.02 & 66.15\(\pm\)1.80 & 72.82\(\pm\)1.67 & 88.36\(\pm\)0.61 \\
MixupForGraph & 69.54\(\pm\)0.96 & 85.24\(\pm\)0.69 & 85.20\(\pm\)1.19 & 57.56\(\pm\)1.43 & 75.30\(\pm\)1.18 & 65.96\(\pm\)1.66 & 67.47\(\pm\)1.62 & 84.33\(\pm\)1.26 \\
JK-Net & 78.08\(\pm\)1.08 & 89.49\(\pm\)0.56 & 89.42\(\pm\)0.68 & 72.45\(\pm\)1.22 & 77.72\(\pm\)0.71 & 70.65\(\pm\)0.23 & 67.92\(\pm\)0.31 & 88.32\(\pm\)0.25 \\
PPNP & 79.91\(\pm\)1.32 & 90.90\(\pm\)0.40 & 83.26\(\pm\)1.59 & 64.89\(\pm\)0.95 & 81.01\(\pm\)1.47 & 72.38\(\pm\)1.05 & OOM & OOM \\
APPNP & 79.38\(\pm\)1.42 & 90.49\(\pm\)1.19 & 83.67\(\pm\)1.18 & 65.19\(\pm\)1.04 & 80.94\(\pm\)1.16 & 72.93\(\pm\)1.50 & 76.30\(\pm\)1.63 & 91.97\(\pm\)0.48 \\
GWNN & 75.53\(\pm\)1.07 & 90.45\(\pm\)0.59 & 88.92\(\pm\)1.22 & 74.32\(\pm\)0.78 & 80.83\(\pm\)0.18 & 70.63\(\pm\)0.93 & 74.50\(\pm\)0.82 & 89.84\(\pm\)0.32 \\
GPRGNN & **80.83\(\pm\)0.65** & 90.68\(\pm\)0.19 & 91.32\(\pm\)0.37 & 79.49\(\pm\)0.77 & 83.18\(\pm\)0.63 & 73.60\(\pm\)0.30 & 69.58\(\pm\)0.19 & 88.90\(\pm\)0.22 \\
ADAGCN & 77.61\(\pm\)1.78 & 29.44\(\pm\)1.58 & 84.73\(\pm\)0.50 & 59.48\(\pm\)1.32 & 82.08\(\pm\)0.99 & 72.95\(\pm\)1.38 & 74.17\(\pm\)1.11 & 91.81\(\pm\)0.49 \\ \hline
GANN(Ours) & 80.72\(\pm\)0.64 & **92.83\(\pm\)0.63** & **91.71\(\pm\)0.57** & **80.02\(\pm\)0.68** & **85.12\(\pm\)0.81** & **74.33\(\pm\)0.75** & **78.14\(\pm\)0.26** & **92.79\(\pm\)0.37** \\ \hline \hline \end{tabular}
\end{table} Table 4: This is the result of using \(10\) samples per cluster for AMAP and \(15\) for others. We evaluate the node classification tasks using the average accuracy (\(\%\)) and standard deviation as criteria. Experiments use \(10\) random seeds. OOM denotes “out of memory”. Bolded results are optimal, and underlined results are suboptimal.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline **Model** & **DBLP** & **ACM** & **AMAP** & **AMAC** & **Cora-ML** & **CiteSeer** & **PubMed** & **MS-Academic** \\ \hline
GCN & 78.79\(\pm\)0.54 & 91.85\(\pm\)0.40 & 81.15\(\pm\)1.05 & 58.96\(\pm\)0.96 & 82.13\(\pm\)0.72 & 74.42\(\pm\)0.73 & 76.89\(\pm\)0.54 & 92.01\(\pm\)0.08 \\
V.GCN & 78.32\(\pm\)0.78 & 91.06\(\pm\)0.44 & 81.41\(\pm\)1.47 & 58.71\(\pm\)0.76 & 82.61\(\pm\)0.76 & 74.07\(\pm\)0.69 & 76.70\(\pm\)0.63 & 91.83\(\pm\)0.16 \\
GCN-Cheby & 80.33\(\pm\)0.97 & 91.01\(\pm\)0.56 & 91.56\(\pm\)0.75 & 77.31\(\pm\)1.18 & 82.97\(\pm\)0.67 & 72.21\(\pm\)0.54 & 75.01\(\pm\)0.74 & OOM \\
GAT & 79.97\(\pm\)0.58 & 90.16\(\pm\)0.52 & 92.38\(\pm\)0.14 & 79.04\(\pm\)0.58 & 83.53\(\pm\)0.16 & 72.17\(\pm\)0.73 & 77.85\(\pm\)0.26 & 89.47\(\pm\)0.20 \\
FAGCN & 79.98\(\pm\)0.89 & 90.38\(\pm\)0.86 & 90.06\(\pm\)1.04 & 80.41\(\pm\)0.62 & 83.72\(\pm\)0.64 & 71.97\(\pm\)0.15 & 77.00\(\pm\)0.84 & 90.53\(\pm\)0.20 \\
SGC & 79.27\(\pm\)1.07 & 91.75\(\pm\)0.48 & 90.17\(\pm\)0.75 & 77.48\(\pm\)0.79 & 83.65\(\pm\)1.16 & 73.16\(\pm\)1.07 & 79.30\(\pm\)0.60 & 89.79\(\pm\)0.83 \\
MixupForGraph & 74.29\(\pm\)0.71 & 89.33\(\pm\)0.59 & 87.28\(\pm\)1.08 & 59.65\(\pm\)0.71 & 71.78\(\pm\)0.74 & 68.68\(\pm\)0.91 & 72.70\(\pm\)0.40 & 85.59\(\pm\)1.26 \\
JK-Net & 79.62\(\pm\)0.58 & 90.56\(\pm\)0.56 & 91.68\(\pm\)0.82 & 75.95\(\pm\)1.07 & 82.93\(\pm\)0.38 & 72.78\(\pm\)0.44 & 76.54\(\pm\)0.57 & 89.08\(\pm\)0.05 \\ \hline \end{tabular}
\end{table} Table 3: Results of using \(20\) samples per cluster for all datasets.

**Analysis of the Feature Alignment Rule.** ADAGCN's inefficient exploitation of graph data was mentioned earlier. We execute a two-part experiment to illustrate that our feature alignment rule can better utilize graph attributes. First, we show that only the training set's attributes are effectively utilized in the initial layer, regardless of whether the input is the whole attribute matrix. As demonstrated in Figure 2, in ADAGCN the accuracy of using all samples is virtually the same as using only the training set. Also, considering the size of the training set, it is plausible that adding more labeled data enhances model outcomes. Then, we compare ADAGCN's and GANN's first-layer testing accuracy on Cora-ML and CiteSeer to demonstrate GANN's efficiency. Figure 3 shows the results: the attribute information can be fully exploited during the training of GANN's first layer.

**Analysis of the Cluster Center Alignment Rule.** As indicated in the limitations of ADAGCN, the node representations created at deeper training layers tend to be over-smoothed. The explanation is that the higher orders of the adjacency matrix are no longer sparse; Figure 4 shows that after several adjacency matrix multiplications, it becomes fully connected. In Figure 5, we display the testing accuracy for varying numbers of layers, the validation set accuracy, and the loss curves for varying numbers of iterations on Cora-ML. Our results show that GANN maintains precision at deeper training layers.

Figure 4: From left to right, the one-hop to five-hop densities of the adjacency matrix are shown in order.

**Analysis of the Objective Function.** In this subsection, we validate the loss in Eq.(17) by evaluating the effects of the three rules' objective functions on the model's output. We perform experiments on the Cora-ML and CiteSeer datasets and report the model's test accuracy with standard deviation error bars. From Figure 7, it is clear that the model using the feature alignment rule alone is comparable to the final performance of GANN. However, when either of the other two rules is used alone, the inaccuracy of the results is substantially bigger.
Figure 5: Experiments are conducted on Cora-ML; from left to right, we display the testing accuracy for various numbers of layers, the validation set accuracy, and the loss curve comparison plots. Figure 3: The accuracy on the test set over \(10\) random seeds is reported. Figure 6: For the sensitivity analysis of \(\beta\) and \(\gamma\), we give the average testing accuracy over \(10\) random seeds and display error bars based on standard deviations. Figure 7: We report the average testing accuracy over 5 random seeds. Types 1, 2, 3, and 4 respectively indicate the feature alignment rule alone, the cluster center alignment rule alone, the minimum entropy alignment rule alone, and the full GANN model.

### Sensitivity Analysis

To investigate the model's parameters and structure in depth, we experimentally analyze the key training parameters and the MLP structure in this subsection.

**Parameter Analysis.** We investigate four critical training parameters: the minimum entropy alignment rule loss factor \(\gamma\), the feature alignment rule loss factor \(\beta\), the cluster center alignment noise factor \(\lambda\), and \(topk\). We analyze \(\gamma\) and \(\beta\) on Cora-ML and CiteSeer with parameter values from \(0.3\) to \(1.0\); as shown in Figure 6, the optimal values can be determined from these sweeps. We also compare \(topk\) and \(\lambda\) over the grid \([1,10,100,1000]\). As shown in Figure 8, neither parameter significantly affects the model's results. We attribute this to data standardization: since these two parameters are not connected to the MLP layer that generates the embedding, altering them has little effect on the final output. We also evaluate the MLP layers themselves, as different topologies may result in substantial experimental variation.

**Model Structure Analysis.** As indicated above, biasing the MLP layers affects the model output. To investigate the influence of bias on the generated representations, we introduce bias into the two MLP layers of GANN. We conduct experiments on four datasets with \(10\) random seeds, where types \(1\), \(2\), \(3\), and \(4\) signify applying bias in both layers, in the first layer only, in the second layer only, and in neither layer. Figure 9 shows that adding bias to the first MLP layer has minimal influence on the model and even helps on the Cora-ML dataset. However, introducing bias into the second MLP layer causes a major deviation in the results, even though the bias is initialized to zero. This is because the second MLP layer classifies the node embedding; adding bias there distorts the final classification probabilities and harms training.

### Complexity Analysis

In this section, we analyze the model's time and memory requirements. The results for time consumption are shown in Figure 10. We run experiments with 10 baselines on 5 datasets, each trained for 1000 epochs under identical conditions, and find that GANN requires less time per epoch than the existing state-of-the-art methods. 
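The root-cause analysis below attributes this speed to precomputing the graph propagation once and training only a small MLP. A minimal sketch of such a pipeline, assuming PyTorch; the propagation depth and layer sizes are illustrative (the \(128\) hidden nodes match the setting of Figure 10), and the bias placement follows the structure analysis above:

```python
import torch
import torch.nn as nn


def propagate_once(adj, features, hops=2):
    """Precompute A^hops @ X a single time and reuse it in every epoch."""
    out = features
    for _ in range(hops):
        out = adj @ out
    return out


class TwoLayerMLP(nn.Module):
    """Plain two-layer MLP; no bias in the classification layer,
    since the structure analysis found second-layer bias harmful."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim, bias=True)
        self.fc2 = nn.Linear(hidden_dim, num_classes, bias=False)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


# Toy usage: 100 nodes, 16 input features, 128 hidden nodes, 3 classes.
adj = torch.eye(100)  # stands in for a normalized adjacency matrix
x = propagate_once(adj, torch.randn(100, 16))
logits = TwoLayerMLP(16, 128, 3)(x)
```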
We attribute this efficiency to two root causes: 1) instead of using the inefficient and time-consuming GCN structure for feature extraction, we multiply the adjacency matrix with the feature matrix directly, compute the product just once, and store it for reuse, which significantly decreases the time required; 2) the simple two-layer MLP keeps the model compact and efficient to train.

The GPU memory consumption results are shown in Figure 11. We run eight baselines on five datasets and record the maximum memory used to train each model under the same conditions. According to the results, GANN's GPU memory consumption is comparable to that of the other models, especially ADAGCN. We attribute this to two factors: 1) GANN uses an extremely simple model structure; 2) GANN trains layers iteratively, which removes the memory cost of stacking additional layers.

Figure 8: Parameter sensitivity analysis of \(topk\) and \(\lambda\) on different datasets. Figure 10: The time required by various models to complete one epoch of training under identical conditions. All models have two layers, and hidden layers have \(128\) nodes. Figure 9: Testing accuracy scatter plots for the bias modification of the MLP layers.

### Visualization Experiments

In this section, we visualize the models from two complementary angles to further demonstrate GANN's superiority. We select seven models, namely GCN, FAGCN, MixupForGraph, PPNP, GWNN, ADAGCN, and GANN, evaluated on the ACM and DBLP datasets. 1) To show how the node representations separate into clusters, we display the output of the \(t\)-SNE algorithm (Van der Maaten and Hinton, 2008) in Figure 12. The results show that in the plots generated by GCN, PPNP and GWNN, the boundaries between categories are not obvious and the clusters blend together, indicating that the quality of the node representations is low. Our proposed GANN clearly improves the quality of the node representations by effectively separating the categories from each other, which in turn enables better node classification. 2) To visualize the oversmoothing issue in the graph node classification task, we generate heat maps of the sample similarity matrices corresponding to the node representations. All samples are sorted by category so that examples from the same cluster are adjacent. Figure 13 demonstrates that the oversmoothing issue is particularly severe for GCN and ADAGCN. Our proposed GANN learns more distinctive node representations, which significantly alleviates oversmoothing.

## 5 Conclusion

In this paper, we study the semi-supervised learning problem on graphs and propose GANN, a simple and efficient framework built on three alignment rules: the feature alignment rule, the cluster center alignment rule, and the minimum entropy alignment rule. Through ablation experiments we demonstrate that GANN fully exploits the feature and higher-order neighbor information of the data to generate more discriminative node representations, and that it alleviates the oversmoothing problem. On eight datasets, GANN outperforms thirteen relevant models and is superior in time and memory efficiency to most of them. In particular, at very small labeling rates, our model achieves the best results using only \(7\) labeled samples per cluster. In conclusion, the simple and effective principles underlying GANN may serve as a paradigm for future GNN models, particularly those for semi-supervised learning. 
In future work, we intend to further enhance the scalability of GANN through data augmentation. We will also consider adding decoders to the second half of the model to study the effect of combining other GNN models with the MLP. Equally essential is the handling of large-scale datasets; we will add such datasets to the released code and evaluate how well the model performs on them.

Figure 11: GPU memory consumption of the models during training. All models use two layers with \(128\) hidden-layer nodes. Figure 12: \(t\)-SNE visualization of seven algorithms on two benchmark datasets. The first row corresponds to the ACM dataset, and the second row to the DBLP dataset. Figure 13: Visualization of sample similarity matrices on two datasets. The first and second rows correspond to ACM and DBLP, respectively.
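The similarity-matrix heat maps of Figure 13 can be reproduced with a few lines. A minimal sketch, assuming node embeddings as a NumPy array and integer class labels; the function name is ours:

```python
import numpy as np
import matplotlib.pyplot as plt


def similarity_heatmap(embeddings, labels, path="similarity.png"):
    """Plot the cosine-similarity matrix of embeddings sorted by class."""
    order = np.argsort(labels)
    z = embeddings[order]
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T  # pairwise cosine similarities
    plt.imshow(sim, cmap="viridis")
    plt.colorbar()
    plt.savefig(path, dpi=150)
    plt.close()


# Toy usage: three classes of 50 nodes each with class-dependent offsets;
# block structure along the diagonal indicates discriminative embeddings,
# while a uniformly bright matrix indicates oversmoothing.
labels = np.repeat([0, 1, 2], 50)
emb = np.random.randn(150, 64) + labels[:, None]
similarity_heatmap(emb, labels)
```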
2302.01961
Asymmetric Certified Robustness via Feature-Convex Neural Networks
Recent works have introduced input-convex neural networks (ICNNs) as learning models with advantageous training, inference, and generalization properties linked to their convex structure. In this paper, we propose a novel feature-convex neural network architecture as the composition of an ICNN with a Lipschitz feature map in order to achieve adversarial robustness. We consider the asymmetric binary classification setting with one "sensitive" class, and for this class we prove deterministic, closed-form, and easily-computable certified robust radii for arbitrary $\ell_p$-norms. We theoretically justify the use of these models by characterizing their decision region geometry, extending the universal approximation theorem for ICNN regression to the classification setting, and proving a lower bound on the probability that such models perfectly fit even unstructured uniformly distributed data in sufficiently high dimensions. Experiments on Malimg malware classification and subsets of MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex classifiers attain state-of-the-art certified $\ell_1$-radii as well as substantial $\ell_2$- and $\ell_{\infty}$-radii while being far more computationally efficient than any competitive baseline.
Samuel Pfrommer, Brendon G. Anderson, Julien Piet, Somayeh Sojoudi
2023-02-03T19:17:28Z
http://arxiv.org/abs/2302.01961v2
# Asymmetric Certified Robustness via Feature-Convex Neural Networks ###### Abstract Recent works have introduced input-convex neural networks (ICNNs) as learning models with advantageous training, inference, and generalization properties linked to their convex structure. In this paper, we propose a novel _feature-convex neural network_ architecture as the composition of an ICNN with a Lipschitz feature map in order to achieve adversarial robustness. We consider the asymmetric binary classification setting with one "sensitive" class, and for this class we prove deterministic, closed-form, and easily-computable certified robust radii for arbitrary \(\ell_{p}\)-norms. We theoretically justify the use of these models by characterizing their decision region geometry, extending the universal approximation theorem for ICNN regression to the classification setting, and proving a lower bound on the probability that such models perfectly fit even unstructured uniformly distributed data in sufficiently high dimensions. Experiments on Malimg malware classification and subsets of MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex classifiers attain state-of-the-art certified \(\ell_{1}\)-radii as well as substantial \(\ell_{2}\)- and \(\ell_{\infty}\)-radii while being far more computationally efficient than any competitive baseline. 1 Footnote 1: Code for reproducing our results is available on GitHub. ## 1 Introduction Although neural networks achieve state-of-the-art performance across a range of machine learning tasks, researchers have shown that they can be highly sensitive to adversarial inputs that are maliciously designed to fool the model (Biggio et al., 2013; Szegedy et al., 2014; Nguyen et al., 2015). For example, the works Eykholt et al. (2018) and Liu et al. (2019) show that small physical and digital alterations of vehicle traffic signs can cause image classifiers to fail. In safety-critical applications of neural networks, such as autonomous driving (Bojarski et al., 2016; Wu et al., 2017) and medical diagnostics (Amato et al., 2013; Yadav & Jadhav, 2019), this sensitivity to adversarial inputs is clearly unacceptable. A line of heuristic defenses against adversarial inputs has been proposed, only to be defeated by stronger attack methods (Carlini & Wagner, 2017; Kurakin et al., 2017; Athalye et al., 2018; Uesato et al., 2018; Madry et al., 2018). This has led researchers to develop certifiably robust methods that provide a provable guarantee of safe performance. The strength of such certificates can be highly dependent on network architecture; general off-the-shelf models tend to have large Lipschitz constants, leading to loose Lipschitz-based robustness guarantees (Hein & Andriushchenko, 2017; Fazlyab et al., 2019; Yang et al., 2020). Consequently, lines of work that impose certificate-amenable structures onto networks have been popularized, e.g., specialized model layers (Trockman & Kolter, 2021; Zhang et al., 2021), randomized smoothing-based networks (Li et al., 2019; Cohen et al., 2019; Zhai et al., 2020; Yang et al., 2020; Anderson & Sojoudi, 2022), and \(\mathrm{ReLU}\) networks that are certified using convex optimization and mixed-integer programming (Wong & Kolter, 2018; Weng et al., 2018; Raghunathan et al., 2018; Anderson et al., 2020; Ma & Sojoudi, 2021). The first category only directly certifies against one specific choice of norm, producing poorly scaled radii for other norms in high dimensions. 
The latter two method families incur serious computational challenges: randomized smoothing typically requires the classification of thousands of randomly perturbed samples per input, while optimization-based solutions scale poorly to large networks. Despite the moderate success of these certifiable classifiers, conventional assumptions in the literature are unnecessarily restrictive for most practical adversarial settings. Specifically, most works consider a multiclass setting where certificates are desired for inputs of any class. By contrast, many real-world adversarial attacks involve a binary setting with only one _sensitive class_ that must be made robust to adversarial perturbations. Consider the representative problem of spam classification; a malicious adversary crafting a spam email will always attempt to fool the classifier toward the "not-spam" class--never conversely (Kuchipudi et al., 2020). Similar logic applies for a range of applications, including malware detection (Grosse et al., 2017), malicious network traffic filtering (Sadeghzadeh et al., 2021), fake news and social media bot detection (Cresci et al., 2021), hate speech removal (Grolman et al., 2022), insurance claims filtering (Finlayson et al., 2019), and financial fraud detection (Cartella et al., 2021). These applications motivate us to introduce a narrower, asymmetric robustness problem and develop a novel classifier architecture to address this challenge. ### Problem Statement and Contributions This work considers the problem of _asymmetric robustness certification_. Specifically, we assume a classification setting wherein one class is "sensitive" and seek to certify that, if some input is classified into this sensitive class, then adversarial perturbations of sufficiently small magnitude cannot change the prediction. To tackle the asymmetric robustness certification problem and attain state-of-the-art certified radii, we propose _feature-convex neural networks_, and achieve the following contributions in doing so: 1. We provide easily-computable class \(1\) certified robust radii for feature-convex classifiers with respect to arbitrary \(\ell_{p}\)-norms. 2. We characterize the decision region geometry of feature-convex classifiers, extend the universal approximation theorem for input-convex \(\mathrm{ReLU}\) neural networks to the classification setting, and show that, in high dimensions, feature-convex classifiers can perfectly fit even unstructured, uniformly distributed datasets. 3. We evaluate against several baselines on MNIST 3-8 (LeCun, 1998), Malimg malware classification (Nataraj et al., 2011), Fashion-MNIST shirts (Xiao et al., 2017), and CIFAR-10 cats-dogs (Krizhevsky et al., 2009), and show that our classifiers yield state-of-the-art certified robust radii. ### Related Works **Certified adversarial robustness.** Three of the most popular approaches for generating robustness certificates are Lipschitz-based bounds, randomized smoothing, and convex optimization. Successfully bounding the Lipschitz constant of a neural network can give rise to an efficient certified radius of robustness, e.g., via the methods proposed in Hein & Andriushchenko (2017). However, in practice such Lipschitz constants are too large to yield meaningful certificates, or it is computationally burdensome to compute or bound the Lipschitz constants in the first place (Virmaux & Scaman, 2018; Fazlyab et al., 2019; Yang et al., 2020). 
To overcome these computational limitations, certain methods impose special structures on their model layers to provide immediate Lipschitz guarantees. Specifically, Trockman & Kolter (2021) uses the Cayley transform to derive convolutional layers with immediate \(\ell_{2}\)-Lipschitz constants, and Zhang et al. (2021) introduces an \(\ell_{\infty}\)-distance neuron that provides similar Lipschitz guarantees with respect to the \(\ell_{\infty}\)-norm. We compare against both of these approaches in our experiments. Randomized smoothing, popularized by Lecuyer et al. (2019); Li et al. (2019); Cohen et al. (2019), uses the expected prediction of a model when subjected to Gaussian input noise. These works derive \(\ell_{2}\)-norm balls around inputs on which the smoothed classifier remains constant, but suffer from nondeterminism and high computational burden. Follow-up works generalize randomized smoothing to certify input regions defined by different metrics, e.g., Wasserstein, \(\ell_{1}\)-, and \(\ell_{\infty}\)-norms (Levine & Feizi, 2020; Teng et al., 2020; Yang et al., 2020). Other works focus on enlarging the certified regions by optimizing the smoothing distribution (Zhai et al., 2020; Eiras et al., 2021; Anderson et al., 2022), incorporating adversarial training into the base classifier (Salman et al., 2019; Zhang et al., 2020), and employing dimensionality reduction at the input (Pfrommer et al., 2022). Convex optimization-based certificates seek to derive a convex over-approximation of the set of possible outputs when the input is subject to adversarial perturbations, and show that this over-approximation is safe. Various over-approximations have been proposed, e.g., based on linear programming and bounding (Wong & Kolter, 2018; Weng et al., 2018), semidefinite programming (Raghunathan et al., 2018), and branch-and-bound approaches (Anderson et al., 2020; Ma & Sojoudi, 2021; Wang et al., 2021). The \(\alpha,\beta\)-CROWN method (Wang et al., 2021) uses an efficient bound propagation to linearly bound the neural network output in conjunction with a per-neuron branching heuristic to achieve state-of-the-art certified radii, winning both the 2021 and the 2022 VNN certification competitions (Bak et al., 2021; Muller et al., 2022). In contrast to these optimization-based methods, our approach in this paper is to directly exploit the convex structure of input-convex neural networks to derive closed-form robustness certificates for our proposed architecture, altogether avoiding the common efficiency-tightness tradeoffs of prior methods; we find that our approach competes with and even outperforms the state-of-the-art \(\alpha,\beta\)-CROWN in several settings. **Input-convex neural networks.** Input-convex neural networks, popularized by Amos et al. (2017), are a class of parameterized models whose input-output mapping is convex (in at least a subset of the input variables). In Amos et al. (2017), the authors develop tractable methods to learn an input-convex neural network \(f\colon\mathbb{R}^{d}\times\mathbb{R}^{n}\to\mathbb{R}\) and show that utilizing it for the convex optimization-based inference \(x\mapsto\arg\min_{y\in\mathbb{R}^{n}}f(x,y)\) yields state-of-the-art results in a variety of domains. Subsequent works propose novel applications of input-convex neural networks in areas such as optimal control and reinforcement learning (Chen et al., 2019; Zeng et al., 2022), optimal transport (Makkuva et al., 2020), and optimal power flow (Chen et al., 2020; Zhang et al., 2021). 
Other works have generalized input-convex networks to input-invex networks (Nesterov et al., 2022) and global optimization networks (Zhao et al., 2022) so as to maintain the benign optimization properties of input-convexity. The authors of Siahkamari et al. (2022) present algorithms for efficiently learning convex functions, while Chen et al. (2019); Kim and Kim (2022) derive universal approximation theorems for input-convex neural networks in the convex regression setting. The work Sivaprasad et al. (2021) shows that input-convex neural networks do not suffer from overfitting, and generalize better than multilayer perceptrons on common benchmark datasets. In this work, we incorporate input-convex neural networks as a part of our overall feature-convex architecture, and we leverage convexity properties to derive our novel robustness guarantees. ### Notations The sets of natural numbers and real numbers are denoted by \(\mathbb{N}\) and \(\mathbb{R}\), respectively. The \(d\times d\) identity matrix is written as \(I_{d}\in\mathbb{R}^{d\times d}\), and the identity map on \(\mathbb{R}^{d}\) is denoted by \(\operatorname{Id}\colon x\mapsto x\). For \(A\in\mathbb{R}^{n\times d}\), we define \(|A|\in\mathbb{R}^{n\times d}\) by \(|A|_{ij}=|A_{ij}|\) for all \(i,j\), and we write \(A\geq 0\) if and only if \(A_{ij}\geq 0\) for all \(i,j\). The \(\ell_{p}\)-norm on \(\mathbb{R}^{d}\) is given by \(\|\cdot\|_{p}\colon x\mapsto(|x_{1}|^{p}+\cdots+|x_{d}|^{p})^{1/p}\) for \(p\in[1,\infty)\) and by \(\|\cdot\|_{p}\colon x\mapsto\max\{|x_{1}|,\ldots,|x_{d}|\}\) for \(p=\infty\). The dual norm of \(\|\cdot\|_{p}\) is denoted by \(\|\cdot\|_{p,*}\). The convex hull of a set \(X\subseteq\mathbb{R}^{d}\) is denoted by \(\operatorname{conv}(X)\). The subdifferential of a convex function \(g\colon\mathbb{R}^{d}\to\mathbb{R}\) at \(x\in\mathbb{R}^{d}\) is denoted by \(\partial g(x)\). If \(\epsilon\colon\Omega\to\mathbb{R}^{d}\) is a random variable on a probability space \((\Omega,\mathcal{B},\mathbb{P})\) and \(P\) is a predicate defined on \(\mathbb{R}^{d}\), then we write \(\mathbb{P}(P(\epsilon))\) to mean \(\mathbb{P}(\{\omega\in\Omega:P(\epsilon(\omega))\})\). Lebesgue measure on \(\mathbb{R}^{d}\) is denoted by \(m\). We define \(\operatorname{ReLU}\colon\mathbb{R}\to\mathbb{R}\) as \(\operatorname{ReLU}(x)=\max\{0,x\}\), and if \(x\in\mathbb{R}^{d}\), \(\operatorname{ReLU}(x)\) denotes \((\operatorname{ReLU}(x_{1}),\ldots,\operatorname{ReLU}(x_{d}))\). For a function \(\varphi\colon\mathbb{R}^{d}\to\mathbb{R}^{q}\) and \(p\in[1,\infty]\), we define \(\operatorname{Lip}_{p}(\varphi)=\inf\{K\geq 0\ :\ \|\varphi(x)-\varphi(x^{\prime})\|_{p} \leq K\|x-x^{\prime}\|_{p}\text{ for all }x,x^{\prime}\in\mathbb{R}^{d}\}\), and if \(\operatorname{Lip}_{p}(\varphi)<\infty\) we say that \(\varphi\) is Lipschitz continuous with constant \(\operatorname{Lip}_{p}(\varphi)\) (with respect to the \(\ell_{p}\)-norm). ## 2 Feature-Convex Classifiers Let \(d,q\in\mathbb{N}\) and \(p\in[1,\infty]\) be fixed, and consider the task of classifying inputs from a subset of \(\mathbb{R}^{d}\) into a fixed set of classes \(\mathcal{Y}\subseteq\mathbb{N}\). In what follows, we restrict to the binary setting where \(\mathcal{Y}=\{1,2\}\) and class \(1\) is the sensitive class for which we desire robustness certificates (Section 1). 
In Appendix A, we briefly discuss avenues to generalize our framework to multiclass settings using one-versus-all and sequential classification methodologies and provide a proof-of-concept example for the Malimg dataset. We now formally define the classifiers considered in this work. **Definition 2.1**.: Let \(f\colon\mathbb{R}^{d}\to\{1,2\}\) be defined by \[f(x)=\begin{cases}1&\text{if }g(\varphi(x))>0,\\ 2&\text{if }g(\varphi(x))\leq 0,\end{cases}\] for some \(\varphi\colon\mathbb{R}^{d}\to\mathbb{R}^{q}\) and some \(g\colon\mathbb{R}^{q}\to\mathbb{R}\). Then \(f\) is said to be a _feature-convex classifier_ if the _feature map_\(\varphi\) is Lipschitz continuous with constant \(\operatorname{Lip}_{p}(\varphi)<\infty\) and \(g\) is a convex function. We denote the class of all feature-convex classifiers by \(\mathcal{F}\). Furthermore, for \(q=d\), the subclass of all feature-convex classifiers with \(\varphi=\operatorname{Id}\) is denoted by \(\mathcal{F}_{\operatorname{Id}}\). As we will see in Section 3.1, defining our classifiers using the composition of a convex classifier with a Lipschitz feature map enables the fast computation of certified regions in the input space. This naturally arises from the global underestimation of convex functions by first-order Taylor approximations. Since sublevel sets of such \(g\) are restricted to be convex, the feature map \(\varphi\) is included to increase the representation power and practical performance of our architecture (see Appendix B for a motivating example). In practice, we find that it suffices to choose \(\varphi\) to be a simple map with a small closed-form Lipschitz constant. For example, in our experiments that follow with \(q=2d\), we choose \(\varphi(x)=(x-\mu,|x-\mu|)\) with a constant channel-wise dataset mean \(\mu\), yielding \(\operatorname{Lip}_{1}(\varphi)\leq 2\), \(\operatorname{Lip}_{2}(\varphi)\leq\sqrt{2}\), and \(\operatorname{Lip}_{\infty}(\varphi)\leq 1\). Although this particular choice of \(\varphi\) is convex, the function \(g\) need not be monotone, and therefore the composition \(g\circ\varphi\) is nonconvex in general. The prediction and certification of feature-convex classifiers are illustrated in Figure 1. In practice, we implement feature-convex classifiers using parameterizations of \(g\), which we now make explicit. Following Amos et al. (2017), we instantiate \(g\) as a neural network with nonnegative weight matrices and nondecreasing convex nonlinearities. Specifically, we consider \(\operatorname{ReLU}\) nonlinearities, which is not restrictive, as our universal approximation result in Theorem 3.6 proves. **Definition 2.2**.: A _feature-convex \(\operatorname{ReLU}\) neural network_ is a function \(\hat{f}\colon\mathbb{R}^{d}\to\{1,2\}\) defined by \[\hat{f}(x)=\begin{cases}1&\text{if }\hat{g}(\varphi(x))>0,\\ 2&\text{if }\hat{g}(\varphi(x))\leq 0,\end{cases}\] with \(\varphi\colon\mathbb{R}^{d}\to\mathbb{R}^{q}\) Lipschitz continuous with constant \(\operatorname{Lip}_{p}(\varphi)<\infty\) and \(\hat{g}\colon\mathbb{R}^{q}\to\mathbb{R}\) defined by \[x^{(1)} =\operatorname{ReLU}\left(A^{(1)}x^{(0)}+b^{(1)}\right),\] \[x^{(l)} =\operatorname{ReLU}\left(A^{(l)}x^{(l-1)}+b^{(l)}+C^{(l)}x^{(0) }\right),\] \[\hat{g}(x^{(0)}) =A^{(L)}x^{(L-1)}+b^{(L)}+C^{(L)}x^{(0)},\] for all \(l\in\{2,3,\ldots,L-1\}\) for some \(L\in\mathbb{N}\), \(L>1\), and for some consistently sized matrices \(A^{(l)},C^{(l)}\) and vectors \(b^{(l)}\) satisfying \(A^{(l)}\geq 0\) for all \(l\in\{2,3,\ldots,L\}\). 
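Definition 2.2 maps directly onto code. The following is a minimal sketch, assuming PyTorch; the class and method names are ours, the feature map is the \(\varphi(x)=(x-\mu,|x-\mu|)\) described above, and convexity is maintained by projecting the \(A^{(l)}\) weights for \(l\geq 2\) onto the nonnegative orthant, as in the experiments of Section 4:

```python
import torch
import torch.nn as nn


class FeatureConvexNet(nn.Module):
    """g(phi(x)) with g an input-convex ReLU network as in Definition 2.2."""

    def __init__(self, d, widths, mu):
        super().__init__()
        self.mu = mu  # channel-wise dataset mean, broadcastable to inputs
        q = 2 * d     # phi(x) = (x - mu, |x - mu|) doubles the dimension
        dims = [q] + list(widths) + [1]
        self.A = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1], bias=True) for i in range(len(dims) - 1)]
        )
        # Passthrough weights C^(l) from the input features to later layers.
        self.C = nn.ModuleList(
            [nn.Linear(q, dims[i + 1], bias=False) for i in range(1, len(dims) - 1)]
        )

    def phi(self, x):
        return torch.cat([x - self.mu, torch.abs(x - self.mu)], dim=-1)

    def forward(self, x):
        z0 = self.phi(x)
        z = torch.relu(self.A[0](z0))
        for l in range(1, len(self.A)):
            z = self.A[l](z) + self.C[l - 1](z0)
            if l < len(self.A) - 1:  # the final layer stays affine
                z = torch.relu(z)
        return z.squeeze(-1)  # the convex logit g(phi(x))

    def project_convex(self):
        # Convexity requires A^(l) >= 0 for l >= 2; call after each update.
        for layer in self.A[1:]:
            layer.weight.data.clamp_(min=0.0)
```

During training, `project_convex()` would be invoked after each weight update, mirroring the nonnegative-orthant projection described in the experiments that follow.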
Going forward, we denote the class of all feature-convex \(\operatorname{ReLU}\) neural networks by \(\hat{\mathcal{F}}\). Furthermore, if \(q=d\), the subclass of all feature-convex \(\operatorname{ReLU}\) neural networks with \(\varphi=\operatorname{Id}\) is denoted by \(\hat{\mathcal{F}}_{\operatorname{Id}}\), which corresponds to the input-convex \(\operatorname{ReLU}\) neural networks proposed in Amos et al. (2017). For every \(\hat{f}\in\hat{\mathcal{F}}\), it holds that \(\hat{g}\) is a convex function due to the rules for composition and nonnegatively weighted sums of convex functions (Boyd & Vandenberghe, 2004, Section 3.2), and therefore \(\hat{\mathcal{F}}\subseteq\mathcal{F}\) and \(\hat{\mathcal{F}}_{\operatorname{Id}}\subseteq\mathcal{F}_{\operatorname{Id}}\). The "passthrough" weights \(C^{(l)}\) were originally included by Amos et al. (2017) to improve the practical performance of the architecture. In some of our more challenging experiments that follow, we remove these passthrough operations and instead add residual identity mappings between hidden layers, which also preserves convexity. We note that the transformations defined by \(A^{(l)}\) and \(C^{(l)}\) can be taken to be convolutions, which are nonnegatively weighted linear operations and thus preserve convexity (Amos et al., 2017). ## 3 Certification and Analysis of Feature-Convex Classifiers We begin by deriving asymmetric robustness certificates for our feature-convex classifier in Section 3.1. In Section 3.2, we introduce convexly separable sets and theoretically analyze the clean performance of our classifiers through this lens. Namely, we show that there exists a feature-convex classifier with \(\varphi=\operatorname{Id}\) that perfectly classifies the CIFAR-10 cats-dogs training dataset. We show that this strong learning capacity generalizes by proving that feature-convex classifiers can perfectly fit high-dimensional uniformly distributed data with high probability. Proofs are deferred to the appendices. ### Certified Robustness Guarantees In this section, we address the asymmetric certified robustness problem by providing class \(1\) robustness certificates for feature-convex classifiers \(f\in\mathcal{F}\). Such robustness corresponds to proving the absence of false negatives in the case that class \(1\) represents positives and class \(2\) represents negatives. For example, if in a malware detection setting class \(1\) represents malware and class \(2\) represents non-malware, the following certificate gives a lower bound on the magnitude of the malware file alteration needed in order to misclassify the file as non-malware. **Theorem 3.1**.: _Let \(f\in\mathcal{F}\) be as in Definition 2.1 and let \(x\in f^{-1}(\{1\})=\{x^{\prime}\in\mathbb{R}^{d}:f(x^{\prime})=1\}\). If \(\nabla g(\varphi(x))\in\mathbb{R}^{q}\) is a nonzero subgradient of the convex function \(g\) at \(\varphi(x)\), then \(f(x+\delta)=1\) for all \(\delta\in\mathbb{R}^{d}\) such that_ \[\|\delta\|_{p}<r(x)\coloneqq\frac{g(\varphi(x))}{\operatorname{Lip}_{p}( \varphi)\|\nabla g(\varphi(x))\|_{p,*}}.\] _Remark 3.2_.: For \(f\in\mathcal{F}\) and \(x\in f^{-1}(\{1\})\), a subgradient \(\nabla g(\varphi(x))\in\mathbb{R}^{q}\) of \(g\) always exists at \(\varphi(x)\), since the subdifferential \(\partial g(\varphi(x))\) is a nonempty closed bounded convex set, as \(g\) is a finite convex function on all of \(\mathbb{R}^{q}\)--see Theorem 23.4 in Rockafellar (1970) and the discussion thereafter. 
Furthermore, if \(f\) is not a constant classifier, such a subgradient \(\nabla g(\varphi(x))\) must necessarily be nonzero, since, if it were zero, then \(g(y)\geq g(\varphi(x))+\nabla g(\varphi(x))^{\top}(y-\varphi(x))=g(\varphi(x))>0\) for all \(y\in\mathbb{R}^{q}\), implying that \(f\) identically predicts class \(1\), which is a contradiction. Thus, the certified radius given in Theorem 3.1 is always well-defined in practical settings. Figure 1: Illustration of feature-convex classifiers and their robustness certification. Since \(g\) is convex, it can be globally underapproximated by its tangent plane at \(\varphi(x)\), yielding certified sets for all norm balls in the higher-dimensional feature space. Lipschitzness of \(\varphi\) then yields appropriately scaled certificates in the original input space. Theorem 3.1 is derived from the fact that a convex function is globally underapproximated by any tangent plane. The nonconstant terms in Theorem 3.1 afford an intuitive interpretation: the radius scales proportionally to the confidence \(g(\varphi(x))\) and inversely with the input sensitivity \(\|\nabla g(\varphi(x))\|_{p,*}\). In practice, \(\operatorname{Lip}_{p}(\varphi)\) can be made quite small as mentioned in Section 2, and furthermore the subgradient \(\nabla g(\varphi(x))\) is easily evaluated as the Jacobian of \(g\) at \(\varphi(x)\) using standard automatic differentiation packages. This provides fast, deterministic class \(1\) certificates for any \(\ell_{p}\)-norm without modification of the feature-convex network's training procedure or architecture. ### Representation Power Characterization We now restrict our analysis to the class \(\mathcal{F}_{\mathrm{Id}}\) of feature-convex classifiers with an identity feature map. This can be equivalently considered as the class of classifiers for which the input-to-logit map is convex. We therefore refer to models in \(\mathcal{F}_{\mathrm{Id}}\) as _input-convex classifiers_. While the feature map \(\varphi\) is useful in boosting the practical performance of our classifiers, the theoretical results in this section suggest that there is significant potential in using input-convex classifiers as a standalone solution. **Classifying convexly separable sets.** We begin by introducing the notion of convexly separable sets, which are intimately related to decision regions representable by the class \(\mathcal{F}_{\mathrm{Id}}\). **Definition 3.3**.: Let \(X_{1},X_{2}\subseteq\mathbb{R}^{d}\). The ordered pair \((X_{1},X_{2})\) is said to be _convexly separable_ if there exists a nonempty closed convex set \(X\subseteq\mathbb{R}^{d}\) such that \(X_{2}\subseteq X\) and \(X_{1}\subseteq\mathbb{R}^{d}\setminus X\). Notice that it may be the case that a pair \((X_{1},X_{2})\) is convexly separable yet the pair \((X_{2},X_{1})\) is not. Although low-dimensional intuition may cause concerns regarding the convex separability of sets of binary-labeled data, we will soon see in Theorem 3.9 that, even for relatively unstructured data distributions, binary datasets are actually convexly separable in high dimensions with high probability. We now show that convexly separable datasets possess the property that they may always be perfectly fit by input-convex classifiers. **Proposition 3.4**.: _For any nonempty closed convex set \(X\subseteq\mathbb{R}^{d}\), there exists \(f\in\mathcal{F}_{\mathrm{Id}}\) such that \(X=f^{-1}(\{2\})=\{x\in\mathbb{R}^{d}:f(x)=2\}\). 
In particular, this shows that if \((X_{1},X_{2})\) is a convexly separable pair of subsets of \(\mathbb{R}^{d}\), then there exists \(f\in\mathcal{F}_{\mathrm{Id}}\) such that \(f(x)=1\) for all \(x\in X_{1}\) and \(f(x)=2\) for all \(x\in X_{2}\)._ We also show that the converse of Proposition 3.4 holds: the geometry of the decision regions of classifiers in \(\mathcal{F}_{\mathrm{Id}}\) consists of a convex set and its complement. **Proposition 3.5**.: _Let \(f\in\mathcal{F}_{\mathrm{Id}}\). The decision region under \(f\) associated to class \(2\), namely \(X\coloneqq f^{-1}(\{2\})=\{x\in\mathbb{R}^{d}:f(x)=2\}\), is a closed convex set._ Note that this is not necessarily true for our more general feature-convex architectures with \(\varphi\neq\mathrm{Id}\). We continue our theoretical analysis of input-convex classifiers by extending the universal approximation theorem for regressing upon real-valued convex functions (given in Chen et al. (2019)) to the classification setting. In particular, Theorem 3.6 below shows that any input-convex classifier \(f\in\mathcal{F}_{\mathrm{Id}}\) can be approximated arbitrarily well on any compact set by \(\mathrm{ReLU}\) neural networks with nonnegative weights. Here, "arbitrarily well" means that the set of inputs where the neural network prediction differs from that of \(f\) can be made to have arbitrarily small Lebesgue measure. **Theorem 3.6**.: _For any \(f\in\mathcal{F}_{\mathrm{Id}}\), any compact convex subset \(X\) of \(\mathbb{R}^{d}\), and any \(\epsilon>0\), there exists \(\hat{f}\in\hat{\mathcal{F}}_{\mathrm{Id}}\) such that \(m(\{x\in X:\hat{f}(x)\neq f(x)\})<\epsilon\)._ An extension of the proof of Theorem 3.6 combined with Proposition 3.4 yields that input-convex \(\mathrm{ReLU}\) neural networks can perfectly fit convexly separable sampled datasets. **Theorem 3.7**.: _If \((X_{1},X_{2})\) is a convexly separable pair of finite subsets of \(\mathbb{R}^{d}\), then there exists \(\hat{f}\in\hat{\mathcal{F}}_{\mathrm{Id}}\) such that \(\hat{f}(x)=1\) for all \(x\in X_{1}\) and \(\hat{f}(x)=2\) for all \(x\in X_{2}\)._ Theorems 3.6 and 3.7 theoretically justify the particular parameterization in Definition 2.2 for learning feature-convex classifiers to fit convexly separable data. **Empirical convex separability.** Interestingly, we find empirically that high-dimensional image training data is convexly separable. We illustrate this in Appendix D by attempting to reconstruct a CIFAR-10 cat image from a convex combination of the dogs and vice versa; the error is significantly positive for _every_ sample in the training dataset, and image reconstruction is visually poor. This fact, combined with Theorem 3.7, immediately yields the following result. **Corollary 3.8**.: _There exists \(\hat{f}\in\hat{\mathcal{F}}_{\mathrm{Id}}\) such that \(\hat{f}\) achieves perfect training accuracy for the unaugmented CIFAR-10 cats-versus-dogs dataset._ The gap between this theoretical guarantee and our practical performance is large; without the feature map, our CIFAR-10 cats-dogs classifier achieves just \(73.4\%\) training accuracy (Table 3). While high training accuracy may not necessarily imply strong test set performance, Corollary 3.8 demonstrates that the typical deep learning paradigm of overfitting to the training dataset is attainable and that there is at least substantial room for improvement in the design and optimization of input-convex classifiers (Nakkiran et al., 2021). 
We leave the challenge of overfitting to the CIFAR-10 cats-dogs training data with an input-convex classifier as an open research problem for the field. **Convex separability in high dimensions.** We conclude by investigating _why_ the convex separability property that allows for Corollary 3.8 may hold for natural image datasets. We argue that dimensionality facilitates this phenomenon by showing that data is easily separated by some \(f\in\hat{\mathcal{F}}_{\mathrm{Id}}\) when \(d\) is sufficiently large. In particular, although it may seem restrictive to rely on models in \(\hat{\mathcal{F}}_{\mathrm{Id}}\) with convex class \(2\) decision regions, we show in Theorem 3.9 below that even uninformative data distributions that are seemingly difficult to classify may be fit by such models with high probability as the dimensionality of the data increases. **Theorem 3.9**.: _Consider \(M,N\in\mathbb{N}\). Let \(X_{1}=\{x^{(1)},\ldots,x^{(M)}\}\subseteq\mathbb{R}^{d}\) and \(X_{2}=\{y^{(1)},\ldots,y^{(N)}\}\subseteq\mathbb{R}^{d}\) be samples with all elements \(x_{k}^{(i)},y_{l}^{(j)}\) drawn independently and identically from the uniform probability distribution on \([-1,1]\). Then, it holds that_ \[\mathbb{P}\left((X_{1},X_{2})\text{ is convexly separable}\right) \tag{1}\] \[\quad\geq\begin{cases}1-\left(1-\frac{M!N!}{(M+N)!}\right)^{d}& \text{for all }d\in\mathbb{N},\\ 1&\text{if }d\geq M+N.\end{cases}\] _In particular, \(\hat{\mathcal{F}}_{\mathrm{Id}}\) contains an input-convex \(\mathrm{ReLU}\) neural network that classifies all \(x^{(i)}\) into class \(1\) and all \(y^{(j)}\) into class \(2\) almost surely for sufficiently large dimensions \(d\)._ Although the uniformly distributed data in Theorem 3.9 is unrealistic in practice, the result demonstrates that the class \(\hat{\mathcal{F}}_{\mathrm{Id}}\) of input-convex \(\mathrm{ReLU}\) neural networks has sufficient complexity to fit even the most unstructured data in high dimensions. Despite this ability, researchers have found that current input-convex neural networks tend to not over-fit in practice, yielding small generalization gaps relative to conventional neural networks (Sivaprasad et al., 2021). Achieving the modern deep learning paradigm of overfitting to the training dataset with input-convex networks is an exciting open challenge (Nakkiran et al., 2021). ## 4 Experiments We first describe our baseline methods, feature-convex architecture, and class accuracy balancing procedure. Our results are then reported across a variety of datasets, with further experimental setup details deferred to Appendix E. **Baseline methods.** We consider several state-of-the-art randomized and deterministic baselines. For all datasets, we evaluate the randomized smoothing certificates of Yang et al. (2020) for the Gaussian, Laplacian, and uniform distributions trained with noise augmentation (denoted RS Gaussian, RS Laplacian, and RS Uniform, respectively), as well as the deterministic bound propagation framework \(\alpha,\beta\)-CROWN (Wang et al., 2021), which is scatter plotted since certification is only reported as a binary answer at a given radius. We also evaluate, when applicable, deterministic certified methods for each norm ball. These include the splitting-noise \(\ell_{1}\)-certificates from Levine and Feizi (2021) (denoted Splitting), the orthogonality-based \(\ell_{2}\)-certificates from Trockman and Kolter (2021) (denoted Cayley), and the \(\ell_{\infty}\)-distance-based \(\ell_{\infty}\)-certificates from Zhang et al. 
(2021) (denoted \(\ell_{\infty}\)-Net). The last two deterministic methods are not evaluated on the large-scale Malimg dataset due to their prohibitive runtime. Furthermore, the \(\ell_{\infty}\)-Net was unable to significantly outperform a random classifier on the CIFAR-10 cats-dogs dataset, and is therefore only included in the MNIST 3-8 and Fashion-MNIST shirts experiments. **Feature-convex architecture.** Our simple experiments (MNIST 3-8 and Malimg) require no feature map to achieve high accuracy (\(\varphi=\mathrm{Id}\)); the Fashion-MNIST shirts dataset also benefited minimally from the feature map inclusion. For the CIFAR-10 cats-dogs task, we let our feature map be the concatenation \(\varphi(x)=(x-\mu,|x-\mu|)\), where \(\mu\) is the channel-wise dataset mean (e.g., size \(3\) for an RGB image) broadcasted to the appropriate dimensions. Our MNIST 3-8 and Malimg architecture then consists of a simple two-hidden-layer input-convex multilayer perceptron with \((n_{1},n_{2})=(200,50)\) hidden features, \(\mathrm{ReLU}\) nonlinearities, and passthrough weights. For the more challenging datasets, we use various instantiations of a convex ConvNet where successive layers have a constant number of channels and image size. This allows for the addition of identity residual connections to each convolution and lets us remove the passthrough connections altogether. Convexity is enforced by projecting relevant weights onto the nonnegative orthant after each epoch and similarly constraining BatchNorm \(\gamma\) parameters to be positive. We initialize positive weight matrices to be drawn uniformly from the interval \([0,\epsilon]\), where \(\epsilon=0.003\) for linear weights and \(\epsilon=0.005\) for convolutional weights. Jacobian regularization is also used to improve our certified radii (Hoffman et al., 2019). **Class accuracy balancing.** Since we consider _asymmetric_ certified robustness, care must be taken to ensure a fair comparison of class \(1\) certificates. Indeed, a constant classifier that always outputs class \(1\) would achieve perfect class \(1\) accuracy and infinite class \(1\) certified radii--yet it would not be a particularly interesting classifier as its accuracy on class \(2\) inputs would be poor. We therefore post-process the decision threshold of each classifier such that the clean class \(1\) and class \(2\) accuracies are equivalent, allowing for a direct comparison of the certification performance for class \(1\). ### Datasets We now introduce the various datasets considered in this work. MNIST 3-8 and Malimg are relatively simple classification problems where near-perfect classification accuracy is attainable; the Malimg dataset falls in this category despite containing relatively large images. Our more challenging settings consist of a Fashion-MNIST shirts dataset as well as CIFAR-10 cats-versus-dogs dataset. Data augmentation details are deferred to Appendix E.4. **MNIST 3-8.** For our MNIST binary classification problem, we choose the problem of distinguishing between \(3\) and \(8\)(LeCun, 1998). These were selected as \(3\) and \(8\) are generally more visually similar and challenging to distinguish than other digit pairs. Images are \(28\times 28\) pixels and greyscale. **Malimg.** Our malware classification experiments use greyscale, bytewise encodings of raw malware binaries Nataraj et al. (2011). 
Each image pixel corresponds to one byte of data, in the range of \(0\)-\(255\), and successive bytes are added horizontally from left to right on the image until wrapping at some predetermined width. We use the extracted malware images from the seminal dataset Nataraj et al. (2011), padding and cropping images to be \(512\times 512\). Note that licensing concerns generally prevent the distribution of "clean" executable binaries. As this work is focused on providing a general approach to robust classification, in the spirit of reproducibility we instead report classification results between different kinds of malware. Namely, we distinguish between malware from the most numerous "Allaple.A" class (\(2949\) samples) and an identically-sized random subset of all other \(24\) malware classes. To simulate a scenario where we must provide robustness against evasive malware, we provide certificates for the latter collection of classes. **Fashion-MNIST shirts.** The hardest classes to distinguish in the Fashion-MNIST dataset are T-shirts vs shirts, which we take as our two classes (Kayed et al., 2020; Xiao et al., 2017). Images are \(28\times 28\) pixels and greyscale. **CIFAR-10 cats-dogs.** We take as our two CIFAR-10 classes the cat and dog classes since they are relatively difficult to distinguish (Giuste & Vizcarra, 2020; Liu & Mukhopadhyay, 2018; Ho-Phuoc, 2018). Other classes (e.g., ships) are typically easier to classify since large background features (e.g., blue water) are strongly correlated with the target label. Samples are \(32\times 32\) RGB images. ### Discussion Experimental results for \(\ell_{1}\)-norm balls are reported in Figure 2, where our feature-convex classifier radii are similar to or better than those of all other baselines across all datasets. Due to space constraints, we defer the corresponding plots for \(\ell_{2}\)- and \(\ell_{\infty}\)-norm balls to Appendix F, where our certified radii are not dominant but still comparable to methods tailored specifically for a particular norm. We accomplish this while maintaining completely deterministic, closed-form certificates with orders-of-magnitude faster computation time than competitive baselines. For the MNIST 3-8 and Malimg datasets (Figures 1(a) and 1(b)), all methods achieve high clean test accuracy. Our \(\ell_{1}\)-radii scale exceptionally well with the dimensionality of the input, with two orders of magnitude improvement over smoothing baselines for the Malimg dataset. The Malimg certificates in particular have an interesting concrete interpretation. As each pixel corresponds to one byte in the original malware file, an \(\ell_{1}\)-certificate of radius \(r\) provides a robustness certificate for up to \(r\) bytes in the file. Namely, even if a malware designer were to arbitrarily change \(r\) malware bytes, they would be unable to fool our classifier into returning a false negative. This may not have an immediate practical impact as small semantic changes (e.g., reordering unrelated instructions) could induce large \(\ell_{p}\)-norm shifts. However, as randomized smoothing was extended from pixel-space to semantic transformations (Li et al., 2021), we expect that similar extensions can produce practical certifiably robust malware classifiers. While our method produces competitive robustness certificates for \(\ell_{2}\)- and \(\ell_{\infty}\)-norms (Appendix F), it offers the largest improvement for \(\ell_{1}\)-certificates in the high-dimensional image spaces considered. 
This is likely due to the characteristics of the subgradient dual norm factor in the denominator of Theorem 3.1. The dual of the \(\ell_{1}\)-norm is the \(\ell_{\infty}\)-norm, which selects the largest magnitude element in the gradient of the output logit with respect to the input pixels. As the input image scales, it is natural for the classifier to become less dependent on any one specific pixel, shrinking the denominator in Theorem 3.1. Conversely, when certifying for the \(\ell_{\infty}\)-norm, one must evaluate the \(\ell_{1}\)-norm of the gradient, which scales proportionally to the input size. Nevertheless, we find in Appendix F that our \(\ell_{2}\)- and \(\ell_{\infty}\)-radii are generally comparable to those of the baselines while maintaining speed and determinism. Our feature-convex neural network certificates are almost immediate, requiring just one forward pass and one backward pass through the network. This certification procedure requires fewer than \(10\) milliseconds per sample on our hardware and scales well with network size. This is substantially faster than the runtime for randomized smoothing, which scales from several seconds per CIFAR-10 image to minutes for an ImageNet image (Cohen et al., 2019). The only method that rivaled our \(\ell_{1}\)-norm certificates was \(\alpha,\beta\)-CROWN; however, such bound propagation frameworks suffer from exponential computational complexity in network size, and even for small CIFAR-10 ConvNets they typically take on the order of minutes to certify nontrivial radii. Unlike the randomized smoothing baselines, our method is completely deterministic in both prediction and certification. Randomized prediction poses a particular problem for randomized smoothing certificates: even for a perturbation of a "certified" magnitude, repeated evaluations at the perturbed point will eventually yield misclassification for any nontrivial classifier. While the splitting-based certificates of Levine and Feizi (2021) are deterministic, they only certify quantized (not continuous) \(\ell_{1}\)-perturbations, which scale poorly to \(\ell_{2}\)- and \(\ell_{\infty}\)-certificates (Appendix F). Furthermore, the certification runtime grows linearly in the smoothing noise \(\sigma\); evaluating the certified radii at the \(\sigma\) used for the Malimg experiment takes several minutes per sample. Ablation tests examining the impact of Jacobian regularization, the feature map \(\varphi\), and data augmentation are included in Appendix G. We illustrate the certification performance of our method across all combinations of MNIST classes in Appendix H. ## 5 Conclusion This work introduces the problem of asymmetric certified robustness, which we show naturally applies to a number of practical adversarial settings. We define feature-convex classifiers in this context and theoretically characterize their representation power through geometric, approximation-theoretic, and statistical lenses. Closed-form sensitive-class certified robust radii for the feature-convex architecture are provided for arbitrary \(\ell_{p}\)-norms. We find that our \(\ell_{1}\)-robustness certificates in particular match or outperform those of the current state-of-the-art methods, with our \(\ell_{2}\)- and \(\ell_{\infty}\)-radii also competitive with methods tailored for a particular norm. Unlike smoothing and bound propagation baselines, we accomplish this with a completely deterministic and near-immediate computation scheme. 
We also show theoretically that significant performance improvements should be realizable for natural image datasets such as CIFAR-10 cats-versus-dogs. Possible directions for future research include bridging the gap between the theoretical power of feature-convex models and their practical implementation, as well as exploring more sophisticated choices of the feature map \(\varphi\). Figure 2: Class \(1\) certified radii curves for the \(\ell_{1}\)-norm. Note the \(\log\)-scale on the Malimg plot.
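As a closing illustration, the certificate of Theorem 3.1 reduces to a single gradient evaluation. A hedged sketch, assuming PyTorch, a convex logit function `g` acting on feature space (for instance, the network sketched after Definition 2.2), and a valid Lipschitz bound `lip_phi` for the chosen norm (\(2\), \(\sqrt{2}\), and \(1\) for \(p=1,2,\infty\) with the \(\varphi\) used in the experiments):

```python
import torch


def certified_radius(g, feat, lip_phi, p=1.0):
    """Class-1 certified l_p radius from Theorem 3.1.

    g:       convex logit function on feature space
    feat:    phi(x) with requires_grad enabled; assumes g(feat) > 0
    lip_phi: Lipschitz constant of phi w.r.t. the chosen l_p norm
    """
    logit = g(feat)
    (grad,) = torch.autograd.grad(logit, feat)
    # Dual exponents: p = 1 <-> inf, p = 2 <-> 2, p = inf <-> 1.
    dual = {1.0: float("inf"), 2.0: 2.0, float("inf"): 1.0}[p]
    return (logit / (lip_phi * grad.norm(p=dual))).item()


# Hypothetical usage, with model_g the convex part g and phi the feature map:
# feat = phi(x).detach().requires_grad_(True)
# radius = certified_radius(model_g, feat, lip_phi=2.0, p=1.0)
```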
2307.05301
Signal-background separation and energy reconstruction of gamma rays using pattern spectra and convolutional neural networks for the Small-Sized Telescopes of the Cherenkov Telescope Array
Imaging Atmospheric Cherenkov Telescopes (IACTs) detect very-high-energy gamma rays from ground level by capturing the Cherenkov light of the induced particle showers. Convolutional neural networks (CNNs) can be trained on IACT camera images of such events to differentiate the signal from the background and to reconstruct the energy of the initial gamma ray. Pattern spectra provide a 2-dimensional histogram of the sizes and shapes of features comprising an image and they can be used as an input for a CNN to significantly reduce the computational power required to train it. In this work, we generate pattern spectra from simulated gamma-ray and proton images to train a CNN for signal-background separation and energy reconstruction for the Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). A comparison of our results with a CNN directly trained on CTA images shows that the pattern spectra-based analysis is about a factor of three less computationally expensive but not able to compete with the performance of a CTA image-based analysis. Thus, we conclude that the CTA images must contain additional information that is not represented by the pattern spectra.
J. Aschersleben, T. T. H. Arnesen, R. F. Peletier, M. Vecchi, C. Vlasakidis, M. H. F. Wilkinson
2023-07-11T14:45:52Z
http://arxiv.org/abs/2307.05301v2
Signal-background separation and energy reconstruction of gamma rays using pattern spectra and convolutional neural networks for the Small-Sized Telescopes of the Cherenkov Telescope Array ###### Abstract Imaging Atmospheric Cherenkov Telescopes (IACTs) detect very-high-energy gamma rays from ground level by capturing the Cherenkov light of the induced particle showers. Convolutional neural networks (CNNs) can be trained on IACT camera images of such events to differentiate the signal from the background and to reconstruct the energy of the initial gamma ray. Pattern spectra provide a 2-dimensional histogram of the sizes and shapes of features comprising an image, and they can be used as an input for a CNN to significantly reduce the computational power required to train it. In this work, we generate pattern spectra from simulated gamma-ray and proton images to train a CNN for signal-background separation and energy reconstruction for the Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). A comparison of our results with a CNN directly trained on CTA images shows that the pattern spectra-based analysis is about a factor of three less computationally expensive but not able to compete with the performance of the CTA image-based analysis. Thus, we conclude that the CTA images must contain additional information that is not represented by the pattern spectra. keywords: CTA, gamma rays, Imaging Atmospheric Cherenkov Telescopes, atmospheric shower reconstruction, machine learning + Footnote †: journal: NIM-A ## 1 Introduction When a gamma ray reaches the Earth's atmosphere, it induces a cascade of secondary particles known as an air shower. The secondary particles can reach velocities higher than the speed of light in air, inducing a flash of _Cherenkov light_[1]. The Cherenkov light can be captured by _Imaging Atmospheric Cherenkov Telescopes_ (IACTs) from the ground to reconstruct specific properties of the initial particle, such as its type, energy and direction (see [2; 3; 4] for an overview of ground-based gamma-ray astronomy). The _Cherenkov Telescope Array_ (CTA) [5] is the next-generation ground-based observatory for gamma-ray astronomy at very high energies, offering 5-10 times better flux sensitivity than current-generation gamma-ray telescopes [6], such as H.E.S.S. [7], MAGIC [8] and VERITAS [9]. It will cover a wide energy range from \(20\,\mathrm{GeV}\) to \(300\,\mathrm{TeV}\), benefiting from three different telescope types: _Large-Sized Telescopes_ (LSTs), _Medium-Sized Telescopes_ (MSTs) and _Small-Sized Telescopes_ (SSTs). The CTA Observatory will be distributed over two arrays, one in the northern hemisphere in La Palma (Spain) and one in the southern hemisphere near Paranal (Chile). CTA will outperform the energy and angular resolution of current instruments, providing an energy resolution of \(\sim 5\,\%\) around \(1\,\mathrm{TeV}\) and an angular resolution of \(1\,^{\prime}\) at its upper energy range. With its short-timescale capabilities and large field of view of \(4.5\,^{\circ}-8.5\,^{\circ}\), it will enable the observation of a wide range of astronomical sources, including transient, high-variability or extended gamma-ray sources. Several analysis methods for IACT data have been developed to classify the initial particle and reconstruct its energy and direction. _Hillas parameters_[10], proposed by A. M. Hillas in 1985, are one of the first reconstruction techniques. 
They describe features of the Cherenkov emission within the camera images and are widely used as input to machine learning algorithms like _Random Forest_[11] or _Boosted Decision Trees_[12; 13; 14] to perform full event reconstruction of gamma rays. Another approach is the _ImPACT_ algorithm [15], which performs event reconstruction using expected image templates generated from Monte Carlo simulations. Other methods such as _model analysis_[16] and _3D model analysis_[17], which are based on a semi-analytical shower model and a Gaussian photosphere shower model respectively, managed to be more sensitive to certain properties of the shower [18]. Recently, _convolutional neural networks_ (CNNs) [19; 20; 21] have been proposed and applied to IACT data [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. CNNs are machine learning algorithms that are specialised for image data and are currently one of the most successful tools for image classification and regression tasks [35]. They rely on _convolutional layers_ which consist of image filters that are able to extract relevant features within an image. Among many others, models such as _AlexNet_[36], _GoogLeNet_[37] and _ResNet_[38] established many new techniques, such as the _Rectified Linear Unit_ (ReLU) [39] activation function and deeper architectures, setting milestones for many subsequent architectures. _ResNet_ won the _ImageNet Large Scale Visual Recognition Challenge_ (ILSVRC) in 2015 by introducing _shortcut connections_ into the architecture and achieving a _top-5 classification error_ of only 3.6 % [40]. CNNs that contain these shortcut connections often achieve higher performances and are referred to as _residual neural networks_ (ResNets). The first event classifications with a CNN trained on IACT images have been presented in [22] and [23], which have demonstrated the signal-background separation capabilities of CNNs. Later work has demonstrated the ability of CNNs to reconstruct the energy and direction of gamma rays [24; 25; 26; 27], to run in stereo telescope mode [28; 29; 30] and to be applied to real data [31; 32; 33]. However, one of the main drawbacks of this method is that the training of CNNs is computationally very expensive [41]. It typically requires access to computing clusters with powerful graphics processing units (GPUs) and large amounts of random-access memory (RAM). The larger the dimension of the input image, the larger the computational power and time needed for the CNN training. A significant reduction of the dimension of the input image without any performance losses would therefore result in substantial savings in hardware and human resources, increase the efficiency of related scientific works and lower the environmental impact of CNNs [42]. An approach to this problem is _pattern spectra_[43], which are commonly used tools for image classification [44; 45; 46] and can significantly reduce the computational power needed to train CNNs. They provide a 2-dimensional distribution of sizes and shapes of features within an image and can be constructed using a technique known as granulometries [47]. The features within the image are extracted with connected operators [48], which merge regions within an image with the same grey scale value. Compared to other feature extraction techniques, this approach has the advantage of not introducing any distortions into the image. 
In this work, we generate pattern spectra from simulated CTA images to apply them to a ResNet for signal-background separation and energy reconstruction of gamma rays. The application of a ResNet to pattern spectra takes advantage of their 2D nature by selecting relevant combinations of features within the CTA images. Our pattern spectra algorithm is based on the work presented in [44], which provides two main advantages compared to other existing pattern spectra algorithms: (i) the computing time for creating the pattern spectra is independent of their dimensions and (ii) it is significantly less sensitive to noise. These properties merit the investigation of pattern spectra-based analysis for IACTs. Direction reconstruction of gamma rays is not considered here since pattern spectra are rotation invariant, meaning that the same CTA image rotated by an arbitrary angle would result in the same pattern spectrum. By generating pattern spectra from simulated CTA images, we aim to obtain a competitive algorithm that is significantly faster and less computationally intensive while keeping comparable performance to a CNN trained on CTA images in terms of signal-background separation and energy reconstruction of gamma rays. The structure of this article is as follows: In Section 2, the CTA dataset used in this analysis is described. Section 3 is devoted to our analysis methods including the pattern spectra algorithm, the ResNet architecture and the performance evaluation methods for our algorithms. The results are shown in Section 4 and discussed in detail in Section 5. Finally, we state our conclusions in Section 6. The source code of this project is publicly available at [49]. ## 2 Dataset The dataset consists of simulated gamma-ray and proton events detected by the southern CTA array (Prod5_DL1 (ctapipe v0.10.5 [51]), zenith angle of 20 \({}^{\circ}\), North pointing [52; 53]). Due to the hexagonal pixels integrated in the LST and MST cameras, which cannot be processed by the current version of the pattern spectra algorithm, only the 37 SSTs with rectangular pixels are considered in this analysis. The SST images containing the charge information, i.e. the integrated photodetector pulse, will be referred to as _CTA images_ in the following. CTA images generated by gamma rays with an energy between 500 GeV and 100 TeV and protons with an energy between 1.5 TeV and 100 TeV have been considered for this study to match the operating energy range of the SSTs. For the energy reconstruction, \(\sim 3\cdot 10^{6}\) gamma-ray events generated with a \(0.4\,^{\circ}\) offset from the telescope pointing position, referred to as _pointlike gamma rays_ in the following, are used. For the signal-background separation, \(\sim 2\cdot 10^{6}\) _diffuse gamma rays_ and \(\sim 2\cdot 10^{6}\) _diffuse protons_ are used, where the term _diffuse_ describes events generated in a view cone of \(10\,^{\circ}\). The pointlike and diffuse events are considered in the analysis to represent real observation conditions. When observing a source, background events reach the telescopes not only from the direction of the source but potentially from a much larger view cone. However, using pointlike gamma-ray and diffuse proton events for signal-background separation would introduce a bias in the learning process of the CNN. Therefore, we consider diffuse events for the signal-background separation and pointlike events for the energy reconstruction task. 
In particular for high energies, the dataset often includes single events that were captured by multiple SSTs. This results in several CTA images for a single event. Since the construction and training of a CNN that is able to handle a varying number of input images is very challenging, we constructed a single CTA image for each event as a first step towards the implementation of pattern spectra for the analysis of CTA images. In order to obtain a single CTA image per event, all CTA images of the same event are combined into a single image by adding up the individual pixel values of each image. We are aware that this is reducing the performance of the array, but we adopt this strategy to simplify our proof-of-concept work. However, we do not promote the idea of image stacking for CNN analyses with CTA data when trying to maximise the performance of the CNN. ## 3 Analysis ### Pattern spectra The algorithm used to extract pattern spectra from the CTA images is based on the work presented in [44] and will be briefly summarised in the following. Let \(f\) be a grey-scale image with grey levels \(h\). Consider an image domain \(E\subseteq\mathbb{R}^{2}\) and let the set \(X\subseteq E\) denote a binary image with domain \(E\). The _grain_ of a binary image \(X\) is defined as a connected component of \(X\). The _peak components_ \(P_{h}^{k}(f)\) of an image \(f\) are defined as the \(k\)th grain of the threshold set \(T_{h}(f)\), which is defined as \[T_{h}(f)=\{x\in E|f(x)\geq h\}. \tag{1}\] For each image \(f\), a _Max-tree_ is computed according to the algorithm described in [55]. The Max-tree is composed of _nodes_ \(N_{h}^{k}(f)\), which consist of the subset of the peak components \(P_{h}^{k}(f)\). Figure 1 (a) shows an example of a 2D grey-scale image, (b) the corresponding peak components \(P_{h}^{k}(f)\) and (c) its Max-tree with nodes \(N_{h}^{k}(f)\). Figure 1: Visual representation of the pattern spectra algorithm (adapted from [44; 50]) The pattern spectra are based on the size and shape attributes of the peak components \(P_{h}^{k}(f)\). The size attribute corresponds to the area \(A(P_{h}^{k}(f))\), which is computed as the number of pixels belonging to the detected feature. The shape attribute corresponds to \(I/A^{2}\) with the moment of inertia \(I\) describing the sum of squared differences to the centre of gravity of the feature. The size and shape attributes are binned into \(N=20\) size classes \(r\) and shape classes \(s\), which results in a good compromise between the performance of the pattern spectra and the computational power needed to train the ResNet. The 2D pattern spectrum is computed from the Max-tree as follows [44]: 1. Construct a 2D array \(\Phi[r,s]\) of size \(N\times N=20\times 20\). 2. Set all elements of \(\Phi[r,s]\) to zero. 3. For each node \(N_{h}^{k}(f)\) of the Max-tree, compute the size class \(r\) from the area \(A(P_{h}^{k}(f))\), the shape class \(s\) from \(I(P_{h}^{k}(f))/A(P_{h}^{k}(f))^{2}\) and the grey-level difference \(\delta_{h}\) between the current node and its parent. 4. Add the product of \(\delta_{h}\) and \(A(P_{h}^{k}(f))\) to \(\Phi[r,s]\).
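For concreteness, the four steps above can be approximated with a brute-force NumPy/SciPy sketch that loops over threshold sets directly instead of building a Max-tree (the efficient algorithm of [55] is used in this work); the attribute binning limits `a_max` and `s_max` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def pattern_spectrum(image, n_bins=20, a_max=1000.0, s_max=10.0):
    """Simplified pattern spectrum Phi[r, s]; a sketch for small
    integer-valued images, not the Max-tree implementation of [55]."""
    phi = np.zeros((n_bins, n_bins))
    levels = np.unique(image)
    for h_prev, h in zip(levels[:-1], levels[1:]):
        delta_h = h - h_prev  # grey-level difference between consecutive levels
        # Grains of the threshold set T_h(f) = {x | f(x) >= h}, eq. (1)
        labels, n_grains = ndimage.label(image >= h)
        for k in range(1, n_grains + 1):
            ys, xs = np.nonzero(labels == k)
            area = ys.size  # size attribute A(P_h^k(f)): number of pixels
            # Moment of inertia I: sum of squared distances to the centroid
            inertia = ((ys - ys.mean()) ** 2 + (xs - xs.mean()) ** 2).sum()
            shape = inertia / area ** 2  # shape attribute I / A^2
            # Map attributes to size class r and shape class s (assumed binning)
            r = min(int(np.log1p(area) / np.log1p(a_max) * n_bins), n_bins - 1)
            s = min(int(shape / s_max * n_bins), n_bins - 1)
            phi[r, s] += delta_h * area  # add delta_h * A to Phi[r, s]
    return phi
```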
An example of a pattern spectrum extracted from a CTA image is shown in Figure 2. The image in the top-left shows a CTA image of a \(1.9\,\mathrm{TeV}\) gamma-ray event that was captured by eight SSTs. The bright features in the centre of the image correspond to the Cherenkov emission induced by the particle shower. Due to the different locations of the SSTs, the Cherenkov light is captured with different intensities and at different positions on the SST cameras. The pattern spectrum generated from the CTA image is shown in the bottom-left. Each pattern spectrum pixel represents a set of detected features. An example of the detected features is shown in the middle of Figure 2. The image on top shows a set of detected features within the CTA image highlighted in red. The image at the bottom shows the pattern spectrum with the red pixel representing these features. This specific example shows features with a small \(A\) and small \(I/A^{2}\) referring to features with a small size and a circular-like shape. They correspond to individual pixels in the CTA image and represent mostly noise. Another example is shown in the top-right and bottom-right of Figure 2. Compared to the previous example, the red-marked pattern spectrum pixels correspond to larger \(A\) and \(I/A^{2}\) values. Thus, the highlighted objects (red/orange) in the CTA image correspond to features with a larger size and more elliptical-like shape. The detected features in this example are of particular interest since they represent the Cherenkov photons induced by the particle shower, which contain information about the type and energy of the initial particle. Figure 2: Top-left: CTA image of a \(1.9\,\mathrm{TeV}\) gamma-ray event captured by eight SSTs. Bottom-left: pattern spectrum extracted from the CTA image. Middle-top: CTA image with a set of detected features highlighted in red. Middle-bottom: pattern spectrum with pixels corresponding to the detected features (small \(A\) and \(I/A^{2}\)) highlighted in red. Right-top: CTA image with a different set of detected (sub-)features highlighted in (red) orange. Right-bottom: pattern spectrum with pixels corresponding to the detected features (intermediate \(A\) and \(I/A^{2}\)) highlighted in red. ### Residual neural network architecture For the signal-background separation and energy reconstruction of gamma-ray events, two individual but almost identical ResNet architectures are constructed and trained with either CTA images or pattern spectra. The architectures of our ResNets are identical to the ResNets presented in [54] and are based on the work presented in [30; 38; 56]. The ResNet is illustrated in Figure 3. Due to the rather shallow architecture compared to the ResNet presented in [38], we refer to our architectures as _thin residual neural networks_ (TRNs) in the following. They are constructed using _Tensorflow 2.3.1_[57] and _Keras 2.4.3_[58] and consist of 13 convolutional layers with _Rectified Linear Unit_ (ReLU) [39] activation function, a _global average pooling layer_ and two _fully connected (dense) layers_ with 64 and 32 neurons respectively. The output layer consists of a single neuron for the energy reconstruction and two neurons with _softmax_[59] activation function for the signal-background separation. _Shortcut connections_[38] at every third convolutional layer were implemented in order to improve the stability and performance of the algorithm. The solid arrows in Figure 3 represent linear shortcut connections, in which the input of a _building block_ \(x\) is added to the output of the last layer of the building block \(F(x)\). If the input and output of a building block have different dimensions, the input \(x\) is put into another convolutional layer with the same number of filters as the last layer of the building block. The output of this residual operation \(G(x)\) is added to the output of the last layer of the building block \(F(x)\). A filter size of \(1\times 1\) is used for all shortcut connections with a convolutional operation. In total, the two TRNs have about 150000 trainable parameters. Figure 3: Top: Architecture of the thin residual neural network (TRN) [54]. For each convolutional layer, the filter size and number of filters are specified. Bottom: Building block with a linear shortcut connection (left) and non-linear shortcut connection (right) (adapted from [38]).
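As an illustration, one building block of the TRN described above can be sketched in Keras as follows; the kernel size of 3 is an assumption, since the exact filter configurations are specified only in Figure 3.

```python
from tensorflow.keras import layers

def building_block(x, filters, kernel_size=3):
    """Three convolutional layers with a shortcut connection, as in the
    TRN description above. The 1x1 convolution implements the non-linear
    shortcut G(x) when the dimensions of input and output differ."""
    y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(y)
    shortcut = x
    if x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # G(x)
    return layers.Add()([y, shortcut])  # F(x) + x, or F(x) + G(x)
```

Stacking such blocks and appending the global average pooling and the two dense layers reproduces the overall TRN structure sketched above.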
### Experiments The TRNs described in the previous section are trained and evaluated 10 times each on the datasets for both signal-background separation and energy reconstruction to perform a statistical analysis of the training process. Similar to the work presented in [30], a multiplicity cut of four or more triggered telescopes is applied for both the gamma-ray and proton events. The dataset is split into 90 % training data, from which 10 % is used as validation data, and 10 % test data. The weights of the TRN are initialized using the Glorot Uniform Initializer [60] and the training, validation and test data are randomized for each run. The _adaptive moment_ (ADAM) optimizer [61] with a learning rate of 0.001 and a batch size of 32 is used for the TRN training. The training is stopped if there is no improvement on the validation dataset for over 20 epochs, and the model with the lowest validation loss is saved. The _categorical cross entropy_ and _mean squared error_ [62] are applied as loss functions for the signal-background separation and energy reconstruction, respectively. The results shown in Section 4 are obtained by evaluating the performance of each TRN on the test data. #### 3.3.1 Signal-background separation Each event is labelled by its _gammaness_ \(\Gamma\), where \(\Gamma=1\) corresponds to a gamma ray (photon) and \(\Gamma=0\) corresponds to a proton. The output of the TRN is a \(\Gamma\)-value between 0 and 1, which describes a pseudo-probability of the event being a photon according to the TRN. For a fixed \(\Gamma\)-threshold \(\alpha_{\Gamma}\), the _photon efficiency_ \(\eta_{\gamma}\) is defined as \(\eta_{\gamma}=TP/P\), where \(TP\) is the number of _true positives_, i.e. photon events with \(\Gamma\geq\alpha_{\Gamma}\) (correctly classified photons), and \(P\) is the total number of positives (photons) that pass the selection criteria described in Section 2. Similarly, the _proton efficiency_ \(\eta_{p}\) is defined as \(\eta_{p}=FP/N\), where \(FP\) is the number of _false positives_, i.e. proton events with \(\Gamma\geq\alpha_{\Gamma}\) (protons misclassified as photons), and \(N\) is the total number of negatives (protons) that pass the selection criteria. A good classifier results in a high photon efficiency \(\eta_{\gamma}\) and a low proton efficiency \(\eta_{p}\) for a given \(\Gamma\)-threshold.
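As a sketch, both efficiencies can be computed from the TRN outputs as follows, assuming an encoding of 1 for simulated photons and 0 for protons:

```python
import numpy as np

def efficiencies(gammaness, labels, thresholds):
    """Photon efficiency TP/P and proton efficiency FP/N per Gamma-threshold."""
    photons = gammaness[labels == 1]
    protons = gammaness[labels == 0]
    eta_gamma = np.array([(photons >= a).mean() for a in thresholds])  # TP / P
    eta_p = np.array([(protons >= a).mean() for a in thresholds])      # FP / N
    return eta_gamma, eta_p
```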
In order to evaluate the performance of our TRNs, the efficiencies as a function of the \(\Gamma\)-threshold and the _effective area_ \(A_{\rm eff}\) as a function of the _true energy_ \(E_{\rm true}\) are calculated. The effective area is determined by \(A_{\rm eff}=\tilde{\eta}_{\gamma}\cdot A_{\rm geom}\), where \(A_{\rm geom}\) is the geometrical area of the instrument, i.e. \(A_{\rm geom}=\pi r_{\rm max}^{2}\) with \(r_{\rm max}\) being the maximum simulated impact radius, and \(\tilde{\eta}_{\gamma}=TP/\tilde{P}\) with \(\tilde{P}\) being the total number of simulated photons, including the events that did not pass the selection criteria in Section 2. Similarly, we define \(\tilde{\eta}_{p}=FP/\tilde{N}\) with \(\tilde{N}\) being the total number of simulated protons. The energy range is split into seven logarithmic bins, where each event is assigned to an energy bin based on its true energy \(E_{\rm true}\). The effective area is then calculated for each energy bin by increasing the \(\Gamma\)-threshold until \(\tilde{\eta}_{p}=10^{-3}\) is reached and extracting the corresponding \(\tilde{\eta}_{\gamma}\). The value \(\tilde{\eta}_{p}=10^{-3}\) is motivated by the photon flux of the Crab Nebula being about three orders of magnitude lower than the isotropic flux of cosmic rays (CRs) within an angle of 1 deg around the direction of the source: \(\Phi_{\gamma}^{\rm Crab}\approx 10^{-3}\cdot\Phi_{\rm CR}\)[2]. Figure 4: Example of the gammaness distributions obtained from a single TRN trained with CTA images (left) and pattern spectra (right). Figure 5: Mean photon efficiency \(\eta_{\gamma}\) and proton efficiency \(\eta_{p}\) as a function of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\) obtained from 10 independent TRNs. Furthermore, the _receiver operating characteristic_ (ROC) curve [63] is determined. The ROC curve describes the photon efficiency \(\eta_{\gamma}\) versus the proton efficiency \(\eta_{p}\). The _area under the ROC curve_ (AUC) is calculated and used as a measure of the performance of each TRN. For part of our calculations, we make use of pyirf v0.7.0 [64], which is a python library for the generation of Instrument Response Functions (IRFs) and sensitivities for CTA. From the 10 TRNs, the mean efficiencies, effective area, ROC curve and AUC value are calculated for both the CTA images and pattern spectra-based analyses. #### 3.3.2 Energy reconstruction The gamma-ray events are labelled by their true energy \(E_{\rm true}\), which the TRN learns to predict based on the training input. The performance of the TRN on the test data is evaluated by comparing the reconstructed energy \(E_{\rm rec}\) of the TRN with the true energy \(E_{\rm true}\) of the initial gamma ray. Therefore, the _relative energy error_ \(\Delta E/E_{\rm true}=(E_{\rm rec}-E_{\rm true})/E_{\rm true}\) is calculated for each event. The whole energy range between \(500\,\rm GeV\) and \(100\,\rm TeV\) is split into seven logarithmic bins and each event is assigned to an energy bin based on its true energy \(E_{\rm true}\). For each of these energy bins, the distribution of the relative energy error \(\Delta E/E_{\rm true}\) is determined and its median calculated. The median of \(\Delta E/E_{\rm true}\) is referred to as the _energy bias_ in the following. Small (large) energy biases indicate high (low) accuracies. The distributions of the relative energy error \(\Delta E/E_{\rm true}\) are then bias-corrected by subtracting the median, i.e. \((\Delta E/E_{\rm true})_{\rm corr}=\Delta E/E_{\rm true}-{\rm median}(\Delta E/E_{\rm true})\). The _energy resolution_ is defined as the 68th percentile of the distribution \(|(\Delta E/E_{\rm true})_{\rm corr}|\). From the 10 TRNs, the mean energy bias and energy resolution with their standard deviation are calculated for each energy bin for both the CTA images and pattern spectra-based analyses. 
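The energy bias and resolution computation described above amounts to the following sketch; the bin edges are an assumption, and every bin is assumed to be populated:

```python
import numpy as np

def bias_and_resolution(e_true, e_rec, n_bins=7):
    """Median energy bias and 68th-percentile resolution per energy bin."""
    rel_err = (e_rec - e_true) / e_true
    edges = np.logspace(np.log10(e_true.min()), np.log10(e_true.max()), n_bins + 1)
    bias, resolution = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (e_true >= lo) & (e_true < hi)
        med = np.median(rel_err[sel])                    # energy bias
        corrected = np.abs(rel_err[sel] - med)           # bias-corrected |dE/E|
        bias.append(med)
        resolution.append(np.percentile(corrected, 68))  # energy resolution
    return np.array(bias), np.array(resolution)
```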
## 4 Results ### Signal-background separation Two examples of the gammaness distributions obtained from a single TRN trained with the CTA images and pattern spectra are shown in Figure 4. Figure 4 (left) shows a distinct separation between photon and proton events for the TRN trained with CTA images. The majority of photon events are classified with \(\Gamma=1\) and the majority of proton events with \(\Gamma=0\). The number of proton (photon) events continuously decreases for larger (smaller) \(\Gamma\)-values, which indicates a good separation capability of the TRN. Figure 4 (right) shows the performance of the TRN trained with the pattern spectra, which results in a lower signal-background separation capability compared to the TRN trained with CTA images. Once again, the majority of photon events are classified with \(\Gamma=1\) and the majority of proton events with \(\Gamma=0\). However, the distributions decrease less rapidly compared to the CTA images-based analysis. The mean photon efficiency \(\eta_{\gamma}\) and proton efficiency \(\eta_{p}\) as a function of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\) are shown in Figure 5. The shaded regions in this figure and the upcoming ones depict the standard deviation across the 10 TRNs. Both the photon efficiency and proton efficiency decrease steadily for an increasing \(\alpha_{\Gamma}\)-value. Up to \(\alpha_{\Gamma}\sim 0.1\) the pattern spectra-based analysis results in a very similar photon efficiency but in a much higher proton efficiency in comparison to the CTA images-based analysis. The proton efficiency of the pattern spectra approaches a similar value compared to the CTA images at \(\alpha_{\Gamma}\sim 0.9\) at which, however, the CTA images outperform the pattern spectra in the photon efficiency. Therefore, the CTA images result overall in better photon and proton efficiencies independent of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\). Figure 6 (left) shows the mean effective area \(A_{\rm{eff}}\) as a function of the true energy \(E_{\rm{true}}\). The CTA images result in a higher effective area than the pattern spectra for all energies. The difference between the two analyses increases with increasing energy. The CTA images result in a maximum effective area of \(\sim 12.8\times 10^{5}\,\rm{m}^{2}\) at \(\sim 80\,\rm{TeV}\), whereas the pattern spectra result in a maximum effective area of \(\sim 7.0\times 10^{5}\,\rm{m}^{2}\) at \(\sim 80\,\rm{TeV}\), which corresponds to a factor of 1.8 between the two analyses. The mean ROC curve and corresponding AUC value are shown in Figure 6 (right). As expected from the gammaness distributions discussed above, the ROC curve obtained from the CTA images is significantly steeper than the ROC curve obtained from the pattern spectra. The mean AUC value of 0.987 for the CTA images is therefore significantly larger (by a factor of 1.06) than the value of 0.929 obtained from the pattern spectra. Therefore, the TRN trained with CTA images shows a higher signal-background separation capability than the pattern spectra-based analysis. Figure 6: Left: mean effective area \(A_{\rm eff}\) as a function of the true energy \(E_{\rm true}\) obtained from 10 independent TRNs. Right: mean ROC curve and mean AUC value obtained from 10 independent TRNs. The solid black line corresponds to a ROC curve expected from a random classifier. The performances stated here do not represent the expected performance by the CTA Observatory at the end of its construction phase. 
### Energy reconstruction Figure 7 shows two examples of the energy migration matrices, i.e. the 2D histogram of \(E_{\rm{rec}}\) against \(E_{\rm{true}}\), obtained from a single TRN trained with the CTA images and pattern spectra. Most of the events are distributed around the \(E_{\rm{rec}}=E_{\rm{true}}\) line for both the CTA images and pattern spectra-based analysis. However, the distribution obtained from the pattern spectra is more spread compared to the CTA images-based analysis. Figure 7: Example of the energy migration matrix obtained from a single TRN trained with CTA images (left) and pattern spectra (right). The mean energy accuracy obtained from 10 independent TRNs is shown in Figure 8 (left). The energy biases obtained from the CTA images-based analysis are closely distributed around 0 with the largest energy bias of \(\sim 5\,\%\) at the lowest energy bin. The energy biases obtained from the pattern spectra-based analysis reach up to \(\sim 20\,\%\) with the largest energy biases at the lowest and highest energy bin. The absolute value of the energy bias obtained from the pattern spectra-based analysis is larger than the values obtained from the CTA images for all energies. The mean energy resolution obtained from 10 independent TRNs is shown in Figure 8 (right). The CTA images-based analysis ranges from 0.08 to 0.12 with a minimum at \(\sim 7.5\,\mathrm{TeV}\). While we simplified our analysis by stacking CTA images for each event, the energy resolution still meets the CTA requirements [65] for all energy bins, except for the lowest energy bin. The pattern spectra result in an energy resolution between 0.22 and 0.25 with a minimum at the highest energy bin and do not meet the CTA requirements. Thus, the CTA images-based analysis outperforms the pattern spectra for all energies with a maximum factor of 2.9 at \(\sim 7.5\,\mathrm{TeV}\) between the two curves. Figure 8: Mean energy accuracy (left) and resolution (right) obtained from 10 independent TRNs. The dashed grey line represents the CTA energy resolution requirement for the southern CTA array [65]. The performances stated here do not represent the expected performance by the CTA Observatory at the end of its construction phase. ## 5 Discussion A comparison of the computational performance of the analyses is shown in Figure 9. The TRN training with pattern spectra is about a factor of 2.5 faster and requires a factor of 2.5 less RAM compared to the TRN training with CTA images. The pattern spectra are capable of detecting and classifying relevant features in the CTA images, which is illustrated by the gammaness distributions shown in Figure 4 (right) and the energy migration matrix shown in Figure 7 (right). However, the pattern spectra-based analysis is outperformed by the CTA images with respect to their signal-background separation and energy reconstruction capabilities. For a given \(\Gamma\)-threshold \(\alpha_{\Gamma}\), the pattern spectra result in poorer photon and proton efficiencies compared to the CTA images (see Figure 5), which is a main drawback of the analysis since both efficiencies are important quantities for the analysis of real gamma-ray data. Moreover, we infer from the effective area versus energy plot shown in Figure 6 (left) that the signal-background separation capabilities of the pattern spectra-based analysis are below the capabilities of the CTA images-based analysis independent of the energy of the initial particle. 
The AUC value obtained from the CTA images is a factor of 1.06 larger than the pattern spectra AUC value and illustrates once again the overall lower signal-background separation capabilities of the pattern spectra-based analysis. The CTA images result in a better energy resolution and a lower energy bias for all energies compared to the pattern spectra. Although our choice of attributes, i.e. the size and shape attributes, is well-motivated, these two attributes do not seem to be sufficient to fully describe all relevant features within the CTA images. Potentially, the pattern spectra might not be able to detect, e.g., the electromagnetic substructure in proton showers. Other feature attributes, e.g. the _perimeter_, _sum of grey levels_ and _compactness_ (_perimeter_/\(A^{2}\)), were tested for both signal-background separation and energy reconstruction but did not result in a significantly better performance. Furthermore, we applied pattern spectra to other algorithms including classification and regression trees (CART) [66], Learning Vector Quantization (LVQ) and Generalized Matrix Learning Vector Quantization (GMLVQ) [67]. None of these algorithms achieved a better performance than the TRN. We, therefore, conclude that the TRN relies on features within the CTA images that are not detected by the pattern spectra algorithm. The performances stated in this work do not represent the expected performance by the CTA Observatory at the end of its construction phase. ## 6 Conclusions For the first time, signal-background separation and energy reconstruction of gamma rays were performed under the application of pattern spectra. We have shown that the pattern spectra algorithm has the capability to detect and classify relevant features in IACT images. The detected features are capable of differentiating between gamma-ray and proton events and of reconstructing the energy of gamma-ray events. The training of the TRN with pattern spectra requires a factor of 2.5 less RAM and is about a factor of 2.5 faster than the TRN trained with CTA images, which agrees with our expectation due to the smaller size of the pattern spectra as compared to CTA images. Figure 9: Mean time (left) and RAM (right) required to train the TRN for signal-background separation and energy reconstruction obtained from 10 independent TRNs for each analysis. The training was performed on an _Nvidia A100 GPU_. The reduction in computational power was one of the main motivations to test the performance of pattern spectra on IACT data. However, the pattern spectra-based analysis is not competitive with the CTA images-based analysis in signal-background separation and energy reconstruction. The AUC value, which is a measure of the signal-background separation capability of an algorithm, obtained from the CTA images is a factor of 1.06 larger than the value obtained from the pattern spectra. The CTA images result in better energy accuracy and energy resolution for all energies with a maximum factor of 2.9 at \(\sim 7.5\,\mathrm{TeV}\) in energy resolution compared to the pattern spectra. We, therefore, conclude that the relevant features within the CTA images are not sufficiently detected or described by our choice of size and shape attributes. Other sets of attributes were tested but resulted in no major improvements. Thus, the TRN trained on CTA images must rely on additional features not captured by the pattern spectra. In other applications, especially when the input images are larger or vary in size, the results may be different. 
## Acknowledgements This work was conducted in the context of the CTA Consortium and CTA Observatory. We gratefully acknowledge financial support from the agencies and organizations listed at [http://www.cta-observatory.org/consortium_acknowledgements](http://www.cta-observatory.org/consortium_acknowledgements). We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high-performance computing cluster.
2305.04100
Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
A legal document is usually long and dense requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A. We experiment with graph based approaches like Graph Convolutional Networks and Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents.
Anshika Gupta, Shaz Furniturewala, Vijay Kumari, Yashvardhan Sharma
2023-05-06T17:04:51Z
http://arxiv.org/abs/2305.04100v1
Steno AI at SemEval-2023 Task 6: Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks ###### Abstract A legal document is usually long and dense, requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A (Modi et al., 2023). We experiment with graph-based approaches like Graph Convolutional Networks and the Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents. ## 1 Introduction Rhetorical Role Labelling for Legal Documents refers to the task of classifying sentences from court judgements into various categories depending on their semantic function in the document. This task is important as it not only has direct applications in the legal industry but also has the ability to aid several other tasks on legal documents such as summarization and legal search. This task is still in its early stages, with huge scope for improvement over the current state-of-the-art. To facilitate automatic interpretation of legal documents by dividing them into topic-coherent components, a rhetorical role corpus was created for Task 6, sub-task A of The International Workshop on Semantic Evaluation (Modi et al., 2023). Several applications of legal AI, including judgment summarizing, judgment outcome prediction, precedent search, etc., depend on this classification. ## 2 Related Works with Comparison The predominant technique used in Rhetorical Role Labeling over large datasets is based on the use of transformer-based models like LEGAL-BERT (Chalkidis et al., 2020) and ERNIE 2.0 (Sun et al., 2020), augmented by various heuristics or neural network models. The accuracy of these approaches has remained low over the years. The results are summarized in Table 1. \begin{table} \begin{tabular}{l c} \hline \hline **Model** & **F1 score** \\ \hline LEGAL-BERT & 0.557 \\ LEGAL-BERT + Neural Net & 0.517 \\ ERNIE 2.0 & 0.505 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of related works on the task of rhetorical role labelling on legal text (Parikh et al., 2022). The dataset (Parikh et al., 2022) used to implement the above approaches is relatively small, consisting only of a few hundred annotated documents and 7 sentence classes. ## 3 Dataset The dataset (Kalamkar et al., 2022) is made up of publicly available Indian Supreme Court Judgements. It consists of 244 train documents, 30 validation documents and 50 test documents, making a total of 36023 sentences. For every document, each sentence has been categorized into one of 13 semantic categories as follows: 1. **PREAMBLE**: The initial sentences of a judgement mentioning the relevant parties 2. **FAC**: Sentences that describe the events that led to the filing of the case 3. **RLC**: Judgments given by the lower courts based on which the present appeal was made to the present court 4. **ISSUE**: Key points mentioned by the court upon which the verdict needs to be delivered 5. **ARG_PETITIONER**: Arguments made by the petitioner 6. **ARG_RESPONDENT**: Arguments made by the respondent 7. **ANALYSIS**: Court discussion of the facts, and evidence of the case 8. **STA**: Relevant statute cited 9. 
**PRE_RELIED**: Sentences where the precedent discussed is relied upon 10. **PRE_NOT_RELIED**: Sentences where the precedent discussed is not relied upon 11. **Ratio**: Sentences that denote the rationale/reasoning given by the Court for the final judgement 12. **RPC**: Sentences that denote the final decision given by the Court for the case 13. **None**: A sentence not belonging to any of the 12 categories ## 4 Proposed Techniques and Algorithms We try several different approaches for the task at hand. All our models use LEGAL-BERT as their base, and use various methods for further processing and refining of results. The LEGAL-BERT family of models is a modified pretrained model based on the architecture of BERT (Devlin et al., 2019). The variant used in this paper is LEGAL-BERT-BASE, a model with 12 layers, 768 hidden units, and 12 attention heads. It has a total of 110M parameters and is pretrained for 40 epochs on a corpus of 12 GB worth of legal texts. This model was fine-tuned on the task dataset for 2 epochs with a learning rate of 1e-5 using the Adam optimizer and cross-entropy loss. ### Direct Classification of CLS tokens First, we used the default classifier of LEGAL-BERT to find the first set of predictions, to establish a baseline for our further experiments. Our next step used the CLS tokens extracted from the final hidden layer of this trained model. Similar to the methodology of Gao et al. (2020) and Furniturewala et al. (2021), we utilised the CLS tokens from LEGAL-BERT for further classification models. This CLS token is a 768-dimensional semantic feature that represents BERT's understanding of the text input. It is a fixed embedding present as the first token in BERT's output to the classifier and contains all the useful extracted information present in the input text. We tried directly applying various multi-layer neural networks to the extracted CLS tokens. These two models served as a baseline to assess the efficacy of our methods. Figure 1: Extracting CLS tokens (Furniturewala et al., 2021) ### Graph-Based Approaches We implemented classification systems based on graph architectures. We modeled the data into a graph using cosine similarity on the CLS tokens generated by LEGAL-BERT. An edge was created between two sentences if and only if their CLS tokens had cosine similarity greater than 0.5, with the cosine similarity acting as edge weight. The threshold was included to minimize the presence of noise-heavy edges in the graph. \[\cos(\mathbf{x},\mathbf{y})=\frac{\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{y}_{i}}{ \sqrt{\sum_{i=1}^{n}{(\mathbf{x}_{i})^{2}}}\sqrt{\sum_{i=1}^{n}{(\mathbf{y}_ {i})^{2}}}} \tag{1}\] The cosine similarity between two nodes, X and Y, is defined in equation (1), where x and y are the CLS tokens for nodes X and Y respectively, and n is the length of the CLS token, i.e. 768 in this case. The function for the final adjacency matrix is defined in equation (2). \[A_{XY}=\begin{cases}\cos(\mathbf{x},\mathbf{y})&if\ \ \cos(\mathbf{x},\mathbf{y})>0.5\\ 0&otherwise\end{cases} \tag{2}\] On this graph, we performed the label diffusion algorithm (Zhou et al., 2003) to establish a graph-based baseline for our system. Random walk label diffusion assigns labels to an unlabeled node using the average of its neighbours, weighted by their distance from the node. \[F^{t+1}=\alpha\cdot P\cdot F^{t}+(1-\alpha)\cdot Y \tag{3}\] \[P=D^{-1/2}\cdot A\cdot D^{-1/2} \tag{4}\] \[F^{*}=(1-\alpha)\cdot(I-\alpha P)^{-1}\cdot Y \tag{5}\] To implement it, we combined the train and validation label array, one-hot encoded it and masked the validation labels. We then used equation (5) to generate predictions for each sentence. Here P is the normalised adjacency matrix, Y is the array of one-hot encoded labels, \(\alpha\) is a hyper-parameter, D is the degree matrix, and \(F^{*}\) is the array of predicted labels. The matrix P is obtained via equation (4), normalizing the adjacency matrix A using the square root inverse of the degree matrix D. For our experimentation, we used \(\alpha=0.5\).
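Equations (3)-(5) translate directly into a short NumPy sketch, assuming a dense adjacency matrix and zeroed rows in Y for the masked validation sentences:

```python
import numpy as np

def label_diffusion(A, Y, alpha=0.5):
    """Closed-form label diffusion of equation (5)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(d, 1e-12, None)))
    P = D_inv_sqrt @ A @ D_inv_sqrt                      # equation (4)
    F = (1 - alpha) * np.linalg.inv(np.eye(A.shape[0]) - alpha * P) @ Y
    return F.argmax(axis=1)                              # predicted class per node
```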
Furthermore, we used a two-layer Graph Convolution Network (GCN) [10] to perform classifications on the data. Inspired by the methodology of BERTGCN [12], we used the LEGAL-BERT embeddings of each sentence as the node representation for our graph, and then performed graph convolutions on it. The GCN architecture uses trainable weights to identify the optimal weightage that each neighbour of each node should have on its label. The use of two layers allows us to incorporate the context of one-hop neighbours into the label of a particular node. \[Z=f(X,A) \tag{6}\] \[=softmax(\hat{A}\cdot ReLU(\hat{A}XW^{(0)})W^{(1)}) \tag{7}\] We used equation (7) to predict the labels of the validation set. Here, \(\hat{A}\) represents the symmetrically normalized adjacency matrix, X is the feature vector which in this case is the LEGAL-BERT embeddings of the nodes, and \(W^{(i)}\) is the matrix of trainable weights in layer \(i\). The calculations required for this approach were extremely computationally expensive, so we were not able to train the model on the entire training set on a V100 server. We used half of the training documents for graph building and the prediction of labels. However, the LEGAL-BERT embeddings were generated by fine-tuning the model on all training documents. ### Context-Based LEGAL-BERT Our final approach was a Context-Based LEGAL-BERT. We cleaned each sentence by removing all stopwords (such as 'a', 'an', 'the') present using the NLTK library. Then we created a 5-sentence input corresponding to any given sentence by concatenating its two preceding sentences and its two succeeding sentences in order. These 5 sentences were separated using LEGAL-BERT's separator token </s>. Sentences at the beginning or end of a document were padded using a string of <pad> tokens. These 5-sentence inputs were then tokenized using LEGAL-BERT's tokenizer and fed into the model using the baseline parameters. We used the default classifier to perform classification on these context-based inputs.
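A sketch of the 5-sentence input construction is given below; the HuggingFace identifier for LEGAL-BERT-BASE is an assumption, stopword removal is omitted for brevity, and the tokenizer's own separator and padding tokens are used in place of the literal strings quoted above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
SEP, PAD = tokenizer.sep_token, tokenizer.pad_token

def context_input(sentences, i, window=2):
    """Concatenate sentence i with its `window` neighbours on each side,
    padding at document boundaries as described above."""
    padded = [PAD] * window + list(sentences) + [PAD] * window
    group = padded[i:i + 2 * window + 1]
    return f" {SEP} ".join(group)
```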
## 5 Results We trained the models and tested them on the validation set. The accuracy scores have been reported in Table 2. We see that the performance of these models is significantly better than the previous attempts at this problem. The improvement of the results of previously studied models can be attributed to the increase in dataset size, along with other changes in the structure of the task. However, our Context-based LEGAL-BERT approach outperforms the other frameworks by a significant margin. This shows that the context of each sentence is critically important in determining its label, and that we are successful in incorporating the context of each sentence into its representation. We saw that graph-based approaches did not significantly improve performance compared to the current state-of-the-art models. However, it is important to note that we were unable to run the Graph Convolution Network using the entire train dataset due to compute constraints. Despite such constraints, there might be other reasons for the mediocre performance of graph-based models. One possible reason is that the representation of the sentences used for building the model was not able to capture the information necessary to make better predictions. This also explains why the Context-based LEGAL-BERT performed so much better: it improved the quality of the sentence representation, successfully capturing a wider range of features pertaining to the task at hand. ## 6 Conclusion and Future Work In this paper, we tried several different techniques to perform a sentence classification task on legal documents. Through our experiments, we show that incorporating context into the CLS tokens of sentences offers a significant improvement of 5.5 percentage points over LEGAL-BERT. Moreover, through our experiments on graph-based models, we show that improving the CLS tokens results in a better classification, compared to the regular CLS tokens used in a variety of different ways. The Context-based LEGAL-BERT model was not only more accurate but also less resource-intensive. For future improvements on these models, we could try the Graph Convolutional Network approach on the complete dataset. We could also try the various methods of classification, such as a custom neural network or label diffusion, on the context-based CLS tokens. Moreover, we could further try to incorporate more sentences into the context of each target sentence. This would require the use of a Longformer-style model, since the total number of tokens passed into the model would increase.
2305.06703
Neural Fine-Gray: Monotonic neural networks for competing risks
Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses censoring in patients who do not experience the event of interest. Despite competitive performances in tackling this problem, machine learning methods often ignore other competing risks that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures the exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice.
Vincent Jeanselme, Chang Ho Yoon, Brian Tom, Jessica Barrett
2023-05-11T10:27:59Z
http://arxiv.org/abs/2305.06703v1
# Neural Fine-Gray: Monotonic neural networks for competing risks ###### Abstract Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses _censoring_ in patients who do not experience the event of interest. Despite competitive performances in tackling this problem, machine learning methods often ignore other _competing risks_ that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures the exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice. Data and Code Availability: Experiments are performed on publicly available datasets: Primary Biliary Cholangitis1 (Therneau et al., 2000), Framingham2 (Kannel and McGee, 1979), Synthetic3 (Lee et al., 2018), and the Surveillance, Epidemiology, and End Results Program4. The code to reproduce the proposed model and the presented results is available on GitHub5. Footnote 1: Available in the R survival package. Footnote 2: Available in the R riskCommunicator package. Footnote 3: Available at [https://github.com/chl8856/DeepHit](https://github.com/chl8856/DeepHit) Footnote 4: Available at [https://seer.cancer.gov/](https://seer.cancer.gov/) Footnote 5: Available at [https://github.com/Jeanselme/NeuralFineGray](https://github.com/Jeanselme/NeuralFineGray) Institutional Review Board (IRB): This research does not require IRB approval as it relies on publicly available datasets from previous studies. ## 1 Introduction ### Motivation Survival analysis involves modelling the time to an event of interest, which plays a critical role in medicine to understand disease manifestation, treatment outcomes, and the influence of different risk factors on patient health (Selvin, 2008). This analysis differs from standard regression settings as patients may not experience the outcome of interest over the study period. These _censored_ patients inform this regression as they participate in the study event-free until exiting the study. Multiple approaches have been proposed to take advantage of these patients by maximising the likelihood of the observed data. Often, in medical data, patients may experience events, known as _competing risks_, that preclude the observation of the event of interest. For instance, in modelling the time to cardiac events, patients who die from another condition during the observation period exit the study because of a competing risk. Competing risks remain overlooked despite their prevalence in medicine (Koller et al., 2012; Austin et al., 2016). Particularly, practitioners frequently consider competing risks as censoring (Austin and Fine, 2017). This practice breaks the common assumption of non-informative censoring, i.e., censored patients must leave the study for reasons independent of the outcome of interest. Considering competing risks as censoring, therefore, results in misestimating the risk of the event of interest (Fisher and Kanarek, 1974; Leung et al., 1997). 
To better tackle the problem of competing risks, one can explicitly model them through the marginal probability of observing each risk, known as the Cumulative Incidence Function (CIF). Estimation of these functions often relies on proportional hazards, parametric assumptions, or numerical integration, potentially resulting in the optimisation of a sub-optimal target misrepresenting the true underlying survival distribution. ### Contribution This work introduces a novel machine learning model to tackle the problem of competing risks. This approach generalises Rindt et al. (2022) to competing risks, leveraging monotonic neural networks to model cumulative incidence functions. The proposed method tackles the limitations of existing strategies by an exact computation of the likelihood at a lower computational cost. First, we explore the existing literature before introducing in detail our proposed model. Subsequently, we demonstrate the advantages and limitations of our approach as applied to one synthetic and three real-world medical datasets. Finally, we further investigate the Framingham dataset to underline the importance of considering competing risks in cardiovascular disease risk estimation. ## 2 Related work This section summarises the recent progress in machine learning for survival analysis. ### Time-to-event modelling Survival analysis is an active field of research in the statistical community (Kartsonaki, 2016). Non-parametric (Ishwaran et al., 2008) and parametric (Cox, 2008; Royston, 2001; Cox et al., 2007) models have been introduced to model survival outcomes. Despite these multiple alternatives and considerable proposed extensions, the original Cox proportional-hazards model (Cox, 1972) remains widely used in the medical literature (Stensrud and Hernan, 2020). This semi-parametric approach estimates the impact of covariates on the instantaneous risk of observing an event, i.e., hazard. The model assumes the hazard to take the form of the product of a non-parametric estimate of the population survival and a parametric covariate effect. This assumption is known as proportional hazards and renders tractable the model optimisation for covariate effect estimation. The machine learning community has extended the Cox model for unknown parametric forms of covariate effect. Specifically, DeepSurv (Katzman et al., 2018) replaces this otherwise parametric component with a neural network. However, this model still assumes proportional hazards that may not hold in real-world medical settings (Stensrud and Hernan, 2020). To relax this assumption, DeepCox (Nagpal et al., 2021) identifies subgroups using independent Cox models. Each subgroup is characterised by its own non-parametric baseline and covariate effect. At the intersection between DeepCox and parametric models, Nagpal et al. (2021) model each subgroup with a Weibull distribution parameterised by neural networks to allow end-to-end training. Jeanselme et al. (2022) abandon the parametric and proportional hazards assumption with unconstrained distributions learnt through monotonic networks. With a focus on predictive performance, DeepHit (Lee et al., 2018) approaches survival as a classification problem where survival prediction time is discretised. The associated task is to predict the interval at which a patient experiences the event. The model's training procedure consists of a likelihood and a ranking penalty which favours temporally coherent predictions. 
Extrapolation of this model to infinite time discretisation resembles an ordinary differential equation (ODE), as proposed in Danks and Yau (2022). The models above approximate the underlying survival likelihood either through parametric assumptions, discretisation or numerical integration. Recently, Rindt et al. (2022) proposed to overcome this challenge of likelihood estimation by deploying a constrained neural network with a monotonically increasing outcome to obtain the survival function, and, therefore, the exact likelihood. In addition to showing improved performance, the authors demonstrate that one should prefer likelihood optimisation over discriminative performance as the optimal likelihood is obtained for the true underlying survival distribution, i.e., the likelihood is a proper scoring rule. Our study is a generalisation of this work to competing risks, harnessing monotonic neural networks to directly model CIFs. ### Modelling competing risks Using the aforementioned models without consideration of competing risks would lead to a misestimation of the risk associated with the event of interest (Schuster et al., 2020). To tackle this issue, one can independently estimate each competing-risk-specific model and combine them to estimate the risk associated with a specific outcome given the non-observation of the other risks, as formulated in the cause-specific Cox model (Prentice et al., 1978). This independent estimation describes how covariates impact each event risk (Austin and Fine, 2017) but may misrepresent the relative effect of these covariates on outcomes (Austin et al., 2016) and lead to sub-optimal predictive performance. Alternatively, Fine and Gray (1999) propose to model the sub-hazards, i.e., the probability of observing a given event if the patient has not experienced this event until \(t\), under an assumption of proportionality analogous to the one made in the Cox proportional-hazards model. While providing insights into the link between covariates and risk particularly suitable for prediction (Austin and Fine, 2017), this model suffers from two shortcomings: (i) the proportionality assumption impairs its real-world applicability; (ii) this approach can result in an ill-defined survival function (Austin et al., 2021). Machine learning approaches have been extended to jointly model competing risks. DeepHit's time-discretisation results in a straightforward extension in which the output dimension is multiplied by the number of risks (Lee et al., 2018). Similarly, hierarchical discretisation (Tjandra et al., 2021) has been proposed. As parametric distributional assumptions result in a closed-form likelihood, Nagpal et al. (2021) propose to extend their mixture of Weibull distributions and Bellot and van der Schaar (2018) introduce a Bayesian mixture of Generalised Gamma distributions to tackle competing risks. Under more complex non-parametric likelihoods, numerical integration (Danks and Yau, 2022; Aastha and Liu, 2020) and pseudo-value approximations (Rahman et al., 2021) have been proposed. Finally, non-likelihood-based approaches have been introduced such as boosted trees (Bellot and van der Schaar, 2018) or survival trees (Schmid and Berger, 2021). However, these methods are optimised towards a Brier-score-like loss. 
While survival analysis has received considerable attention in the machine learning community, the problem of competing risks is less well studied (Wang et al., 2019) and even less applied (Monterrubio-Gomez et al., 2022), despite being central to medical applications. The existing methodologies to tackle competing risks rely on parametric assumptions, likelihood approximation, or optimise for a score that may misrepresent the true underlying survival distribution. This paper offers a novel competing risks model relying on constrained networks to obtain CIFs as a derivative instead of an integral. This approach results in the exact maximisation of the likelihood by leveraging automatic differentiation. ## 3 Proposed approach This section formalises the problem of survival analysis and introduces the proposed model. ### Notation We model a population of the form \(\{x_{i},t_{i},d_{i}\}_{i}\) with \(x_{i}\) the covariates for patient \(i\), \(t_{i}\in\mathbb{R}^{+}\) the time of end of follow-up and \(d_{i}\in\llbracket 0,R\rrbracket\) its associated cause. If \(d_{i}\in\llbracket 1,R\rrbracket\), the patient left the study due to one of the \(R\) considered risks. Otherwise, the patient is right-censored, i.e., the patient left the study for an _unrelated_ reason before any of the events of interest were observed. In this work, we focus on right-censoring, but the model can easily be extended to left-censoring. Note that we assume that experiencing one event precludes the observation of any other. ### Survival quantities Single risk. In settings with no competing risk, i.e., \(R=1\), one aims to estimate the _survival function_ \(S\), the probability of not observing the event of interest before time \(t\), i.e.: \[S(t|x):=\mathbb{P}(T\geq t|x)\] Equivalently, one aims to estimate the _cumulative hazard function_ \(\Lambda(t|x)\) related to \(S\) as follows: \[S(t|x):=\exp\left[-\Lambda(t|x)\right]=\exp\left[-\int_{0}^{t}\lambda(u|x)du\right]\] where \(\lambda(t|x)=\lim_{\delta t\to 0}\frac{\mathbb{P}(t<T<t+\delta t|T\geq t,\,x)}{\delta t}\) is the instantaneous hazard of observing the event of interest, assuming no previous event(s). Estimating this quantity may rely on maximising the likelihood of the observed data. The assumption of non-informative censoring, i.e., event and censoring times are independent given the covariates, is necessary to express the likelihood. Specifically, each patient \(i\) with an observed event contributes to the likelihood through the probability of experiencing the event at \(t_{i}\) without previous events, i.e., \(\lambda(t_{i}|x_{i})S(t_{i}|x_{i})\). The likelihood associated with each censored patient is the probability of not experiencing the event until \(t_{i}\), i.e., \(S(t_{i}|x_{i})\). This results in the following log-likelihood: \[l=\sum_{i,d_{i}\neq 0}\log\lambda(t_{i}|x_{i})-\sum_{i}\Lambda(t_{i}|x_{i}) \tag{1}\] Competing risks. In the context of competing risks \(R>1\), a patient may leave a study for reasons correlated with the event of interest. Practitioners often consider these events as censoring and rely on single-risk models. However, this practice breaks the common assumption of non-informative censoring and results in misestimation of the survival function. 
When other events may be observed, \(S(t\mid x)\) is defined as the probability of observing none of the competing risks before time \(t\), i.e.: \[S(t|x)=1-\sum_{r\in\llbracket 1,R\rrbracket}F_{r}(t|x)\] where \(F_{r}\), the _Cumulative Incidence Function_ (CIF) for the event \(r\), denotes the probability of observing the event \(r\) before time \(t\) without prior occurrence of any competing event(s), i.e.: \[F_{r}(t|x)=\mathbb{P}(T<t,\text{risk}=r|x) \tag{2}\] with \(T\), the random variable denoting the time of observation of any event. Note that the CIF can be expressed as an integral of the probability of observing the event in an infinitesimal interval given that no other event was observed before: \[F_{r}(t|x)=\int_{0}^{t}\lambda_{r}(u|x)\,e^{-\int_{0}^{u}\sum_{r^{\prime}}\lambda_{r^{\prime}}(s|x)\,ds}\,du \tag{3}\] with \(\lambda_{r}(t|x)=\lim_{\delta t\to 0}\frac{\mathbb{P}(t<T<t+\delta t,\,\text{risk}=r|T\geq t,\,x)}{\delta t}\), the cause-specific hazard, i.e., the instantaneous risk of observing the event \(r\), with no other previous event. A final quantity of interest is the cause-specific survival \(S_{r}(t|x)\) that expresses the probability of not observing a given outcome \(r\) by time \(t\), i.e., \[S_{r}(t|x)=\mathbb{P}((T\geq t)\;\cup\;(T<t,\text{risk}\neq r)|x)=1-F_{r}(t|x)\] Similar to the single-risk settings, we maximise the likelihood to estimate \(F_{r}\). Importantly, we assume non-informative censoring _once controlled_ on all identified competing risks. While this assumption is more likely to hold once all competing risks are accounted for, practitioners suspecting its implausibility should perform sensitivity analysis for this assumption (Jackson et al., 2014). Under this assumption, the likelihood can be expressed analogously to (1): patients with an observed event contribute to the likelihood as the probability of observing the event \(d_{i}\) at \(t_{i}\) without observing any events until \(t_{i}\), i.e., \(\lambda_{r}(t_{i}|x_{i})S(t_{i}|x_{i})\). This quantity is the partial derivative of \(F_{r}\) with respect to \(t\) evaluated at \(t_{i}\). Remaining censored patients influence the likelihood as the probability of observing no event until \(t_{i}\), i.e., \(S(t_{i}|x_{i})\). The competing risks log-likelihood can, therefore, be expressed as: \[l=\sum_{r\in[\![1,R]\!]}\sum_{i,d_{i}=r}\log\frac{\partial F_{r}(u|x_{i})}{\partial u}\bigg{|}_{u=t_{i}}+\sum_{i,d_{i}=0}\log[1-\sum_{r}F_{r}(t_{i}|x_{i})] \tag{4}\]
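As an illustration, equation (4) maps onto the following PyTorch sketch; the interface is an assumption, with `cifs` and `densities` of shape (N, R) holding \(F_{r}(t_{i}|x_{i})\) and the exact derivatives \(\partial F_{r}(u|x_{i})/\partial u|_{u=t_{i}}\) (obtained by automatic differentiation, as described in the architecture below), and \(d_{i}=0\) marking censoring:

```python
import torch

def competing_risks_nll(cifs, densities, d, eps=1e-8):
    """Negative log-likelihood of equation (4); a sketch, not the
    reference implementation."""
    survival = 1 - cifs.sum(dim=1)                       # S(t_i | x_i)
    idx = (d - 1).clamp(min=0).unsqueeze(1)              # event index; dummy 0 if censored
    event_density = densities.gather(1, idx).squeeze(1)  # dF_{d_i}/dt at t_i
    ll = torch.where(d > 0,
                     torch.log(event_density + eps),     # observed-event term
                     torch.log(survival + eps))          # censored term
    return -ll.mean()
```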
This approximation may impact performance with added computational costs for training and predictions. Integration is computationally expensive, whereas derivation can be computed exactly in one backward pass by automatic differentiation - available in most machine learning libraries. Therefore, our approach reduces the computational cost of the likelihood estimation by modelling \(F_{r}\) and differentiating it to obtain \(\lambda_{r}S\), resulting in the exact computation of all the previously described quantities of interest. ### Architecture Neural Fine-Gray, illustrated in Figure 1, aims to model \([F_{r}]_{r\in[\![1,R]\!]}\) without relying on numerical integration to tackle the problem of competing risks. We decompose \(F_{r}\) as: \[F_{r}(t|x) =\mathbb{P}(\text{risk }=r|x)\cdot\mathbb{P}(T\leq t|\text{risk }=r,x)\] \[=B(E(x))_{r}\cdot[1-\exp(-t\times M_{r}(t,E(x)))]\] Embedding network (\(E\)).A first multi-layer perceptron \(E\) with inter-layer dropout extracts an embedding \(\tilde{x}\) from the covariates \(x\). Sub-distribution networks (\([M_{r}]_{r\in[\![1,R]\!]}\)).The embedding \(\tilde{x}\) is inputted in \(R\) positive monotonic networks \([M_{r}]_{r\in[\![1,R]\!]}\) representing a lifetime distribution conditioned on one risk \(r\), through the relation \(1-\exp(-t\times M_{r}(t,\tilde{x}))=\mathbb{P}(T\leq t|x,\text{risk }=r)\). A _positive monotonic neural network_ is a network constrained to have its outcome monotonic and positive given its input (see Daniels and Velikova (2010) for theoretical analysis and Lang (2005) for proof of universal approximator). Enforcing these constraints may rely on different transformations of the neural networks' weights (Omi Figure 1: Neural Survival Analysis Architecture. \(E\)_embeds the covariate(s) \(x\), which are then inputted in the monotonic networks \(M\) and balancing network \(B\) to estimate the CIFs._ et al., 2019; Rindt et al., 2022; Chilinski and Silva, 2020). In our work, we enforce all the neural networks' weights to be positive through a square function and use a final _SoftPlus_ layer to fulfil these constraints. Enforcing positive weights ensures that the outcome increases with the time dimension \(t\). Additionally, enforcing a smooth function ensures a low computational cost and stable optimisation. Note that for model flexibility, we used \(R\) monotonic networks. We explore in Appendix B how using one network with \(R\) outcomes would impact performance. Balancing network (\(B\)).A multi-layer perceptron \(B\) with a final _SoftMax_ layer leverages \(\tilde{x}\) to balance the probability of observing each risk \(B(\tilde{x}):=[\mathbb{P}(\text{risk }=r|x)]_{r}\). This weighting ensures that the survival function is correctly specified, i.e., \(\sum_{r\in[\![1,R]\!]}F_{r}(t|x)\leq 1\). The proposed approach directly models \(F_{r}\) by multiplying the outputs of the distribution and balancing networks. Automatic differentiation of the model's output results in the derivative \(\frac{\partial F_{r}(u|x_{i})}{\partial u}\bigg{|}_{u=t_{i}}\). The model can then be trained end-to-end by maximising the _exact_ log-likelihood proposed in Equation (4). By jointly modelling the competing risks, this proposed model is reminiscent of the Fine-Gray approach. The following equation exhibits the link between sub-distribution hazards and CIFs, i.e., between the standard and neural Fine-Gray models: \[h_{r}(t|x)=\frac{1}{1-F_{r}(t|x)}\cdot\frac{\partial F_{r}(u|x)}{\partial u} \bigg{|}_{u=t}\] **Remark 1**: _Shchur et al. 
(2020) raise a limitation of monotonic neural networks that may attribute non-null density to negative times, i.e., \(F_{r}(t=0|x)\neq 0\). In contrast to Omi et al. (2019); Rindt et al. (2022), we model \(\mathbb{P}(T\leq t|\text{risk }=r,x)\) as \(1-\exp(-t\times M_{r}(t,\tilde{x}))\) instead of \(M_{r}(t,\tilde{x})\) to address this issue._ **Remark 2**: _The proposed methodology is a generalisation of the survival model SumoNet (Rindt et al., 2022) that estimates \(S\) in the single-risk setting. If \(R=1\), then \(F_{r}=1-S\) and \(B_{r}=1\). In this context, the proposed approach results in Sumo-Net. Moreover, the architecture resembles the one proposed in DeSurv (Danks and Yau, 2022) while avoiding numerical integration._ ### Computational complexity Our modelling choices result in the exact computation of the likelihood. However, the other methodologies relying on integral approximation and outcome discretisation converge towards \(F_{r}\) in the upper limit, i.e., when increasing the number of point estimates, or using a finer discretisation. One may therefore question the advantage of the proposed methodology. In this section, we compare the complexity in estimating the CIF and likelihood for DeSurv (Danks and Yau, 2022), the closest method to our proposed model, and NeuralFG. DeSurv(Danks and Yau, 2022).This approach models \(F_{r}(t|x)\) as \(\text{Tanh}(v(x,t))\) with \(v\) being the solution to the ODE defined as \(\left.\frac{\partial v(x,u)}{\partial u}\right|_{u=t}=g(x,t)\) and \(v(x,0)=0\) with \(g\), a neural network. For efficiency, the authors propose a Gauss-Legendre quadrature to solve the ODE and obtain \(v\). This approximation necessitates \(n\) evaluations of \(g\) at defined times \([t_{j}(t)]_{j\in[\![1,n]\!]}\) weighted by the associated \([w_{j}]_{j\in[\![1,n]\!]}\) (see Press et al. (2007) for a detailed description of Gauss-Legendre quadrature). Each forward pass estimates \(\left.\frac{\partial v(x,u)}{\partial u}\right|_{u=t_{j}(t)}\) at the points used to approximate the integral, then \[\hat{F}_{r}(t|x)=\text{Tanh}\left(\frac{t}{2}\sum_{j\in[\![1,n]\!]}w_{j}g(x, \frac{t}{2}t_{j}(t))\right)\] DeSurv's computational cost.Computation of \(F_{r}\) relies on \(n\) forward passes through the network. Moreover, the estimation of \(\left.\frac{\partial\hat{F}_{r}(u|x_{i})}{\partial u}\right|_{u=t_{i}}\) necessary to compute the competing risk likelihood is \(g(x,t_{i})(1-\text{Tanh}(\hat{F}_{r}(t_{i}|x))^{2})\), i.e., \(n+1\) forward passes. The likelihood has a \(\mathcal{O}(nN)\) computational complexity with \(N\) the number of patients in the study. NeuralFG's computational cost.\(F_{r}\) is estimated in one forward pass and \(\left.\frac{\partial\hat{F}_{r}(u|x_{i})}{\partial u}\right|_{u=t_{i}}\) in one backward pass. Assuming the same computational cost for forward and backward passes, the likelihood estimation has a \(\mathcal{O}(2N)\) complexity. Our proposed methodology, therefore, presents more than an \(n/2\) computational gain compared to DeSurv in estimating the likelihood used for training, and an \(n\) gain in inferring \(F_{r}\). ## 4 Experiments This section introduces the datasets and experimental settings. ### Datasets We explore the model performance on four datasets with competing risks: * PBC (Therneau et al., 2000) comprises 25 covariates in 312 patients over a 10-year randomised control trial to measure the impact of D-penicillamine on Primary Biliary Cholangitis (PBC). 
Death on the waiting list is the primary outcome with transplant being a competing risk. * Framingham (Kannel and McGee, 1979) is a cohort study gathering 18 longitudinal measurements on male patients over 20 years. Our analysis focuses on the first observed covariates of 4,434 patients to model cardiovascular disease (CVD) risk. Death from other causes is treated as a competing risk. * Synthetic (Lee et al., 2018), this dataset consists of 30,000 synthetic patients with 12 covariates following exponential event time distributions, non-linearly dependent on the covariates. * SEER6: the Surveillance, Epidemiology, and End Results Program gathers covariates and outcomes of patients diagnosed with breast cancer between 1992 and 2017. Following the preprocessing proposed by Lee et al. (2018); Danks and Yau (2022), we select 658,354 patients and 23 covariates describing the patient demographics and disease characteristics at diagnosis. Death from breast cancer (BC) is our primary outcome, with CVD, a competing risk. Footnote 6: [https://seer.cancer.gov/](https://seer.cancer.gov/) Table 1 summarises the datasets' characteristics with the respective proportion of outcome and censoring. ### Baseline models The proposed Neural Fine-Gray (**NeuralFG**) was compared against six strategies. First, we considered the well-established cause-specific Cox model (**CS Cox** Prentice et al. (1978)) and **Fine-Gray** model (Fine and Gray, 1999) with a linear parametric form for the covariate effect. The cause-specific Cox model models each cause independently using a Cox proportional-hazards model, while Fine-Gray models the sub-hazard functions assuming proportional sub-hazards. Thereafter, we compare state-of-the-art competing risk survival neural networks proposed in the machine learning literature. First, Deep Survival Machine (**DSM**, Nagpal et al. (2021)) consists of a mixture of Weibull distributions parameterised by neural networks. Each point is then assigned to these distributions through an assignment network. Using parametric distributions results in a closed-form likelihood in the competing risks setting. **DeepHit**(Lee et al., 2018) discretises the survival horizon and leverages a multi-head network to associate each patient to the interval corresponding to its observed event time and type. Each head of the network is associated with one cause as in the proposed NeuralFG. The time-discretisation results in a discrete likelihood further penalised by a C-index-like regu \begin{table} \begin{tabular}{c|c c c c c} Dataset & Observations & Features & Primary & Competing risk & Censored \\ \hline PBC & 312 & 25 & Death (44.87 \% ) & Transplant (9.29 \%) & 45.83 \% \\ Framingham & 4,434 & 18 & CVD (26.09 \%) & Death (17.75 \%) & 56.16 \% \\ Synthetic & 30,000 & 12 & * (25.33 \%) & * (24.67 \%) & 50.00 \% \\ SEER & 658,354 & 23 & BC (16.51 \%) & CVD (5.69 \%) & 77.80 \% \\ \end{tabular} \end{table} Table 1: Datasets characteristics larisation for model training. Closer to our work, **DeSurv**(Danks and Yau, 2022) approaches \(F_{r}\) as the solution to an ODE. ### Experimental settings The analysis relies on 5-fold cross-validation with \(10\%\) of each training set left aside for hyper-parameter tuning. Random search is used on the following grid over \(100\) iterations: learning rate (\(10^{-3}\) or \(10^{-4}\)), batch size (\(100\), \(250\), except for SEER: \(1,000\) or \(5,000\)), dropout rate (\(0\), \(0.25\), \(0.5\) or \(0.75\)), number of layers (\([1,4]\)) and nodes (\(25\) or \(50\)). 
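To connect Section 3.3 to code, the following is a minimal PyTorch sketch of the architecture and of the exact likelihood (4). It is our reading of the paper, not the authors' released implementation: layer sizes, initialisation, and all names are illustrative, and the squared-weight reparameterisation is one way to realise the positivity constraint described above.

```python
import torch
import torch.nn as nn

class PositiveLinear(nn.Module):
    """Linear layer whose effective weights are squared, hence non-negative."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ (self.weight ** 2).t() + self.bias

class NeuralFineGray(nn.Module):
    def __init__(self, d_x, risks=2, d_embed=50, d_hidden=50):
        super().__init__()
        self.risks = risks
        self.embed = nn.Sequential(nn.Linear(d_x, d_embed), nn.Tanh(),
                                   nn.Dropout(0.25))             # E
        self.balance = nn.Linear(d_embed, risks)                 # B (softmax below)
        self.monotone = nn.ModuleList([                          # [M_r]
            nn.Sequential(PositiveLinear(d_embed + 1, d_hidden), nn.Tanh(),
                          PositiveLinear(d_hidden, 1), nn.Softplus())
            for _ in range(risks)])

    def cif(self, x, t):
        """F_r(t|x) = B(E(x))_r * (1 - exp(-t * M_r(t, E(x)))), shape (N, R)."""
        e = self.embed(x)
        b = torch.softmax(self.balance(e), dim=1)
        m = torch.cat([net(torch.cat([t.unsqueeze(1), e], dim=1))
                       for net in self.monotone], dim=1)
        return b * (1.0 - torch.exp(-t.unsqueeze(1) * m))

def neural_fg_nll(model, x, t, d):
    """Exact negative log-likelihood of Equation (4); the derivative of F_r
    is obtained by automatic differentiation in a single backward pass."""
    t = t.clone().requires_grad_(True)
    F = model.cif(x, t)                               # (N, R)
    observed = d > 0
    F_event = F[observed, d[observed] - 1]            # F_{d_i}(t_i | x_i)
    dF = torch.autograd.grad(F_event.sum(), t, create_graph=True)[0][observed]
    ll = torch.log(dF + 1e-10).sum()                  # observed events
    ll = ll + torch.log(1.0 - F[~observed].sum(dim=1) + 1e-10).sum()  # censored
    return -ll
```

A typical training step would then be `loss = neural_fg_nll(model, x, t, d)` followed by `loss.backward()` and an Adam update, matching the experimental protocol described next.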
All activation functions are fixed to _Tanh_ to ensure a properly defined derivative - note that any \(\mathcal{C}^{1}\) activation would work. All models are optimised using an Adam optimiser (Kingma and Ba, 2015) over \(1,000\) epochs, with an early stopping criterion computed on a \(10\%\) left-aside subset of the training set. Other methods are optimised over the same grid (if applicable). Additionally, we explore both Log-Normal and Weibull distributions for DSM and use \(10,000\) warm-up iterations to estimate the parametric form closest to the average survival as proposed in the original paper (Nagpal et al., 2021). For DeSurv, we followed the original paper's recommendation of a 15-degree Gauss-Legendre quadrature to estimate the CIFs. In Appendix C.1, we further investigate how increasing the number of point estimates impacts performance. We use a similar approximation for DeepHit with a 15-split time discretisation. Finally, for a fair compari \begin{table} \begin{tabular}{c|c|c c c||c c c} \multirow{3}{*}{**\(\mathcal{C}^{1}\)**} & \multirow{3}{*}{**Model**} & \multicolumn{3}{c}{C-Index _(Larger is better)_} & \multicolumn{3}{c}{Brier Score _(Smaller is better)_} \\ & & & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline \multirow{5}{*}{**\(\mathcal{C}^{1}\)**} & \multirow{5}{*}{**Model**} & **NeuralFG** & 0.810 (0.079) & 0.795 (0.114) & 0.762 (0.123) & 0.099 (0.028) & 0.140 (0.020) & 0.169 (0.050) \\ & & DeepHit & 0.822 (0.099) & 0.844 (0.036) & 0.782 (0.033) & _0.090_ (0.030) & 0.132 (0.013) & 0.180 (0.021) \\ & & DeSurv & 0.821 (0.089) & 0.837 (0.050) & 0.815 (0.068) & **0.088** (0.022) & 0.113 (0.011) & **0.136** (0.047) \\ & & DSM & **0.867** (0.065) & **0.864** (0.037) & **0.828** (0.052) & 0.091 (0.039) & 0.124 (0.015) & 0.161 (0.022) \\ & & Fine-Gray & 0.831 (0.136) & _0.852_ (0.045) & _0.816_ (0.059) & 0.091 (0.042) & _0.103_ (0.009) & 0.150 (0.038) \\ & & CS Cox & _0.833_ (0.125) & 0.851 (0.040) & 0.811 (0.065) & 0.091 (0.038) & **0.102** (0.008) & _0.148_ (0.038) \\ \hline \multirow{5}{*}{**\(\mathcal{C}^{2}\)**} & \multirow{5}{*}{**Model**} & **NeuralFG** & **0.872** (0.024) & **0.812** (0.029) & **0.782** (0.018) & _0.050_ (0.003) & **0.095** (0.010) & **0.128** (0.004) \\ & & DeepHit & 0.855 (0.026) & 0.781 (0.026) & 0.743 (0.014) & 0.053 (0.003) & 0.102 (0.007) & 0.141 (0.002) \\ & & DeSurv & **0.872** (0.027) & _0.807_ (0.031) & 0.775 (0.022) & **0.049** (0.005) & **0.095** (0.009) & _0.129_ (0.003) \\ & & DSM & _0.866_ (0.023) & 0.806 (0.023) & _0.778_ (0.014) & 0.057 (0.005) & 0.104 (0.006) & 0.141 (0.002) \\ & & Fine-Gray & 0.842 (0.025) & 0.794 (0.024) & 0.772 (0.015) & 0.057 (0.006) & 0.099 (0.007) & 0.131 (0.003) \\ & & CS Cox & 0.845 (0.020) & 0.798 (0.022) & 0.774 (0.015) & 0.056 (0.006) & _0.098_ (0.007) & 0.131 (0.003) \\ \hline \multirow{5}{*}{**\(\mathcal{C}^{3}\)**} & \multirow{5}{*}{**Model**} & **NeuralFG** & _0.791_ (0.013) & _0.754_ (0.013) & **0.715** (0.011) & **0.068** (0.003) & _0.125_ (0.004) & **0.192** (0.005) \\ & & DeepHit & 0.783 (0.012) & 0.747 (0.013) & _0.714_ (0.008) & 0.079 (0.003) & 0.136 (0.002) & _0.212_ (0.003) \\ & & DeSurv & **0.793** (0.013) & **0.756** (0.014) & _0.714_ (0.014) & **0.068** (0.002) & **0.124** (0.004) & **0.192** (0.004) \\ & & DSM & 0.776 (0.013) & 0.742 (0.013) & 0.710 (0.013) & _0.073_ (0.002) & 0.139 (0.002) & 0.220 (0.003) \\ & & Fine-Gray & 0.611 (0.014) & 0.587 (0.007) & 0.568 (0.009) & 0.078 (0.002) & 0.159 (0.003) & 0.241 (0.002) \\ & & CS Cox & 0.609 
(0.015) & 0.586 (0.006) & 0.568 (0.009) & 0.078 (0.002) & 0.159 (0.003) & 0.240 (0.002) \\ \hline \multirow{5}{*}{**\(\mathcal{C}^{1}\)**} & \multirow{5}{*}{**Model**} & **NeuralFG** & _0.893_ (0.002) & _0.855_ (0.001) & _0.815_ (0.001) & **0.038** (0.000) & **0.069** (0.001) & **0.101** (0.000) \\ & & DeepHit & **0.899** (0.002) & **0.860** (0.001) & **0.818** (0.001) & **0.038** (0.000) & _0.070_ (0.000) & _0.102_ (0.001) \\ \cline{1-1} & & DeSurv & 0.892 (0.003) & 0.852 (0.002) & 0.813 (0.001) & **0.038** (0.000) & _0.070_ (0.000) & _0.102_ (0.001) \\ \cline{1-1} & & DSM & 0.884 (0.001) & 0.842 (0.002) & 0.805 (0.002) & _0.039_ (0.000) & 0.076 (0.001) & 0.112 (0.000) \\ \cline{1-1} & & Fine-Gray & 0.836 (0.003) & 0.786 (0.003) & 0.742 (0.002) & 0.043 (0.001) & 0.081 (0.000) & 0.118 (0.000) \\ \cline{1-1} & & CS Cox & 0.837 (0.003) & 0.786 (0.003) & 0.742 (0.002) & 0.042 (0.001) & 0.081 (0.000) & son, we double the number of possible layers for architectures without embedding networks. ### Evaluation metrics As per current practice in survival literature, we used the time-dependent Brier score (Graf et al., 1999) to quantify calibration, and the C-index (Antolini et al., 2005) for discrimination at the dataset-specific 0.25, 0.5 and 0.75 quantiles of the uncensored population event times (See Appendix A.1 for data characteristics, A.2 for further description of the metrics and A.4 for the cumulative version of these metrics). Means and standard deviations are computed over the 5 folds of cross-validation. ## 5 Results Table 2 summarises the calibration and discriminative performance of the analysed models on the primary outcome (see Appendix A.3 for the performances on the competing risk). ### Model's strengths NeuralFG demonstrates lower or equal Brier scores than other state-of-the-art machine learning models across the majority of datasets and time horizons. While DSM presents good discriminative performances, this edge is not reflected in its calibration. This observation indicates that parametric assumptions may result in estimated survival functions discriminative of the outcome but further from the underlying survival distribution. Deep-Hit penalisation results in better C-Index values but hurts model calibration, with misaligned discrimination and calibration throughout the different datasets. Finally, performances are comparable to DeSurv. However, DeSurv's likelihood approximation multiplies its computational cost by the numerical integration complexity (see Appendix C.2 for a comparison of training speed on the Framingham dataset). NeuralFG, therefore, achieves state-of-the-art performance while avoiding computationally-expensive approximations. ### Model's limitations The proposed methodology has lower performance on the PBC dataset, which notably comprises a limited amount of data. In small-data settings, practitioners should prefer simpler models to avoid overfitting. For instance, the linear Fine-Gray and CS Cox models result in competitive performances on PBC. However, this linearity assumption hurts performance under more complex covariate effects as in the SEER and Synthetic datasets. Note that leveraging domain expertise could enhance performance through the addition of interactions and the use of alternative models. However, these approaches deviate from the automated discovery of interactions facilitated by neural networks. 
Similarly, the parametric assumption of DSM results in the best discrimination in PBC, but it under-performs under more complex survival distributions. Furthermore, the DeSurv model performs better than the proposed methodology on PBC. This may reflect that approximating the likelihood can regularise model training, which is beneficial in the context of small data. \begin{table} \begin{tabular}{c|c|c c c||c c c} \multirow{2}{*}{Death} & \multirow{2}{*}{Model} & \multicolumn{4}{c}{C-Index _(Larger is better)_} & \multicolumn{4}{c}{Brier Score _(Smaller is better)_} \\ & & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline \multirow{2}{*}{CVD} & **Competing** & **0.872** (0.024) & **0.812** (0.029) & **0.782** (0.018) & **0.050** (0.003) & **0.095** (0.010) & **0.128** (0.004) \\ & Non-Competing & 0.862 (0.029) & 0.807 (0.032) & 0.780 (0.020) & 0.053 (0.004) & 0.099 (0.011) & 0.129 (0.005) \\ \hline \multirow{2}{*}{Death} & **Competing** & **0.745** (0.055) & 0.717 (0.038) & 0.713 (0.022) & **0.027** (0.003) & **0.070** (0.004) & 0.112 (0.005) \\ & Non-Competing & 0.741 (0.053) & **0.718** (0.045) & **0.719** (0.025) & **0.027** (0.003) & 0.071 (0.002) & **0.109** (0.004) \\ \end{tabular} \end{table} Table 3: Modelling competing risk - means (standard deviations) across the 5-fold cross-validation. ### Modelling vs ignoring competing risks This last section explores the importance of modelling competing risks in the Framingham dataset. First, we present the performance differences between the proposed model in comparison to the same architecture maximising the cause-specific likelihoods. Then, we explore which subgroups of the population most benefit from this modelling. Finally, we study how guidelines would differ under the proposed NeuralFG and its non-competing alternative. Why account for competing risks?To measure how modelling competing risks impacts performance, while ensuring the _same number of parameters_, we propose to use the same architecture presented in Section 3.3 whilst maximising the sum of the cause-specific likelihoods, i.e.: \[l=\sum_{r}\left[\sum_{i,d_{i}=r}\log\lambda_{r}(t_{i}|\tilde{x}_{i})-\sum_{i} \Lambda_{r}(t_{i}|\tilde{x}_{i})\right]\] Each monotonic network, therefore, models the cumulative hazard function for risk \(r\), \(\Lambda_{r}\), by maximising the likelihood of one cause whilst considering the rest of the population as censored, relying on a shared embedding \(\tilde{x}\). Automatic differentiation outputs \([\lambda_{r}]_{r\in\llbracket 1,R\rrbracket}\). Table 3 summarises the discrimination and calibration differences in the non-competing survival \(e^{-\Lambda_{r}(t|x)}\) obtained with this model and the previously described NFG's cause-specific survival \(1-F_{r}(t|x)\). Note how modelling competing risks significantly improves performance for the primary outcome of interest, CVD, without significant differences for the competing risk. Since patients who die from other causes during the study period do not present the same risk of CVD as patients remaining in the study, not accounting for all-cause mortality results in a misestimation of CVD risk. Who may benefit?One can explore which subgroups benefit the most from modelling competing risks. Intuitively, patients who are the most likely to suffer from competing risks may benefit the most from this modelling. Table 4 illustrates this with older patients benefiting the most from modelling death as a competing risk. 
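For completeness, the cause-specific ablation objective displayed above can be sketched by reusing the `NeuralFineGray` module from the earlier sketch. We assume here, as an illustrative choice not spelled out in the paper, that the same \(t\times M_{r}\) parameterisation is reused as the cumulative hazard \(\Lambda_{r}\):

```python
import torch

def cause_specific_nll(model, x, t, d):
    """Sum of per-cause single-risk likelihoods sharing one embedding;
    each monotonic network is reinterpreted as Lambda_r(t|x) = t * M_r(t, e)."""
    t = t.clone().requires_grad_(True)
    e = model.embed(x)
    ll = 0.0
    for r in range(model.risks):
        m = model.monotone[r](torch.cat([t.unsqueeze(1), e], dim=1)).squeeze(1)
        Lam = t * m                                        # Lambda_r(t_i | x_i)
        lam = torch.autograd.grad(Lam.sum(), t, create_graph=True)[0]
        ll = ll + torch.log(lam[d == r + 1] + 1e-10).sum() - Lam.sum()
    return -ll
```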
What is the impact on medical practice?The Framingham dataset was used to model the eponymous 10-year cardiovascular disease (CVD) risk score (Wilson et al., 1998). This score guides clinical practice in preventatively treating patients, usually with a combination of cholesterol-lowering therapy, e.g., statins, and holistic treatment of other CVD risk factors (Bosomworth, 2011). To minimise overtreatment and adverse side effects, accurate risk estimates are critical for targeting the population most at risk so as to maximise the benefit-risk ratio (Mangione et al., 2022). However, the original Framingham score relies on a non-competing risk model (Mangione et al., 2022; van Kempen et al., 2014). Clinical treatment often relies on a discretisation of this risk (Bosomworth, 2011): low, intermediate and high risk, at \(<10\%\), \(10-20\%\) and \(>20\%\) chance, respectively, of observing a CVD event in the following 10 years. Current guidelines in the United States suggest placing all patients with \(\geq 10\%\) risk on cholesterol-lowering drugs (Mangione et al., 2022). Furthermore, in the US alone, several million patients are on these medications (Wall et al., 2018). Therefore, even modest shifts in patient risk classification could, at scale, amount to considerable numbers either inappropriately receiving preventative treatment or inappropriately receiving none. To demonstrate how considering competing risks can fundamentally alter such risk profiling, we present in Table 5 the reclassification matrices of risk levels given competing and non-competing NeuralFG differentiated by observed outcomes for patients aged 50 or over. For instance, note that 251 deemed intermediate-to \begin{table} \begin{tabular}{c|c c c} Age & \multicolumn{3}{c}{Brier Score Difference} \\ & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline \(<40\) & -0.000 (0.000) & -0.001 (0.002) & 0.000 (0.005) \\ 40-50 & -0.001 (0.001) & -0.002 (0.003) & -0.002 (0.001) \\ 50-60 & _-0.003_ (0.005) & _-0.004_ (0.003) & _-0.006_ (0.007) \\ 60+ & **-0.013** (0.011) **-0.022** **(0.018) **-0.007** (0.024) \\ \end{tabular} \end{table} Table 4: Calibration differences - Means and standard deviations over 5-fold cross-validation. _Larger negative values correspond to better calibration for the competing risk model._ high risk by the non-competing risks model are reclassified as lower risk by the competing-risks model, who, in turn, could have avoided the initiation of therapy. These results echo the medical literature's findings of risk misestimation due to the non-consideration of competing risks in this risk score (Lloyd-Jones et al., 2004; van Kempen et al., 2014). More accurate simulations to estimate the potential lives saved and harmed through such reclassification is beyond the scope of this article but could provide insight into the possible consequences of considering competing risks. In summary, using a non-competing risk score would have important clinical consequences of over- and under-treatment (Schuster et al., 2020). More predictive models accounting for competing risks must be preferred to ensure better care. ## 6 Conclusion This work provides a solution to address competing risks that preclude the observation of the outcome of interest, often present in medical applications. We introduce Neural Fine-Gray, a monotonic neural network architecture, to tackle the problem of competing risks in survival modelling. 
The model outputs the cumulative incidence functions and, consequently, allows the exact likelihood computation. Importantly, this architecture choice achieves competitive performance while avoiding the parametric assumptions or computationally expensive approximations made by state-of-the-art survival neural networks. Further analysis of the Framingham dataset contributes to the literature, inviting practitioners to use competing-risk modelling in risk score development for improved care (Abdel-Qadir et al., 2018; Austin et al., 2016; Lloyd-Jones et al., 2004; Schuster et al., 2020). Our future work will (i) extend this architecture to model other modalities such as time series as in Nagpal et al. (2021) and, (ii) explore medically interpretable survival clusters as presented in Jeanselme et al. (2022); Nagpal et al. (2022). ## Acknowledgments This work was supported by The Alan Turing Institute's Enrichment Scheme and partially funded by UKRI Medical Research Council (MC_UU_0002/5 and MC_UU_0002/2). \begin{table} \end{table} Table 5: Reclassification matrices between competing and non-competing risk scores for patients older than 50. _Red (resp. blue) shows when the competing risks score is less aligned with the 10-year observed outcome than the non-competing model (resp. more aligned). Note that censored patients are ignored._
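For illustration, reclassification tables of this kind reduce to a cross-tabulation of banded risks. In the sketch below the two prediction vectors are synthetic stand-ins, not the paper's model outputs, for the competing and non-competing 10-year CVD CIFs:

```python
import numpy as np
import pandas as pd

def risk_band(p):
    """Framingham-style 10-year risk bands: <10%, 10-20%, >20%."""
    return pd.cut(p, bins=[0.0, 0.10, 0.20, 1.0],
                  labels=["low", "intermediate", "high"], right=False)

rng = np.random.RandomState(0)
cif_competing = rng.uniform(0.0, 0.5, size=1000)             # placeholder predictions
cif_non_competing = np.clip(cif_competing + 0.03, 0.0, 1.0)  # toy upward shift

reclass = pd.crosstab(risk_band(cif_non_competing), risk_band(cif_competing),
                      rownames=["non-competing"], colnames=["competing"])
print(reclass)
```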
2304.03552
A physics-informed neural network framework for modeling obstacle-related equations
Deep learning has been highly successful in some applications. Nevertheless, its use for solving partial differential equations (PDEs) has only been of recent interest with current state-of-the-art machine learning libraries, e.g., TensorFlow or PyTorch. Physics-informed neural networks (PINNs) are an attractive tool for solving partial differential equations based on sparse and noisy data. Here we extend PINNs to solve obstacle-related PDEs, which present a great computational challenge because they necessitate numerical methods that can yield an accurate approximation of a solution that lies above a given obstacle. The performance of the proposed PINNs is demonstrated in multiple scenarios for linear and nonlinear PDEs subject to regular and irregular obstacles.
Hamid El Bahja, Jan Christian Hauffen, Peter Jung, Bubacarr Bah, Issa Karambal
2023-04-07T09:22:28Z
http://arxiv.org/abs/2304.03552v1
# A physics-informed neural network framework for modeling obstacle-related equations ###### Abstract. Deep learning has been highly successful in some applications. Nevertheless, its use for solving partial differential equations (PDEs) has only been of recent interest with current state-of-the-art machine learning libraries, e.g., TensorFlow or PyTorch. Physics-informed neural networks (PINNs) are an attractive tool for solving partial differential equations based on sparse and noisy data. Here we extend PINNs to solve obstacle-related PDEs, which present a great computational challenge because they necessitate numerical methods that can yield an accurate approximation of a solution that lies above a given obstacle. The performance of the proposed PINNs is demonstrated in multiple scenarios for linear and nonlinear PDEs subject to regular and irregular obstacles. Key words and phrases: Physics-informed neural networks, Obstacle problems, Partial differential equations, Scientific machine learning 1991 Mathematics Subject Classification: 05B20, 05B30 ## 1. Introduction In [36, 12, 33] the authors proposed an efficient numerical scheme for solving some obstacle problems based on a reformulation of the obstacle in terms of \(L^{1}\) and \(L^{2}\)-like penalties on the variational problem. However, as expected, the relaxed problem using an \(L^{1}\)-penalty is non-differentiable, and the \(L^{2}\)-penalty, which is parametrized by a coefficient depending on a parameter \(\varepsilon\), requires that the parameter go to infinity for the solution to be exact. Recently, in [9] the authors introduce a penalty method where a penalized weak formulation, in the sense of [36, 33], is minimized by using a deep neural network. Nevertheless, this method is tailored to each numerical example since the penalty parameters are tuned manually, which requires knowledge of the exact solution a priori in order to obtain good approximations. For more numerical approaches see [43, 21, 26] and references therein. Despite their effectiveness, each of the previously mentioned methods has its own limitations, such as lack of convergence, non-differentiability, and domain discretization dependency. Also, in most cases, these methods have to be specifically tailored to a given problem setup and cannot be easily adapted to build a general framework for seamlessly tackling problems involving various types of partial differential equations related to an obstacle constraint. Furthermore, to the best of our knowledge, none of these existing techniques can be easily applied to data assimilation problems where only a finite number of sparse measurement points within the domain of interest are available. To overcome such setbacks, we propose an NN-based approach, motivated by the high expressive power of NNs in function approximation [30, 28], recent advances in parallelized hardware, automatic differentiation [6, 5], and stochastic optimization [11]. In particular, we concentrate on physics-informed neural networks (PINNs) [31, 10, 27, 22]. This method has been used for numerically computing the solution of Schrodinger, Allen-Cahn, and Navier-Stokes equations [32, 31]. It has also been used for the approximation of the solution of high-dimensional stochastic PDEs [17]. As pointed out in [31], this approach can be considered as a class of reinforcement learning [23], where the learning is based on maximizing an incentive or minimizing a loss rather than direct training on data.
If the network prediction does not satisfy a governing equation, it will result in an increase in the cost and therefore the learning traverses a path that should minimize that cost. So far, most of the cases considered in the previous references are related to problems where the latent solution of the PDE has no constraints and no inter-facial phenomena occur which is the case for obstacle problems. To our knowledge, physically informed neural networks applied to obstacle problems have not yet been studied specially and systematically. In this paper, by giving noisy measurements of the collocation points, we try to answer the following question: "What is the Physics-informed neural networks approach to finding the equilibrium position and contact area of an elastic membrane whose boundary is held fixed, and which is constrained to lie above a given obstacle?". To answer this question, we must solve the problem of computing data-driven solutions to the partial differential equations of the following general form \[\begin{cases}\mathcal{N}[u](x)=f(x)&\text{in }\Omega,\\ u(x)=g(x)&\text{on }\partial\Omega,\\ u(x)\geq\varphi(x)&\text{in }\Omega,\end{cases} \tag{1.1}\] where \(\mathcal{N}\) is a differential operator, \(u:\overline{\Omega}\longrightarrow\mathbb{R}\) is the latent solution, \(\Omega\) is a bounded domain with Lipschitz regular boundary in \(\mathbb{R}^{N}\), \(\overline{\Omega}\) and \(\partial\Omega\) are respectively the closure and the boundary of \(\Omega\), \(f\) and \(g\) are fixed values functions, and \(\varphi:\overline{\Omega}\longrightarrow\mathbb{R}^{N}\) is a given obstacle. In this regard, our specific contributions can be summarized as follows: * We present a PINNs framework that can be applied to various linear and nonlinear PDEs subject to regular and irregular obstacles. * We show the effectiveness of our PINNs throughout many numerical experiments in solving various types of obstacle problems. The paper is organized as follows. In Section 2, we give a mathematical and geometrical description of obstacle problems, In Section 3, we present a detailed description of our proposed PINNs. In Section 4, we demonstrate the effectiveness of our proposed PINNs through the lens of two representative case studies, including linear PDEs with regular and irregular obstacles, and nonlinear PDEs with regular and irregular obstacles. Finally, we conclude the paper in Section 5. ## 2. Mathematical overview In this section, we take \(\mathcal{N}[u]=\Delta u\) and \(f=0\). Therefore, (1.1) becomes a problem of minimization of the following energy functional \[F(u)=\int_{\Omega}|\nabla u|^{2}\ dx \tag{2.1}\] among all functions \(u\) satisfying \(u\geq\varphi\) in \(\Omega\), for a given obstacle \(\varphi\). The Euler-Lagrange equation of such a minimization problem is \[\begin{cases}u\geq\varphi&\text{in }\Omega,\\ \Delta u=0&\text{in }\{u\geq\varphi\},\\ -\Delta u\geq 0&\text{in }\Omega.\end{cases}\] In other words, the solution \(u\) is above the obstacle \(\varphi\), it is harmonic whenever it does not touch the obstacle, and it is superharmonic everywhere. The domain \(\Omega\) will be split into two regions: one in which the solution \(u\) is harmonic, and one in which the solution equals the obstacle. The latter region is known as the contact set \(\{u=\varphi\}\subset\Omega\). The interface that separates these two regions is the free boundary. 
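To make the contact set and free boundary tangible before the PINN construction, the following NumPy sketch, ours and not from the paper, solves a discretised one-dimensional analogue of (2.1) with projected Gauss-Seidel, a classical baseline; the obstacle here is an arbitrary illustrative choice:

```python
import numpy as np

# Discretise a 1-D analogue of (2.1) on [0, 1] with zero boundary values.
n = 200
x = np.linspace(0.0, 1.0, n)
phi = 0.5 - 8.0 * (x - 0.5) ** 2   # illustrative concave obstacle, negative at 0 and 1
u = np.zeros(n)

# Projected Gauss-Seidel: relax towards harmonicity, then project onto {u >= phi}.
for _ in range(5_000):
    for i in range(1, n - 1):
        u[i] = max(0.5 * (u[i - 1] + u[i + 1]), phi[i])

contact = u <= phi + 1e-10         # discrete contact set {u = phi}
print("contact region:", x[contact].min(), "to", x[contact].max())
```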
For instance, in the context of financial mathematics, this type of problem appears as a model for pricing American options [29, 24]. The function \(\varphi\) represents the option's payoff, and the contact set is the exercise region. Notice that, in this context, the most important unknown to understand is the exercise region, i.e., one wants to find and/or understand the two regions \(\{x\in\Omega:u(x)=\varphi(x)\}\), in which we should exercise the option, and \(\{x\in\Omega:u(x)>\varphi(x)\}\), in which we should wait and not exercise the option yet. The free boundary is the separating interface between these two regions. Figure 1. The contact set and the free boundary in the classical obstacle problem. It is worth noting that several studies guarantee the existence and the uniqueness of the solution to obstacle problems; see for example [3, 7] and references therein. ## 3. Physics-informed neural networks (PINNs) In this section, we present the physics-informed neural network formulation of our obstacle problem. To this end, following the original work of Raissi et al. [31], we assume that the solution \(u(x)\) of (1.1) can be approximated by using a feed-forward \(\alpha\)-layer neural network \(u(x;\theta)\), where \(\theta\) is the collection of all parameters in the network, such that \[u(x;\theta)=\Sigma^{\alpha}\circ\Sigma^{\alpha-1}\circ...\circ\Sigma^{1}(x),\] where \[\Sigma^{i}\left(z\right)=\sigma^{i}\left(W^{i}z+b^{i}\right),\text{ for }i=1,..,\alpha.\] In the above, the symbol \(\circ\) denotes composition of functions, \(i\) is the layer number, \(x\) is the input to the network, \(\Sigma^{\alpha}\) is the output layer of the network, and \(W^{i}\) and \(b^{i}\) are respectively the weight matrices and bias vectors of layer \(i\), all collected in \(\theta=\{W^{i},b^{i}\}_{i=1}^{\alpha}\). \(\sigma^{i}\) is the (point-wise) activation function for \(i=1,..,\alpha-1\). In our numerical experiments below, the hyperbolic-tangent function is used for all the hidden layers; it is a preferable activation function due to its smoothness and non-zero derivative. Consequently, the parameters \(\theta\) of \(u(x;\theta)\) can be learned by minimizing the following composite loss function \[\mathcal{L}(\theta)=\mathcal{L}_{r}(\theta)+\mathcal{L}_{b}(\theta), \tag{3.1}\] where \[\mathcal{L}_{r}(\theta) =\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|H(u(x_{r}^{i};\theta)-\varphi(x_{r}^{i}))\cdot R(x_{r}^{i};\theta)+\text{ReLu}(\varphi(x_{r}^{i})-u(x_{r}^{i};\theta))\right|^{2}, \tag{3.2}\] \[\mathcal{L}_{b}(\theta) =\frac{\lambda_{b}}{N_{b}}\sum_{i=1}^{N_{b}}\left|u(x_{b}^{i};\theta)-g(x_{b}^{i})\right|^{2}, \tag{3.3}\] such that \(R(x;\theta)\) is the PDE residual of (1.1) defined as \[R(x;\theta)=\mathcal{N}[u](x;\theta)-f(x),\] \(H\) is the Heaviside step function defined as \[H(x)=\begin{cases}1&\text{ if }x\geq 0,\\ 0&\text{ otherwise,}\end{cases}\] and \(N_{b}\) and \(N_{r}\) denote the batch sizes for the training data \(\{x_{b}^{i},g(x_{b}^{i})\}_{i=1}^{N_{b}}\) and \(\{x_{r}^{i},f(x_{r}^{i})\}_{i=1}^{N_{r}}\) respectively, which can be randomly sampled at each iteration of a gradient descent algorithm. The parameter \(\lambda_{b}\) denotes a weight coefficient in the loss function, and can effectively assign a different weighting to each individual loss term. The parameter \(\lambda_{b}\) may be user-specified or tuned manually or automatically. Figure 2. A schematic of network architecture used for constructing a PINNs-based obstacle model.
In this work, the weight coefficient \(\lambda_{b}\) is auto-tuned by using the back-propagated gradient statistics during training [39]. Since the residual loss function \(\mathcal{L}_{r}(\theta)\) includes derivatives of the neural network approximation, it is possible to compute these derivatives using automatic differentiation [5] at any point in \(\Omega\) without the need for manual computation. Training the residual and the obstacle constraint separately tends to fail to converge in our case. In order to overcome this difficulty, we propose \(\mathcal{L}_{r}(\theta)\) as introduced in (3.2), such that we first optimize the network parameters of \(|R(x_{r}^{i};\theta)|^{2}\) with those of \(|\text{ReLu}(\varphi(x_{r}^{i})-u(x_{r}^{i};\theta))|^{2}=0\) since \(H(u(x_{r}^{i};\theta)-\varphi(x_{r}^{i}))=1\), then vice versa for all \(i=1,..,N_{r}\), and the process is iterated until the desired tolerance is achieved. We can summarize our PINNs method in Figure 2 and the following algorithm:
```
Input: {x_r^i}_{i=1}^{N_r}, {x_b^i}_{i=1}^{N_b}, lambda_b = 1, tol > 0
1 Initialize the neural network on Omega.
2 Predict the PINNs solution u(.;theta).
3 Define the residual R(.;theta).
4 Define ReLu(phi(.) - u(.;theta)) to penalize predictions u(.;theta) that lie under the obstacle phi.
5 while (1e-2) * |R(.;theta)|^2 + |ReLu(phi(.) - u(.;theta))|^2 > tol do
6   Compute the loss L(theta) = L_r(theta) + L_b(theta).
7   Update the weight coefficient lambda_b using [39].
8   Train the loss L(theta) by using the Adam optimizer.
9   Update weights and biases.
Output: R(x_r;theta) ~ 0, u(x_b;theta) ~ g(x_b), and u(.;theta) >= phi(.)
```
**Algorithm 1** PINNs method for obstacle problems ## 4. Numerical results In this section, we apply our PINNs approach introduced above to various numerical examples. Throughout all case studies we will use fully-connected neural networks to approximate the latent functions representing PDE solutions and unknown boundaries; the needed hyper-parameters for each obstacle are summarized in Table 1. ### One-dimensional linear obstacle problems To check the performance of our proposed PINNs, we first restrict ourselves to studying a variety of one-dimensional regular obstacle problems previously considered in [14, 34]. Therefore, we have the following Poisson's equation \[-\frac{\partial^{2}u}{\partial^{2}x}=0,\text{ for }x\in\Omega=(0,1), \tag{4.1}\] subject to the Dirichlet boundary condition \[u(1)=u(0)=0, \tag{4.2}\] and to the following smooth one-dimensional obstacle \[\varphi_{1}(x)=\begin{cases}100x^{2}&\text{ for }0\leq x\leq 0.25,\\ 100x(1-x)-12.5&\text{ for }0.25\leq x\leq 0.75,\\ 100(1-x)^{2}&\text{ for }0.75\leq x\leq 1,\end{cases} \tag{4.3}\] such that \(u\geq\varphi_{1}\) over \(\Omega=(0,1)\). The corresponding analytic solution of (4.1) under constraints (4.2) and (4.3) is \[u(x)=\begin{cases}(100-50\sqrt{2})x&\text{ for }0\leq x\leq\frac{1}{2\sqrt{2}}, \\ 100x(1-x)-12.5&\text{ for }\frac{1}{2\sqrt{2}}\leq x\leq 1-\frac{1}{2\sqrt{2}},\\ (100-50\sqrt{2})(1-x)&\text{ for }1-\frac{1}{2\sqrt{2}}\leq x\leq 1.\end{cases} \tag{4.4}\] Recall that Poisson's equation is one of the pivotal parts of electrostatics, where the solution is the potential field caused by a given electric charge or mass density distribution.
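A condensed PyTorch sketch of Algorithm 1 for this first test case (4.1)-(4.3) may look as follows; the width and depth loosely follow the \(\varphi_{1}\) row of Table 1, \(\lambda_{b}\) is kept fixed here instead of being auto-tuned as in [39], and all variable names are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 24), nn.Tanh(), nn.Linear(24, 24), nn.Tanh(),
                    nn.Linear(24, 24), nn.Tanh(), nn.Linear(24, 1))

def phi1(x):  # obstacle (4.3)
    return torch.where(x <= 0.25, 100.0 * x ** 2,
           torch.where(x <= 0.75, 100.0 * x * (1.0 - x) - 12.5,
                       100.0 * (1.0 - x) ** 2))

x_r = torch.rand(5000, 1, requires_grad=True)   # collocation points in (0, 1)
x_b = torch.tensor([[0.0], [1.0]])              # boundary points, g = 0 by (4.2)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam_b = 1.0                                     # fixed here; auto-tuned in [39]

for step in range(20_000):
    u = net(x_r)
    du = torch.autograd.grad(u.sum(), x_r, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_r, create_graph=True)[0]
    residual = -d2u                             # R = N[u] - f with f = 0
    gap = phi1(x_r) - u                         # positive where u < phi
    above = (gap <= 0).float()                  # Heaviside H(u - phi)
    loss_r = ((above * residual + torch.relu(gap)) ** 2).mean()  # Eq. (3.2)
    loss_b = lam_b * (net(x_b) ** 2).mean()                      # Eq. (3.3)
    opt.zero_grad()
    (loss_r + loss_b).backward()
    opt.step()
```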
To this end, we will show that our deep neural network \(u(x;\theta)\) approximates the latent solution \(u(x)\) of (4.1) and satisfies the Dirichlet boundary condition and the obstacle constraint. Subsequently, we can demonstrate that our PINNs also give a good approximation to the solution of the Poisson equation (4.1) under another smooth obstacle \(\varphi_{2}\) defined as follows \[\varphi_{2}(x)=\begin{cases}10\sin(2\pi x)&\text{ for }0\leq x\leq 0.25,\\ 5\cos(\pi(4x-1))+5&\text{ for }0.25\leq x\leq 0.75,\\ 10\sin(2\pi(1-x))&\text{ for }0.75\leq x\leq 1.\end{cases} \tag{4.5}\] \begin{table} \begin{tabular}{||c c c c c c||} \hline Obstacle & \(N_{b}\) & \(N_{r}\) & Layers & Nodes & \(tol\) \\ \hline \hline \(\varphi_{1}\) & 200 & 5000 & 6 & 24 & 5e-4 \\ \(\varphi_{2}\) & 100 & 10000 & 8 & 30 & 1e-3 \\ \(\varphi_{3}\) & 10 & 5000 & 4 & 24 & 4e-5 \\ \(\varphi_{4}\) & 10 & 5000 & 4 & 24 & 2e-6 \\ \(\varphi_{5}\) & 50 & 10000 & 3 & 24 & 3e-6 \\ \(\varphi_{6}\) & 50 & 14000 & 4 & 20 & 3.7e-4 \\ \hline \end{tabular} \end{table} Table 1. Hyper-parameter settings employed throughout all numerical experiments presented in this work. The corresponding exact solution such that \(u\geq\varphi_{2}\) is \[u(x)=\begin{cases}10\sin(2\pi x)&\text{ for }0\leq x\leq 0.25,\\ 10&\text{ for }0.25\leq x\leq 0.75,\\ 10\sin(2\pi(1-x))&\text{ for }0.75\leq x\leq 1.\end{cases} \tag{4.6}\] Figures 3 and 4 present visual comparisons with the exact solution \(u\) for obstacles \(\varphi_{1}\) and \(\varphi_{2}\) and the Dirichlet boundary condition (4.2). As can be seen, the approximations are in good agreement with the exact solutions. These figures indicate that our framework is able to obtain accurate predictions with a relative \(L^{\infty}\)-error. Figure 4. One-dimensional \(\varphi_{2}\)-obstacle Poisson’s equation: (left) The predicted solution against the exact solution (4.6). (right) A plot of the pointwise \(L^{\infty}\)-error estimation. Figure 3. One-dimensional \(\varphi_{1}\)-obstacle Poisson’s equation: (left) The predicted solution against the exact solution (4.4). (right) A plot of the pointwise \(L^{\infty}\)-error estimation. For the sake of completeness, we also considered an example with the same Poisson's equation with Dirichlet boundary and an irregular obstacle \(\varphi_{3}\) defined as follows \[\varphi_{3}(x)=\begin{cases}5x-0.75&\text{ for }0.15\leq x<0.2,\\ 1&\text{ for }0.2\leq x\leq 0.8,\\ -5x+4.25&\text{ for }0.8<x\leq 0.85,\\ 0&\text{ for else}\end{cases} \tag{4.7}\] This obstacle has been used and can be traced back to [34], albeit with a typo in its definition. Therefore, applying Algorithm 1 to equation (4.1) with Dirichlet boundary (4.2) and under the irregular obstacle \(\varphi_{3}\) gives a PINNs solution \(u(x,\theta)\) that fulfills the boundary condition and above the obstacle with some error in the rough corners of the obstacle. Therefore we cannot expect a very high accuracy similar to regular obstacles \(\varphi_{1}\) and \(\varphi_{2}\). This result is similar to the one given by the penalty method presented in [34]. ### One-dimensional nonlinear obstacle problem For our next case, we intend to demonstrate further that the proposed PINNs can also be applied to non-linear problems. 
Accordingly, we study a classical model of stretching a non-linear elastic membrane over a fixed obstacle defined as \[-\frac{\partial}{\partial x}\left(\frac{\frac{\partial u}{\partial x}}{ \sqrt{\left(1+\left|\frac{\partial u}{\partial x}\right|^{2}\right)}}\right) =0,\text{ for }x\in\Omega=(0,1), \tag{4.8}\] subject to non-homogeneous Dirichlet boundary condition \[u(0)=5,\text{ }u(1)=10, \tag{4.9}\] and to an obstacle \(\varphi_{4}\) which is defined by the following oscillatory function \[\varphi_{4}(x)=10\sin^{2}\pi(x+1)^{2} \tag{4.10}\] such that \(u\geq\varphi_{4}\) over \(\Omega=(0,1)\). Therefore, by using Algorithm 1 for equation (4.8) according to constraints (4.9) and (4.10), the PINNs solution is identical to the obstacle on Figure 5. One-dimensional \(\varphi_{3}\)-obstacle Poisson’s equation: The predicted PINNs solution. the contact set, and straight lines away from it as can be seen in Figure 6 which is similar to the results appeared in [36, 43]. ### Two-dimensional obstacle Our next numerical example is a two-dimensional (2D) problem defined on the domain \(\Omega=[-2,2]\times[-2,2]\) such that \[-\frac{\partial^{2}u}{\partial^{2}x}-\frac{\partial^{2}u}{\partial^{2}y}=0,\ \text{on}\ \Omega, \tag{4.11}\] with the following obstacle \[\varphi_{5}(x,y)=\begin{cases}\sqrt{1-x^{2}-y^{2}},&\text{ for }x^{2}+y^{2}\leq 1,\\ -1,&\text{ otherwise}.\end{cases} \tag{4.12}\] This obstacle-related equation has been widely used by many authors to show the accuracy of their proposed method [36, 43]. Since \(\varphi_{5}\) is a radially-symmetric obstacle, the analytical solution of (4.11) subject to (4.12) is also radially-symmetric such that \[u(x,y)=\begin{cases}\sqrt{1-x^{2}-y^{2}},&\text{ for }x^{2}+y^{2}\leq \beta,\\ \\ -\beta^{2}\frac{\log\left(\frac{\sqrt{x^{2}+y^{2}}}{2}\right)}{\sqrt{1-\beta^ {2}}}&\text{ otherwise},\end{cases} \tag{4.13}\] where \(\beta=0.6979651482...\) which satisfies \(\beta^{2}\left(1-\log\left(\frac{\beta}{2}\right)\right)=1.\) Our PINNs solution and the \(L^{\infty}\)-pointwise difference with the exact solution are presented in Figure 7. It can be seen that the error is focused as peaks near the free boundary, where the function \(u(x,y)\) is no longer \(C^{2}\) (two times differentiable on \(\Omega\))and is relatively small elsewhere. For our final numerical example, we introduce a nonlinear non-harmonic equation of the form \[\begin{cases}-\text{div}\left(|\nabla u|^{p-2}\nabla u\right)+1=0,&\text{ in }\Omega,\\ u=0,&\text{ on }\partial\Omega,\end{cases}\] Figure 6. The PINNs solution of the nonlinear obstacle problem (4.10)-(4.12) which is known as the anisotropic p-Laplacian equation, which is related to many physical and biological models [3, 4]. We test our PINNs framework for \((p=4)\)-Laplacian over the domain \(\Omega=[0,2]\times[0,2]\) subject to the following irregular obstacle \[\varphi_{6}(x,y)=\begin{cases}1,&\text{ for }0.5\leq x\leq 1.5,\\ 0,&\text{ otherwise.}\end{cases}\] As shown in [33], the related exact solution is the following \[u(x,y)=\begin{cases}\frac{3}{4}|x+7.75086|^{\frac{4}{3}}-11.50434,&\text{ if }x<0.5,\\ 1,&\text{ if }0.5\leq x\leq 1.5,\\ \frac{3}{4}|-x+9.75086|^{\frac{4}{3}}-11.50434,&\text{ if }x>1.5\end{cases}\] Our numerical solution and the pointwise \(L^{\infty}\)-error between exact and PINNs solution are Figure 8. Quasi-linear 2D \(p\)-Laplacian obstacle problem: (left) The predicted PINNs solution. (right) The relative pointwise \(L^{\infty}\)-error map. Figure 7. 
Radially symmetric 2D obstacle problem: (left) The predicted PINNs solution. (right) The relative pointwise \(L^{\infty}\)-error map. presented in Figure 8. Despite the discontinuity of the obstacle which affects the regularity of the solution, our PINNs solution is smooth away from the obstacle and agrees well with the obstacle on its support. Also, it can be seen that the error is concentrated on the boundary of the obstacle, where the function \(u(x,y)\) is discontinuous and is relatively small elsewhere. ## 5. Conclusions and Outlook In summary, we present a PINNs framework for modeling obstacle-related PDEs. The distinguishing feature of our proposed PINNs is the ability to train simultaneously the residual of our PINNs solution to be close to zero if the obstacle constraint is satisfied and to train the PINNs solution to be above the obstacle if else, which gives us the ability to predict the equilibrium position of the solution whose boundary is fixed and which is constrained to lie above the given obstacle. Furthermore, we have tested the effectiveness of our PINNs across a series of numerical case studies involving different formulations of the classical one- and two-dimension PDEs with regular and irregular obstacles. As demonstrated by the numerical experiments presented here, the proposed computational framework is general and flexible in the sense that it requires minimal implementation effort in order to be adapted to different kinds of obstacle problems. The proposed PINNs do not require a large amount of labeled data generated in advance using high-fidelity simulation tools. It also does not rely on domain discretization which is typically done in the classical numerical methods. Despite the success exhibited by the proposed PINNs, it still has some limitations such as the computational cost being generally much larger than that of the methods mentioned in the introductory section since a large number of optimization iterations are required but this can be alleviated via offline training [40, 41]. We could also change the NN architecture including the activation function type, NN width/depth, and connections between different hidden layers such as cutting and adding certain connections. We can tune these attributes of NN architecture automatically by leveraging meta-learning techniques [13, 42]. For long-time integration, one can also use time-parallel methods to simultaneously compute on multiple GPUs for shorter time domains. Another challenging question pertains to handling irregular obstacles with a complicated geometry that may include sharp cusps, mushy regions, or discontinuities which we are trying to solve in our future work. Moreover, although we are using the strong form of PDEs which is easy to execute by automatic differentiation, other weak/variational forms can also be effective, although they require the use of quadrature grids. More generally, here we have to admit that we are still in the very early stages of rigorously understanding the capabilities and limitations of PINNs. In future works, we will investigate the form of the loss function in order to avoid excessive local minima and give pretty neat approximations to more complicated obstacle cases. On the other hand, we will investigate also some interesting related problems, in particular, the double obstacle problems where the function is constrained to lie above one obstacle function and below another, level surfaces, and free boundary problems. 
## Funding This work was partially supported by DAAD grants 57417688 and 57512510.
2305.08048
Towards Understanding the Generalization of Graph Neural Networks
Graph neural networks (GNNs) are the most widely adopted model in graph-structured data oriented learning and representation. Despite their extraordinary success in real-world applications, understanding their working mechanism by theory is still on primary stage. In this paper, we move towards this goal from the perspective of generalization. To be specific, we first establish high probability bounds of generalization gap and gradients in transductive learning with consideration of stochastic optimization. After that, we provide high probability bounds of generalization gap for popular GNNs. The theoretical results reveal the architecture specific factors affecting the generalization gap. Experimental results on benchmark datasets show the consistency between theoretical results and empirical evidence. Our results provide new insights in understanding the generalization of GNNs.
Huayi Tang, Yong Liu
2023-05-14T03:05:14Z
http://arxiv.org/abs/2305.08048v1
# Towards Understanding the Generalization of Graph Neural Networks ###### Abstract Graph neural networks (GNNs) are the most widely adopted model in graph-structured data oriented learning and representation. Despite their extraordinary success in real-world applications, understanding their working mechanism by theory is still on primary stage. In this paper, we move towards this goal from the perspective of generalization. To be specific, we first establish high probability bounds of generalization gap and gradients in transductive learning with consideration of stochastic optimization. After that, we provide high probability bounds of generalization gap for popular GNNs. The theoretical results reveal the architecture specific factors affecting the generalization gap. Experimental results on benchmark datasets show the consistency between theoretical results and empirical evidence. Our results provide new insights in understanding the generalization of GNNs. Machine Learning, Generalization, Generalization, Graph Neural Networks ## 1 Introduction Graph-structured data (Zhu et al., 2021) exists widely in real-world applications. As one of the most powerful tools to process graph-structured data, GNNs (Gori et al., 2005; Scarselli et al., 2009) are widely adopted in Computer Vision (Qi et al., 2017; Johnson et al., 2018; Landrieu and Simonovsky, 2018; Satorras and Estrach, 2018), Natural Language Processing (Bastings et al., 2017; Beck et al., 2018; Song et al., 2018), Recommendation Systems (Ying et al., 2018; Fan et al., 2019; He et al., 2020; Deng et al., 2022), AI for Science (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021; Shen et al., 2021; Han et al., 2022), to name a few. There are two main ways to view modern GNNs, _i.e._, spatial domain perspective (Kipf and Welling, 2017; Velickovic et al., 2018; Xu et al., 2018; Xu et al., 2018) and spectral domain perspective (Defferrard et al., 2016; Gasteiger et al., 2019; Liao et al., 2019; Chien et al., 2021; He et al., 2021). The former regards GNN as the process of combining and updating features according to adjacent relationships. The latter treats GNN as a filtering function applied on input features. Recent developments of GNNs are summarized in (Zhou et al., 2020; Wu et al., 2021; Zhang et al., 2022). Despite the empirical success of GNNs, establishing theories to explain their behaviors is still in its infancy. Recent works towards this direction includes understanding oversmoothing (Li et al., 2018; Zhao and Akoglu, 2020; Oono and Suzuki, 2020; Rong et al., 2020), interpretability (Ying et al., 2019; Luo et al., 2020; Vu and Thai, 2020; Yuan et al., 2020; 2021), expressiveness (Xu et al., 2019; Chen et al., 2019; Maron et al., 2019; Dehmanny et al., 2019; Feng et al., 2022), and generalization (Scarselli et al., 2018; Du et al., 2019; Verma and Zhang, 2019; Garg et al., 2020; Zhang et al., 2020; Oono and Suzuki, 2020; Lv, 2021; Liao et al., 2021; Esser et al., 2021; Cong et al., 2021). This work focuses on the last branch. Some previous works adopt the classical techniques such as Vapnik-Chervonenkis dimension (Scarselli et al., 2018), Rademacher complexity (Lv, 2021; Garg et al., 2020) and algorithm stability (Verma and Zhang, 2019) to provide generalization bounds for GCN (Kipf and Welling, 2017) and more general message passing neural networks. However, in their analysis, the original graph is split into subgraphs composed of central node and its neighbors, which are treated as independent samples. 
This setting significantly differs from real implementation that training nodes are sampled without replacement from full nodes and the test nodes are visible during training (El-Yaniv and Pechyony, 2007; Oono and Suzuki, 2020), resulting a gap between theory and practice. To tackle this issue, recent works (Oono and Suzuki, 2020; Esser et al., 2021) incorporate the learning schema of GNNs into the category of transductive learning and derive more realistic results. However, there are still some drawbacks of these works. First, the analysis in (Oono and Suzuki, 2020) is oriented to multi-scale GNNs that differ a lot from modern GNNs in network architecture. Besides, their analysis is limited to the AdaBoost-like optimization procedure, and whether the technique can be applied to general optimization algorithms such as stochastic gradient descent (SGD) is unknown. Second, the upper bound in (Esser et al., 2021) is of slow order and fails to provide meaningful learning guarantee for node classification in large-scale scenarios. Third, (Cong et al., 2021) only consider spectral-based GNNs with fixed coefficients, leaving spectral-based GNNs with learnable coefficients (Chien et al., 2021) unexplored. Motivated by the aforementioned challenges, under transductive setting, we study the generalization gap of GNNs for node classification task with consideration of stochastic optimization algorithm. First, we establish high probability bounds of generalization gap and gradients under transductive setting, and derive high probability bounds of test error under gradient dominant condition. Next, we provide a comprehensive analysis on popular GNNs including both linear and non-linear models and derive the upper bound of the Lipschitz continuity and Holder smoothness constants, by which we compare their generalization capability. The results show that SGC (Wu et al., 2019) and APPNP (Gasteiger et al., 2019) can achieve smaller generalization gap than GCN (Kipf and Welling, 2017). Besides, the unconstrained coefficients in spectral GNNs may yield a large generalization gap. Our results reveal why shallow models yield comparable and even superior performance from the perspective of learning theory, and provide theoretical supports for widely used techniques such as early stop and drop edge (Rong et al., 2020). Experimental results on benchmark datasets show that the theoretical findings are generally consistent with the practical evidences. ## 2 Related Work ### Generalization Analysis of GNNs Existing studies on the generalization of GNNs general fall into two categories: graph classification task and node classification task. **Graph classification task.**(Liao et al., 2021) is the first work to establish generalization bounds of GCN and message passing neural networks by PAC-Bayesian approach. The authors in (Ju et al., 2023) further improve their results and provide the lower bound. Besides, neural tangent kernels (Jacot et al., 2018) are also used to analyze the generalization of infinitely wide GNNs trained by gradient descent (Du et al., 2019). Different from that, this work focus on node classification task that is more challenging. **Node classification task.** The authors in (Scarselli et al., 2018) analyze the generalization capability of GNNs by Vapnik-Chervonenkis dimension. (Verma and Zhang, 2019) is the first work to provide generalization bounds of one-layer GCN by algorithm stability which is further extended to multi-layer GCNs in (Zhou and Wang, 2021). 
The work (Garg et al., 2020) converts the graph into individual local node-wise computation trees and bounds their generalization gap by Rademacher complexity. The aforementioned works rely on converting a graph into subgraphs, which differs a lot from the realistic implementation. Observing this, (Oono and Suzuki, 2020) takes the first step by adopting the transductive learning framework to analyze multi-scale GNNs. This framework originates from (Vapnik, 1998; 2006), and is further developed in (El-Yaniv and Pechyony, 2006; 2007), where the authors propose transductive stability and transductive Rademacher complexity to measure the generalization capability of a transductive learner. The works most related to ours are (Cong et al., 2021) and (Esser et al., 2021), where the authors establish generalization bounds for GNNs and their variants by transductive uniform stability and transductive Rademacher complexity, respectively. However, the derived bound in (Esser et al., 2021) is of slow order, and whether their technique can be applied to SGD is still unknown. Different from (Cong et al., 2021), which analyzes full-batch gradient descent, we analyze a more complex setting, _i.e._, transductive learning under SGD, due to the randomness involved in optimization. Besides, there are some works orthogonal to ours, _e.g._, analyzing the generalization capability of GNNs trained with topology sampling (Li et al., 2022) or on large random graphs (Keriven et al., 2020). ### Out-of-Distribution (OOD) Generalization on Graphs Much effort has been devoted to the study of OOD generalization on graphs (Li et al., 2022) in recent years, due to the occurrence of distribution shift in real-world scenarios. An adversarial learning schema (Wu et al., 2022) is proposed to minimize the mean and variance of risks from multiple environments. The authors in (Yang et al., 2022) propose a two-stage training schema to tackle distribution shift on molecular graphs. An energy-based message passing scheme is shown to be effective in enhancing the OOD detection performance of GNNs (Wu et al., 2023). A recent work (Yang et al., 2023) shows that the superior performance of GNNs may come from their intrinsic generalization capability rather than their expressivity. Besides, there are also some works focusing on reasoning (Xu et al., 2020), extrapolation ability (Xu et al., 2021; Bevilacqua et al., 2021), and generalization from small to large graphs (Yehudai et al., 2021). ## 3 Preliminaries ### Notations Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) be a given undirected graph with \(n=|\mathcal{V}|\) nodes. Each node is an instance \(z_{i}=(\mathbf{x}_{i},y_{i})\) containing feature \(\mathbf{x}_{i}\) and label \(y_{i}\) from some space \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\). Let \(\mathbf{X}\) be the feature matrix where the \(i\)-th row \(\mathbf{X}_{i*}\) is the node feature \(\mathbf{x}_{i}\). Let \(\mathbf{A}\) and \(\mathbf{D}\) be the adjacency matrix and the diagonal degree matrix respectively, where \(\mathbf{D}_{ii}=\sum_{j=1}^{n}\mathbf{A}_{ij}\). Denote by \(\tilde{\mathbf{A}}=(\mathbf{D}+\mathbf{I}_{n})^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I}_{n})(\mathbf{D}+\mathbf{I}_{n})^{-\frac{1}{2}}\) the normalized adjacency matrix with self-loops and \(|\mathcal{Y}|\) the number of categories. We focus on the transductive learning setting in this work, _i.e._, all features together with a randomly sampled subset of the labels constitute the training set.
Let \(S=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{m+u}\) be the set of instances, where \(m+u=n\). Without loss of generality (w.l.o.g.), let \(\{y_{i}\}_{i=1}^{m}\) be the selected labels; our task is to predict the labels of the samples \(\{\mathbf{x}_{i}\}_{i=m+1}^{m+u}\) by a learner (model) trained on \(\{\mathbf{x}_{i}\}_{i=1}^{m+u}\bigcup\{y_{i}\}_{i=1}^{m}\). This setting is widely adopted in the node classification task (Yang et al., 2016; Kipf and Welling, 2017), where the training and test nodes are determined by a random partition. From now on, we limit the scope of the learner to a given GNN and let \(\{\mathbf{W}_{h}\}_{h=1}^{H}\) be its learnable parameters. Since \(\mathbb{R}^{p\times q}\) and \(\mathbb{R}^{pq}\) are isomorphic, the analysis in this work is oriented to the vector space for conciseness. To this end, we use a unified vector \(\mathbf{w}=[\operatorname{vec}\left[\mathbf{W}_{1}\right];\dots;\operatorname{vec}\left[\mathbf{W}_{H}\right]]\) to represent the collection of \(\{\mathbf{W}_{h}\}_{h=1}^{H}\), where \(\operatorname{vec}[\cdot]\) is the vectorization operator that transforms a given matrix into a vector, _i.e._, \(\operatorname{vec}\left[\mathbf{W}\right]=\left[\mathbf{W}_{*1};\cdots;\mathbf{W}_{*q}\right]\) for \(\mathbf{W}\in\mathbb{R}^{p\times q}\). Here \(\mathbf{W}_{*i}\) is the \(i\)-th column of \(\mathbf{W}\). For \(\mathbf{w}\in\mathcal{W}\), the training and test errors are defined as \(R_{m}(\mathbf{w})\triangleq\frac{1}{m}\sum_{i=1}^{m}\ell(\mathbf{w};z_{i})\) and \(R_{u}(\mathbf{w})\triangleq\frac{1}{u}\sum_{i=m+1}^{m+u}\ell(\mathbf{w};z_{i})\) respectively, where \(\ell:\mathcal{W}\times\mathcal{Z}\mapsto\mathbb{R}_{+}\) is the loss function. In this work, we follow previous studies (El-Yaniv and Pechyony, 2007; Oono and Suzuki, 2020b; Esser et al., 2021) and define the transductive generalization gap by \(|R_{m}(\mathbf{w})-R_{u}(\mathbf{w})|\). Since the labels of the test examples are not available, the optimization process finds parameters that minimize the training error \(R_{m}(\mathbf{w})\). Much effort (Duchi et al., 2011; Kingma and Ba, 2015) has been devoted to solving this stochastic optimization problem, and we mainly focus on SGD (summarized in Algorithm 1) in this work. Now we introduce the notations used in the rest of this paper. Denote by \(\left\|\cdot\right\|_{2}\) and \(\left\|\cdot\right\|\) the 2-norm of a vector and the spectral norm of a matrix, respectively. Let \(\mathbf{w}^{(1)}\) be the initial weight of the model; we focus on the space \(\mathcal{W}=B(\mathbf{w}^{(1)};r),r\geq 1\) in this work, where \(B(\mathbf{w}^{(1)};r)\triangleq\left\{\mathbf{w}:\left\|\mathbf{w}-\mathbf{w}^{(1)}\right\|_{2}\leq r\right\}\) is the ball with radius \(r\). Denote by \(\nabla\ell(\cdot;z)\) the gradient of \(\ell\) with respect to (w.r.t.) the first argument. Denote by \(b_{g}=\sup_{z\in\mathcal{Z}}\left\|\nabla\ell(\mathbf{w}^{(1)};z)\right\|_{2}\) the supremum of the gradient norm at the initial parameter and \(b_{\ell}=\sup_{z\in\mathcal{Z}}\left|\ell(\mathbf{w}^{(1)};z)\right|\) the supremum of the loss value at the initial parameter. Let \(\hat{\mathbf{w}}\in\operatorname*{argmin}_{\mathbf{w}\in\mathcal{W}}R_{m}(\mathbf{w})\) be the minimizer of the training error. We denote by \(\sigma(\cdot)\) the activation function. ``` Input: Initial parameter \(\mathbf{w}^{(1)}\), learning rates \(\{\eta_{t}\}\), training set \(\{\mathbf{x}_{i}\}_{i=1}^{m+u}\cup\{y_{i}\}_{i=1}^{m}\). for \(t=1\) to \(T\) do Randomly draw \(j_{t}\) from the uniform distribution over the set \(\{j:j\in[m]\}\). Update parameters by \(\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\eta_{t}\nabla\ell(\mathbf{w}^{(t)};z_{j_{t}})\). endfor ``` **Algorithm 1** SGD for Transductive Learning
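To make Algorithm 1 concrete, the following is a minimal sketch of transductive SGD in plain NumPy; the linear model, the squared loss, and the data layout are illustrative assumptions on our part, not choices made in the paper.

```python
import numpy as np

def transductive_sgd(X, y, m, T, t0=1.0, seed=0):
    """Minimal sketch of Algorithm 1 (SGD for transductive learning).

    X: (n, d) features of all n = m + u nodes (all features are visible).
    y: (n,) labels; only the first m labels are used for gradient updates.
    A linear model with squared loss is used purely for illustration.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])               # initial parameter w^(1)
    for t in range(1, T + 1):
        j = rng.integers(0, m)             # draw j_t uniformly from [m]
        eta = 1.0 / (t + t0)               # learning rate eta_t = 1 / (t + t_0)
        grad = (X[j] @ w - y[j]) * X[j]    # gradient of 0.5 * (x_j^T w - y_j)^2
        w = w - eta * grad                 # SGD update
    # training error R_m and test error R_u; the paper studies |R_m - R_u|
    R_m = 0.5 * np.mean((X[:m] @ w - y[:m]) ** 2)
    R_u = 0.5 * np.mean((X[m:] @ w - y[m:]) ** 2)
    return w, R_m, R_u
```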
### Assumptions In this part, we present the assumptions used in this paper. **Assumption 3.1**.: Assume that there exists a constant \(c_{X}>0\) such that \(\|\mathbf{x}\|_{2}\leq c_{X}\) holds for all \(\mathbf{x}\in\mathcal{X}\). **Assumption 3.2**.: Assume that there exists a constant \(c_{W}>0\) such that \(\|\mathbf{W}_{h}\|\leq c_{W},h\in[H]\) for \(\mathbf{w}\in B(\mathbf{w}^{(1)};r)\). _Remark 3.3_.: Assumption 3.1 requires that the input features are bounded (Verma and Zhang, 2019). This assumption can be satisfied by applying normalization to the features. Assumption 3.2 means that the parameters during the training process are bounded, which is a common assumption in the generalization analysis of GNNs (Garg et al., 2020; Liao et al., 2021; Cong et al., 2021; Esser et al., 2021). These two assumptions are necessary to analyze the Lipschitz continuity and Hölder smoothness of the objective w.r.t. \(\mathbf{w}\). **Assumption 3.4**.: Assume that the activation function \(\sigma(\cdot)\) is \(\tilde{\alpha}\)-Hölder smooth. To be specific, let \(P>0\) and \(\tilde{\alpha}\in(0,1]\); for all \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d}\), \[\|\sigma^{\prime}(\mathbf{u})-\sigma^{\prime}(\mathbf{v})\|_{2}\leq P\|\mathbf{u}-\mathbf{v}\|_{2}^{\tilde{\alpha}}.\] _Remark 3.5_.: It can be verified that Assumption 3.4 implies Lipschitz continuity of the activation function if \(\tilde{\alpha}=0\). Besides, Assumption 3.4 implies smoothness of the activation function if \(\tilde{\alpha}=1\). Therefore, Assumption 3.4 is much milder than the assumption in previous work (Verma and Zhang, 2019; Cong et al., 2021) that requires the activation function to be smooth. For the convenience of analysis, while not yielding a large gap between theory and practice, we construct a modified ReLU function (see Appendix A) with hyperparameter \(q\in(1,2]\) that satisfies Assumption 3.4 and has a tolerable approximation error to the vanilla ReLU function. **Assumption 3.6**.: Assume that there exists a constant \(G>0\) such that for all \(z\in S\) \[\sqrt{\eta_{t}}\left\|\nabla\ell(\mathbf{w}^{(t)};z)\right\|_{2}\leq G\] holds \(\forall\ t\in\mathbb{N}\), where \(\{\eta_{t}\}_{t=1}^{T}\) are the learning rates. _Remark 3.7_.: A formal definition of \(\nabla\ell(\mathbf{w};z)\) is provided in Lemma A.4 in the Appendix. Assumption 3.6 (Lei and Tang, 2021; Li and Liu, 2021) means that the product of the gradient norm and the square root of the learning rate is bounded, which is milder than the widely used bounded gradient assumption (Hardt et al., 2016; Kuzborskij and Lampert, 2018), since the learning rate tends to zero during the iterations. **Assumption 3.8**.: Assume that there exists a constant \(\sigma_{0}>0\) such that for all \(t\in\mathbb{N}_{+}\), the following inequality holds \[\mathbb{E}_{j_{t}}\left[\left\|\nabla\ell(\mathbf{w}^{(t)};z_{j_{t}})-\nabla R_{m}(\mathbf{w}^{(t)})\right\|_{2}^{2}\right]\leq\sigma_{0}^{2}.\] _Remark 3.9_.: Assumption 3.8 requires the boundedness of the variance of the stochastic gradients, which is a standard assumption in stochastic optimization studies (Kuzborskij and Lampert, 2018; Lei and Tang, 2021; Li and Liu, 2021). ## 4 Theoretical Results In this section, we first present the high probability bounds of the generalization gap and excess risks under transductive learning in Section 4.1.
After that, we turn to specific examples and provide results for some popular GNNs in Section 4.2. Please refer to the Appendix for complete proofs. ### General Results of Transductive SGD We first analyze properties of the objective function \(\ell\) and provide the following proposition. **Proposition 4.1** (Informal).: _Suppose Assumptions 3.1, 3.2, and 3.4 hold. Denote by \(\mathcal{F}\) a specific GNN. For any \(\mathbf{w},\mathbf{w}^{\prime}\in\mathcal{W}\) and \(z\in S\), the objective \(\ell(\mathbf{w};z)\) satisfies_ \[|\ell(\mathbf{w};z)-\ell(\mathbf{w}^{\prime};z)|\leq L_{\mathcal{F}}\|\mathbf{w}-\mathbf{w}^{\prime}\|_{2}, \tag{1}\] _and_ \[\begin{split}&\|\nabla\ell(\mathbf{w};z)-\nabla\ell(\mathbf{w}^{\prime};z)\|_{2}\\ \leq& P_{\mathcal{F}}\max\left\{\|\mathbf{w}-\mathbf{w}^{\prime}\|_{2}^{\tilde{\alpha}},\|\mathbf{w}-\mathbf{w}^{\prime}\|_{2}\right\},\end{split} \tag{2}\] _with constants \(L_{\mathcal{F}}\) and \(P_{\mathcal{F}}\)._ _Remark 4.2_.: We provide a more detailed analysis of \(L_{\mathcal{F}}\) and \(P_{\mathcal{F}}\) in Section 4.2. Both \(L_{\mathcal{F}}\) and \(P_{\mathcal{F}}\) depend on the specific network architecture \(\mathcal{F}\) of the GNN. Thus, the upper bound of the generalization gap varies with the architecture. Our first main result is high probability bounds on the transductive generalization gap, as presented in Theorem 4.3. **Theorem 4.3**.: _Suppose Assumptions 3.1, 3.2, 3.4, 3.6, and 3.8 hold. Suppose that the learning rate \(\{\eta_{t}\}\) satisfies \(\eta_{t}=\frac{1}{t+t_{0}}\) such that \(t_{0}\geq\max\{(2P)^{1/\alpha},1\}\). For any \(\delta\in(0,1)\), with probability \(1-\delta\),_ 1. _If_ \(\alpha\in(0,\frac{1}{2})\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{(T+1)})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)T^{\frac{1-2\alpha}{2}}\log\bigg{(}\frac{1}{\delta}\bigg{)}\bigg{)}.\] 2. _If_ \(\alpha=\frac{1}{2}\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{(T+1)})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log(T)\log\bigg{(}\frac{1}{\delta}\bigg{)}\bigg{)}.\] 3. _If_ \(\alpha\in(\frac{1}{2},1]\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{(T+1)})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)\log\bigg{(}\frac{1}{\delta}\bigg{)}\bigg{)}.\] _Remark 4.4_.: Theorem 4.3 shows that the transductive generalization gap depends on the training/test data sizes \(m/u\), the architecture-related Lipschitz continuity constant \(L_{\mathcal{F}}\), and the number of iterations \(T\). Generally, our upper bounds are of order \(\mathcal{O}\left((\frac{1}{m}+\frac{1}{u})\sqrt{m+u}\right)\), which is much sharper than the bound \(\mathcal{O}\left((\frac{1}{m}+\frac{1}{u})(m+u)+\log(m+u)\right)\) in previous work (Esser et al., 2021). Note that with the increase of the data size \(m+u\), the bound in (Esser et al., 2021) becomes increasingly large and fails to provide a reasonable generalization guarantee. This seriously restricts its application in large-scale node classification scenarios, where the order of \(m+u\) is usually millions. Our results address these drawbacks and provide more applicable generalization guarantees for GNNs. Besides, the bound provided in (Esser et al., 2021) does not consider the specific optimization and has difficulty in revealing the influence of \(T\) on the generalization gap. Our result shows that the generalization gap becomes larger as \(T\) increases, resulting in the over-fitting phenomenon. Thus, early stopping, which is widely adopted in implementations of modern GNNs (Kipf and Welling, 2017; Chen et al., 2020), may be beneficial for yielding a smaller generalization gap. It can be seen that the generalization gap is positively related to the Lipschitz continuity constant \(L_{\mathcal{F}}\) determined by the specific network architecture \(\mathcal{F}\). Thus, a larger \(L_{\mathcal{F}}\) leads to a larger upper bound on the generalization gap, showing that the network architecture of a GNN also has a significant influence on the generalization gap (see Section 4.2 for more detail). The upper bound of the generalization gap in (Cong et al., 2021) also increases with \(T\) when the objective is optimized by full-batch gradient descent. This is not surprising, since full-batch gradient descent can be seen as a special case of SGD where the batch size is equal to the number of training samples.
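To see why the order matters, the following back-of-the-envelope sketch compares the two rates as the graph grows, assuming (our assumption, matching the experimental split in Section 5) \(m=0.3n\) and \(u=0.7n\) and dropping constants and the \(T\)-dependent log factors:

```python
import numpy as np

# Our order O((1/m + 1/u) * sqrt(m+u)) vs. the previous order
# O((1/m + 1/u) * (m+u) + log(m+u)) from (Esser et al., 2021).
for n in [2_708, 19_717, 1_000_000]:        # Cora-, Pubmed-, and million-node scale
    m, u = 0.3 * n, 0.7 * n
    ours = (1 / m + 1 / u) * np.sqrt(n)     # vanishes as n grows
    prev = (1 / m + 1 / u) * n + np.log(n)  # bounded away from zero, grows with log n
    print(f"n={n:>9,}: ours={ours:.4f}  previous={prev:.2f}")
```

The previous rate does not shrink with the graph size, whereas the rate here does, which is exactly the large-scale applicability discussed in Remark 4.4.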
Our second main result is high probability bounds of the gradients on training and test data. **Theorem 4.5**.: _Suppose Assumptions 3.1, 3.2, 3.4, 3.6, and 3.8 hold. Suppose that the learning rate \(\{\eta_{t}\}\) satisfies \(\eta_{t}=\frac{1}{t+t_{0}}\) such that \(t_{0}\geq\max\{(2P)^{1/\alpha},1\}\). For any \(\delta\in(0,1)\), with probability \(1-\delta\),_ 1. _If_ \(\alpha\in(0,\frac{1}{2})\)_, we have_ \[\Big{\|}\nabla R_{m}(\mathbf{w}^{(T+1)})-\nabla R_{u}(\mathbf{w}^{(T+1)})\Big{\|}_{2}\] \[= \mathcal{O}\bigg{(}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)T^{\frac{1-2\alpha}{2}}\log\bigg{(}\frac{1}{\delta}\bigg{)}\bigg{)}.\] 2. _If_ \(\alpha=\frac{1}{2}\)_, we have_ \[\left\|\nabla R_{m}(\mathbf{w}^{(T+1)})-\nabla R_{u}(\mathbf{w}^{(T+1)})\right\|_{2}\] \[= \mathcal{O}\bigg{(}\frac{(m+u)^{\frac{3}{2}}}{mu}\log(T)\log\left(\frac{1}{\delta}\right)\bigg{)}.\] 3. _If_ \(\alpha\in(\frac{1}{2},1]\)_, we have_ \[\left\|\nabla R_{m}(\mathbf{w}^{(T+1)})-\nabla R_{u}(\mathbf{w}^{(T+1)})\right\|_{2}\] \[= \mathcal{O}\bigg{(}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)\log\left(\frac{1}{\delta}\right)\bigg{)}.\] _Remark 4.6_.: Theorem 4.5 provides high probability bounds for the generalization gap of gradients under the transductive setting. Overall, the generalization gap we derive is still of order \(\mathcal{O}\left((\frac{1}{m}+\frac{1}{u})\sqrt{m+u}\right)\), which is applicable to real-world large-scale graph datasets. Besides, the generalization gap of gradients increases with the increase of \(T\), showing that a smaller number of iterations helps achieve a smaller generalization gap of gradients. Since the generalization performance is determined by both the training error and the generalization gap, we provide an upper bound of the test error under a special case where the objective satisfies the following PL condition. **Assumption 4.7**.: Suppose that there exists a constant \(\mu\) such that for all \(\mathbf{w}\in\mathcal{W}\), \[R_{m}(\mathbf{w})-R_{m}(\hat{\mathbf{w}})\leq\frac{1}{2\mu}\left\|\nabla R_{m}(\mathbf{w})\right\|_{2}^{2},\] holds for the given set \(S\) from \(\mathcal{Z}\). _Remark 4.8_.: Assumption 4.7 is also known as the gradient dominance condition in learning theory studies, indicating that the difference between the optimal training error and the current training error can be upper bounded by a quadratic function of the gradient norm on the training instances.
This assumption is widely adopted in nonconvex learning (Zhou et al., 2018; Xu and Zeevi, 2020; Lei and Tang, 2021; Li and Liu, 2021), and has been verified in over-parameterized systems including wide neural networks (Liu et al., 2020). This assumption is only used in Corollary 4.9. **Corollary 4.9**.: _Suppose Assumptions 3.1, 3.2, 3.4, 3.6, 3.8, and 4.7 hold. Suppose that the learning rate \(\{\eta_{t}\}\) satisfies \(\eta_{t}=\frac{2}{\mu(t+t_{0})}\) such that \(t_{0}\geq\max\{\frac{2}{\mu}(2P)^{\frac{1}{\alpha}},1\}\). For any \(\delta\in(0,1)\), with probability \(1-\delta\),_ 1. _If_ \(\alpha\in(0,\frac{1}{2})\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{*})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)T^{\frac{1}{2}-\alpha}\log\left(\frac{1}{\delta}\right)+\frac{1}{T^{\alpha}}\bigg{)},\] 2. _If_ \(\alpha=\frac{1}{2}\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{*})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log(T)\log\left(\frac{1}{\delta}\right)+\frac{1}{T^{\alpha}}\bigg{)}.\] 3. _If_ \(\alpha\in(\frac{1}{2},1)\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{*})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)\log(1/\delta)+\frac{1}{T^{\alpha}}\bigg{)}.\] 4. _If_ \(\alpha=1\)_, we have_ \[R_{u}(\mathbf{w}^{(T+1)})-R_{m}(\mathbf{w}^{*})\] \[= \mathcal{O}\bigg{(}L_{\mathcal{F}}\frac{(m+u)^{\frac{3}{2}}}{mu}\log^{\frac{1}{2}}(T)\log(1/\delta)+\frac{1}{T^{\alpha}}\bigg{)}.\] _Remark 4.10_.: Corollary 4.9 shows that under Assumption 4.7, the test error is determined by the minimal training error, the optimization error, and the generalization gap. The minimal training error reflects how well the model fits the data, which is a measure of the expressive ability. The first and second slack terms are the generalization gap and the optimization error, respectively. With the increase of \(T\), the generalization gap increases while the optimization error decreases. Therefore, it is necessary to carefully choose a proper number of iterations in order to balance the trade-off between optimization and generalization. In the implementation of most GNN studies (Kipf and Welling, 2017; Velickovic et al., 2018; Chien et al., 2021; He et al., 2021), early stopping is widely adopted and \(T\) is determined by the performance of the model on a validation set. Thus, our results are consistent with real implementations. It is worth pointing out that although the results in this section are oriented to the case where the objective has two parameters (_e.g._, GCN, APPNP, and GPR-GNN in Section 4.2), results for the other cases where the objective has one parameter (_e.g._, SGC in Section 4.2) or three parameters (_e.g._, GCNII in Section 4.2) have the same form when neglecting the constant factors. Meanwhile, the assumptions need to be modified correspondingly. Readers are referred to the Appendix for a detailed discussion. ### Case Studies of Popular GNNs We have established high probability bounds for the transductive generalization gap in Theorem 4.3. In this part, we analyze the upper bounds of the architecture-related constants \(L_{\mathcal{F}}\) and \(P_{\mathcal{F}}\), with which the upper bound of the generalization gap can be determined. Five representative GNNs, including GCN, GCNII, SGC, APPNP, and GPR-GNN, are selected for analysis. The loss function \(\ell\) is the cross-entropy loss, and we denote by \(\hat{\mathbf{Y}}\) the prediction.
For conciseness, we do not consider the bias term, since it can be verified that \(\langle\mathbf{w},\mathbf{x}\rangle+b=\langle\tilde{\mathbf{w}},\tilde{\mathbf{x}}\rangle\) holds with \(\tilde{\mathbf{w}}=[\mathbf{w};b]\) and \(\tilde{\mathbf{x}}=[\mathbf{x};1]\). **GCN.** The work (Kipf and Welling, 2017) proposes to aggregate features from one-hop neighbor nodes. The feature propagation process of a two-layer GCN model is \[\hat{\mathbf{Y}}=\mathrm{Softmax}\big{(}g(\tilde{\mathbf{A}})\sigma(g(\tilde{\mathbf{A}})\mathbf{X}\mathbf{W}_{1})\mathbf{W}_{2}\big{)}, \tag{3}\] where \(g(\tilde{\mathbf{A}})=\tilde{\mathbf{A}}\) and \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h},\mathbf{W}_{2}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) are parameters. **Proposition 4.11**.: _Suppose Assumptions 3.1, 3.2, and 3.4 hold; then the objective \(\ell(\mathbf{w};z)\) is \(L_{\mathcal{F}}\)-Lipschitz continuous and Hölder smooth w.r.t. \(\mathbf{w}=[\mathrm{vec}\left[\mathbf{W}_{1}\right];\mathrm{vec}\left[\mathbf{W}_{2}\right]]\). Concretely, the Lipschitz continuity constant \(L_{\mathcal{F}}\) is \(L_{\mathrm{GCN}}=2c_{X}c_{W}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{2}\)._ Due to the tedious formulation, we provide the concrete value of \(P_{\mathcal{F}}\) in the Appendix. Proposition 4.11 demonstrates that \(L_{\mathrm{GCN}}\) mainly depends on the factors \(\|g(\tilde{\mathbf{A}})\|_{\infty}\), \(c_{X}\), and \(c_{W}\). Let \(\mathrm{deg}_{\min}\) and \(\mathrm{deg}_{\max}\) be the minimum and maximum node degree, respectively. By Lemma A.1 in Appendix A, \[\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}\leq\sqrt{\frac{\mathrm{deg}_{\max}+1}{\mathrm{deg}_{\min}+1}}. \tag{4}\] It can be found that the generalization gap decreases with the decrease of the maximum node degree, which could be achieved by removing edges. This explains in some sense why the DropEdge (Rong et al., 2020) technique is beneficial for alleviating the over-fitting problem from the perspective of learning theory. Besides, for GCN trained on sampled sub-graphs \(\{\mathcal{G}_{i}\}_{i=1}^{n}\), the Lipschitz continuity constant is \(L_{\mathrm{GCN}}=2c_{X}c_{W}\max_{i\in[n]}\big{\|}\tilde{\mathbf{A}}^{[i]}\big{\|}_{\infty}^{2}\), where \(\tilde{\mathbf{A}}^{[i]}\) is the normalized adjacency matrix with self-loops of \(\mathcal{G}_{i}\). Since only a portion of the neighboring nodes are preserved during sub-graph sampling (Hamilton et al., 2017; Zeng et al., 2020; 2021), the maximum node degree of each sub-graph is no larger than that of the initial graph, implying that \(\max_{i\in[n]}\big{\|}\tilde{\mathbf{A}}^{[i]}\big{\|}_{\infty}\leq\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}\) holds. Thus, Proposition 4.11 shows that training on sampled sub-graphs is beneficial for achieving a smaller generalization gap. Lastly, the spectral norm of the learned parameters also has an effect on the generalization gap. Thus, the commonly used \(L_{2}\) regularization technique is beneficial for reducing the generalization gap.
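As a quick numerical check of Eq. (4) and Proposition 4.11 (a minimal sketch: the random graph and the values of \(c_{X}\) and \(c_{W}\) are illustrative assumptions), one can build \(\tilde{\mathbf{A}}\), compare its infinity norm against the degree-based bound, and evaluate the resulting \(L_{\mathrm{GCN}}\):

```python
import numpy as np

def norm_adj(A):
    """Normalized adjacency with self-loops: (D+I)^{-1/2} (A+I) (D+I)^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(1) + 1.0)
    return d_inv_sqrt[:, None] * (A + np.eye(A.shape[0])) * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T            # random undirected graph, no self-loops

A_tilde = norm_adj(A)
inf_norm = A_tilde.sum(1).max()           # infinity norm = max row sum (entries >= 0)
deg = A.sum(1)
bound = np.sqrt((deg.max() + 1.0) / (deg.min() + 1.0))   # right-hand side of Eq. (4)
print(inf_norm, bound, inf_norm <= bound + 1e-12)        # the bound holds

c_X, c_W = 1.0, 1.0                       # illustrative constants from Assumptions 3.1/3.2
L_GCN = 2 * c_X * c_W * inf_norm ** 2     # Lipschitz constant of Proposition 4.11
print(L_GCN)
```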
**GCNII.** The authors in (Chen et al., 2020) propose to relieve over-smoothing by initial residual and identity mapping. Denote by \(\mathbf{H}^{(0)}=\sigma(\mathbf{X}\mathbf{W}_{0})\) the initial representation. The forward propagation of a two-layer GCNII model is \[\mathbf{H}^{(1)} =\sigma\Big{(}((1-\alpha_{1})g(\tilde{\mathbf{A}})\mathbf{H}^{(0)}+\alpha_{1}\mathbf{H}^{(0)})\Psi(\beta_{1},\mathbf{W}_{1})\Big{)},\] \[\mathbf{H}^{(2)} =\sigma\Big{(}((1-\alpha_{2})g(\tilde{\mathbf{A}})\mathbf{H}^{(1)}+\alpha_{2}\mathbf{H}^{(0)})\Psi(\beta_{2},\mathbf{W}_{2})\Big{)},\] \[\hat{\mathbf{Y}} =\mathrm{softmax}\big{(}\mathbf{H}^{(2)}\mathbf{W}_{3}\big{)},\] where \(\Psi(\beta,\mathbf{W})=(1-\beta)\mathbf{I}+\beta\mathbf{W}\) and \(g(\tilde{\mathbf{A}})=\tilde{\mathbf{A}}\). \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h}\), \(\mathbf{W}_{2}\in\mathbb{R}^{h\times h}\), and \(\mathbf{W}_{3}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) are parameters. **Proposition 4.12**.: _Suppose Assumptions 3.1, 3.2, and 3.4 hold; then the objective \(\ell(\mathbf{w};z)\) is \(L_{\mathcal{F}}\)-Lipschitz continuous and Hölder smooth w.r.t. \(\mathbf{w}=[\mathrm{vec}\left[\mathbf{W}_{0}\right];\mathrm{vec}\left[\mathbf{W}_{1}\right];\mathrm{vec}\left[\mathbf{W}_{2}\right];\mathrm{vec}\left[\mathbf{W}_{3}\right]]\)._ _Specifically, denote by \(C_{\ell}=1-\beta_{\ell}+\beta_{\ell}c_{W},\ell\in[2]\) and_ \[B_{1} =c_{X}c_{W}C_{1}\big{(}(1-\alpha_{1})\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}+\alpha_{1}\big{)},\] \[B_{2} =\big{(}(1-\alpha_{2})B_{1}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}+\alpha_{2}c_{X}c_{W}\big{)}C_{2},\] \[L_{1} =2\bigg{(}2+\frac{c_{W}^{2}\beta_{2}^{2}}{C_{2}^{2}}\bigg{)}B_{2}^{2}, \tag{5}\] \[L_{2} =2(1-\alpha_{2})^{2}\beta_{1}^{2}c_{W}^{2}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{2}\bigg{(}\frac{B_{1}^{2}C_{2}^{2}}{C_{1}^{2}}\bigg{)}.\] _The Lipschitz continuity constant is \(L_{\mathrm{GCNII}}=\sqrt{L_{1}+L_{2}}\)._ Proposition 4.12 shows that \(L_{\mathrm{GCNII}}\) is a function of \(\{\alpha_{i}\}_{i=1}^{2}\) and \(\{\beta_{i}\}_{i=1}^{2}\). Finding the optimal value of \(L_{\mathrm{GCNII}}\) is a quadratic programming problem with the constraints \(\alpha_{1},\alpha_{2}\in[0,1]\) and \(\beta_{1},\beta_{2}\in[0,1]\). Now we discuss a special case where \(\alpha_{1}=\alpha_{2}=0\) and \(\beta_{1}=\beta_{2}=0\). In this case, we have \(L_{1}=4c_{X}^{2}c_{W}^{2}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{4}\) and \(L_{2}=0\), which implies that \(L_{\mathrm{GCNII}}=L_{\mathrm{GCN}}\) (the computation is written out after this paragraph). Note that the optimal value of \(L_{\mathrm{GCNII}}\) is no larger than the value of the objective at any point of the feasible region. Therefore, we conclude that the optimal value of \(L_{\mathrm{GCNII}}\) is no larger than \(L_{\mathrm{GCN}}\). This result is not surprising, since GCNII reduces to a special GCN model under this setting. For proper values of \(\{\alpha_{i}\}_{i=1}^{2}\) and \(\{\beta_{i}\}_{i=1}^{2}\), GCNII can achieve a smaller generalization gap than GCN. As GCNII can achieve a lower training error by relieving the over-smoothing problem, Proposition 4.12 indicates that GCNII can achieve superior performance when the hyperparameters are set properly. Due to the involvement of \(\{\alpha_{i}\}_{i=1}^{2}\) and \(\{\beta_{i}\}_{i=1}^{2}\), the growth rate of \(L_{\mathrm{GCNII}}\) is much smaller than that of \(L_{\mathrm{GCN}}\) when the propagation depth increases, which makes GCNII maintain its generalization capability and achieve stable performance (see Section 5).
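For completeness, the special case \(\alpha_{1}=\alpha_{2}=\beta_{1}=\beta_{2}=0\) referenced above can be written out by plugging these values into Eq. (5): \[\beta_{1}=\beta_{2}=0\Rightarrow C_{1}=C_{2}=1,\qquad \alpha_{1}=\alpha_{2}=0\Rightarrow B_{1}=c_{X}c_{W}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty},\; B_{2}=B_{1}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}=c_{X}c_{W}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{2},\] \[L_{1}=2(2+0)B_{2}^{2}=4c_{X}^{2}c_{W}^{2}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{4},\quad L_{2}=0\;\Longrightarrow\; L_{\mathrm{GCNII}}=\sqrt{L_{1}}=2c_{X}c_{W}\big{\|}\tilde{\mathbf{A}}\big{\|}_{\infty}^{2}=L_{\mathrm{GCN}}.\]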
**SGC.** The work (Wu et al., 2019) proposes to remove all the nonlinear activations in GCN. To facilitate comparison with GCN, we consider a two-layer SGC model, whose propagation is given by \[\hat{\mathbf{Y}}=\mathrm{softmax}\big{(}g(\tilde{\mathbf{A}})\mathbf{X}\mathbf{W}_{1}\mathbf{W}_{2}\big{)}, \tag{6}\] where \(g(\tilde{\mathbf{A}})=\tilde{\mathbf{A}}^{2}\). \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) are the parameters. **Proposition 4.13**.: _Suppose Assumptions 3.1, 3.2, and 3.4 hold; then the objective \(\ell(\mathbf{w};z)\) is \(L_{\mathcal{F}}\)-Lipschitz continuous and Hölder smooth w.r.t. \(\mathbf{w}=[\mathrm{vec}\left[\mathbf{W}_{1}\right];\mathrm{vec}\left[\mathbf{W}_{2}\right]]\). Specifically, the Lipschitz continuity constant \(L_{\mathcal{F}}\) is \(L_{\mathrm{SGC}}=2c_{X}c_{W}\big{\|}\tilde{\mathbf{A}}^{2}\big{\|}_{\infty}\)._ Since \(\left\|\tilde{\mathbf{A}}^{2}\right\|_{\infty}\leq\left\|\tilde{\mathbf{A}}\right\|_{\infty}^{2}\), we have \(L_{\mathrm{SGC}}\leq L_{\mathrm{GCN}}\). Surprisingly, this simple linear model can achieve a smaller generalization gap than the nonlinear models (Kipf and Welling, 2017; Chen et al., 2020; Gasteiger et al., 2019; Chien et al., 2021), even though its representation ability is inferior to theirs. Note that the performance on test samples is determined by both the training error and the generalization gap. If linear GNNs can achieve a small training error, it is natural that they can achieve comparable and even better performance than nonlinear GNNs on test samples. Therefore, Proposition 4.13 reveals from the perspective of learning theory why linear GNNs can achieve better performance than nonlinear GNNs, as observed in recent works (Wu et al., 2019; Zhu and Koniusz, 2021; Wang et al., 2021). Considering the efficiency and scalability of linear GNNs on large-scale datasets, we believe that they have much potential to be exploited. **APPNP.** Multi-scale features are aggregated via a personalized PageRank schema in (Gasteiger et al., 2019). Formally, the feature propagation process is formulated as \[\hat{\mathbf{Y}}=\mathrm{softmax}\big{(}g(\tilde{\mathbf{A}})\sigma(\sigma(\mathbf{X}\mathbf{W}_{1})\mathbf{W}_{2})\big{)}, \tag{7}\] where \(g(\tilde{\mathbf{A}})=\sum_{k=0}^{K-1}\gamma(1-\gamma)^{k}\tilde{\mathbf{A}}^{k}+(1-\gamma)^{K}\tilde{\mathbf{A}}^{K}\). \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) are the parameters. **Proposition 4.14**.: _Suppose Assumptions 3.1, 3.2, and 3.4 hold; then the objective \(\ell(\mathbf{w};z)\) is \(L_{\mathcal{F}}\)-Lipschitz continuous and Hölder smooth w.r.t. \(\mathbf{w}=\left[\mathrm{vec}\left[\mathbf{W}_{1}\right];\mathrm{vec}\left[\mathbf{W}_{2}\right]\right]\). Concretely, the Lipschitz continuity constant \(L_{\mathcal{F}}\) is \(L_{\mathrm{APPNP}}=2c_{X}c_{W}\big{\|}g(\tilde{\mathbf{A}})\big{\|}_{\infty}\)._ The Lipschitz continuity constant in Proposition 4.14 is positively related to the infinity matrix norm of the polynomial spectral filter. According to (Gasteiger et al., 2019), \(\gamma\) is commonly set to be a small number, yielding that \(\big{\|}g(\tilde{\mathbf{A}})\big{\|}_{\infty}<\left\|\tilde{\mathbf{A}}\right\|_{\infty}\) holds. Thus, the Lipschitz continuity constant of APPNP is smaller than that of GCN, indicating that APPNP may achieve a smaller generalization gap than GCN. Besides, \(K\) also affects the value of \(\|g(\tilde{\mathbf{A}})\|_{\infty}\), and a larger \(K\) may yield a larger generalization gap.
Therefore, \(K\) is usually set to a proper value to guarantee a trade-off between expressive ability and generalization performance. **GPR-GNN.** Compared with APPNP, the fixed coefficients are replaced by learnable weights in (Chien et al., 2021), in order to adaptively simulate both high-pass and low-pass graph filters. The feature propagation process is \[\hat{\mathbf{Y}}=\mathrm{softmax}\big{(}g(\tilde{\mathbf{A}},\mathbf{\gamma})\sigma(\sigma(\mathbf{X}\mathbf{W}_{1})\mathbf{W}_{2})\big{)}, \tag{8}\] where \(g(\tilde{\mathbf{A}},\mathbf{\gamma})=\sum_{k=0}^{K}\gamma_{k}\tilde{\mathbf{A}}^{k}\). \(\mathbf{W}_{1}\in\mathbb{R}^{d\times h}\), \(\mathbf{W}_{2}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) and \(\mathbf{\gamma}\in\mathbb{R}^{K+1}\) are the parameters. **Proposition 4.15**.: _Suppose Assumptions 3.1, 3.2, and 3.4 hold; then the objective \(\ell(\mathbf{w};z)\) is \(L_{\mathcal{F}}\)-Lipschitz continuous and Hölder smooth w.r.t. \(\mathbf{w}=\left[\mathrm{vec}\left[\mathbf{W}_{1}\right];\mathrm{vec}\left[\mathbf{W}_{2}\right];\mathbf{\gamma}\right]\). Concretely, the Lipschitz continuity constant \(L_{\mathcal{F}}\) is \(L_{\mathrm{GPR}}=\sqrt{L_{1}^{2}+L_{2}^{2}}\), where_ \[L_{1} =\sqrt{2}c_{X}c_{W}^{2}\bigg{(}\sum_{k=0}^{K}\big{\|}\tilde{\mathbf{A}}^{k}\big{\|}_{\infty}\bigg{)}, \tag{9}\] \[L_{2} =2c_{X}c_{W}\big{\|}g(\tilde{\mathbf{A}},\mathbf{\gamma})\big{\|}_{\infty}.\] Note that \(L_{2}\) has a similar form to \(L_{\mathrm{APPNP}}\) (the only difference lies in the definition of \(g(\tilde{\mathbf{A}},\mathbf{\gamma})\)). Assuming that \(g(\tilde{\mathbf{A}},\mathbf{\gamma})=g(\tilde{\mathbf{A}})\) and noting that \(L_{\mathrm{GPR}}=\sqrt{L_{1}^{2}+L_{2}^{2}}\geq L_{2}\), we have \(L_{\mathrm{GPR}}\geq L_{\mathrm{APPNP}}\). Besides, since there is no constraint on \(\mathbf{\gamma}\), the value of \(\big{\|}g(\tilde{\mathbf{A}},\mathbf{\gamma})\big{\|}_{\infty}\) may be larger when the norm of \(\mathbf{\gamma}\) is large, resulting in a larger generalization gap than APPNP. Therefore, adopting a regularization technique on the learnable coefficients to restrict the value of \(\big{\|}g(\tilde{\mathbf{A}},\mathbf{\gamma})\big{\|}_{\infty}\) is necessary. To summarize, \(L_{\mathcal{F}}\) and \(P_{\mathcal{F}}\) are determined by the feature propagation process and the graph-structured data; a numerical comparison is sketched below. Estimating these constants precisely is challenging (Virmaux and Scaman, 2018; Fazlyab et al., 2019), and the upper bounds we provided are sufficient to reflect the realistic generalization gap of these models (see Section 5 for more detail). Besides, we have to emphasize that results for GCN and GCNII with more than two layers can be derived by similar techniques, yet this requires more tedious computation. Exploring new techniques to estimate these constants conveniently and precisely is left for future work.
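The following sketch makes the comparison between the case-study constants concrete on a random graph; it reuses the `norm_adj` helper from the earlier sketch, and the graph, \(c_{X}\), \(c_{W}\), \(\gamma\), and \(K\) are illustrative assumptions:

```python
import numpy as np

inf_norm = lambda M: np.abs(M).sum(1).max()   # matrix infinity norm (max row sum)

rng = np.random.default_rng(1)
A = (rng.random((100, 100)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
A_t = norm_adj(A)                             # helper defined in the earlier sketch

c_X, c_W, K, gamma = 1.0, 1.0, 10, 0.1
L_GCN = 2 * c_X * c_W * inf_norm(A_t) ** 2                     # Proposition 4.11
L_SGC = 2 * c_X * c_W * inf_norm(A_t @ A_t)                    # Proposition 4.13
g = sum(gamma * (1 - gamma) ** k * np.linalg.matrix_power(A_t, k)
        for k in range(K)) + (1 - gamma) ** K * np.linalg.matrix_power(A_t, K)
L_APPNP = 2 * c_X * c_W * inf_norm(g)                          # Proposition 4.14
print(L_SGC <= L_GCN, L_GCN, L_SGC, L_APPNP)                   # L_SGC never exceeds L_GCN
```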
## 5 Experiments **Experimental Setup.** We conduct experiments on widely adopted benchmark datasets, including Cora, Citeseer, and Pubmed (Sen et al., 2008; Yang et al., 2016). The accuracy and loss gaps (_i.e._, the absolute value of the difference between the loss (accuracy) on training and test samples) are used to estimate the generalization gap. Following the standard transductive learning setting, in each run, \(30\%\) of the nodes, sampled according to a random seed, are used as the training set and the remaining nodes are treated as the test set. The number of iterations is fixed to \(T=300\). We independently repeat the experiments \(10\) times and report the mean value and standard deviations over all runs. Please see Appendix C for more detailed settings. **Experimental Results.** The loss gap and accuracy gap comparisons are presented in Table 3 and Table 1, respectively. \begin{table} \begin{tabular}{l c c c} \hline \hline & Cora & Citeseer & Pubmed \\ \hline GCN & 9.76\(\pm\)1.15 & 22.11\(\pm\)1.26 & 1.08\(\pm\)0.52 \\ GCN* & 13.45\(\pm\)1.28 & 26.48\(\pm\)1.21 & 1.49\(\pm\)0.63 \\ GAT & 11.00\(\pm\)0.75 & 22.69\(\pm\)0.84 & 1.52\(\pm\)0.43 \\ GCNII & 7.69\(\pm\)1.48 & 14.85\(\pm\)0.80 & 0.88\(\pm\)0.52 \\ GCNII* & 6.24\(\pm\)1.59 & 13.49\(\pm\)1.39 & 0.80\(\pm\)0.50 \\ SGC & 5.33\(\pm\)1.58 & 11.50\(\pm\)1.09 & 0.73\(\pm\)0.54 \\ APPNP & 7.72\(\pm\)1.54 & 9.99\(\pm\)1.17 & 0.85\(\pm\)0.46 \\ GPR-GNN & 8.90\(\pm\)1.22 & 19.08\(\pm\)0.95 & 0.96\(\pm\)0.49 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy gap comparison of different baseline models on Cora, Citeseer and Pubmed. We have the following observations: (1) SGC and APPNP have smaller loss and accuracy gaps than the other models, including GCN, which is consistent with the analysis in Proposition 4.13. Besides, the test accuracy of SGC surpasses GCN on Citeseer. Thus, part of the reason why linear models sometimes perform well is their smaller Lipschitz continuity constants. (2) Compared with GCN, GCNII achieves smaller loss and accuracy gaps with the same number of layers. We further estimate the generalization performance of GCN and GCNII with six layers (denoted as GCN* and GCNII*). Interestingly, with the increase of the number of hidden layers, the generalization performance of GCN decreases sharply. On the contrary, the loss and accuracy gaps of GCNII remain unchanged. The test accuracy of GCNII also remains unchanged or only drops slightly. Therefore, the superior performance of GCNII comes from two perspectives: the first is learning non-degenerate representations by relieving over-smoothing, and the second is a generalization gap that is robust against the increase of the number of layers. (3) Although GPR-GNN achieves a competitive test accuracy, it has higher accuracy and loss gaps than APPNP. Therefore, the unconstrained learnable coefficients improve the fitting ability but also weaken the generalization capability. Designing weight learning schemas to balance expressiveness and generalization could be a direction for spectral-based GNNs. (4) The generalization performance of GAT is slightly worse than that of GCN. Note that GAT is designed for inductive learning, while our experimental setting is transductive. Thus, the superiority of GAT is not so obvious. Besides, the loss values of GCN on training and test samples w.r.t. iterations are presented in Figure 1. It can be seen that the loss gap increases with the number of iterations, as demonstrated by Theorem 4.3. In general, the theoretical results are supported by the experimental results. It is worth pointing out that our analysis is only oriented to the generalization gap. A smaller generalization gap does not necessarily mean better generalization ability, since the performance on test samples is determined by both the training error and the generalization gap. ## 6 Discussion and Conclusion In this paper, we establish high probability learning guarantees for transductive SGD, by which the upper bounds of the generalization gap for some popular GNNs are derived. Experimental results on benchmark datasets support the theoretical results. This work sheds light on understanding the generalization of GNNs and provides some insights into
designing new GNN architectures with both expressiveness and generalization capabilities. Although we have made efforts in the generalization theory of GNNs, there are still some limitations in our analysis, which are left for future work to address: (1) The complexity-based technique makes the dimension of the parameters appear in the bounds. Further research should focus on establishing dimension-independent bounds under milder assumptions and deriving a lower bound that matches the upper bound. (2) We only analyze vanilla SGD in terms of optimization algorithms. Extending our results to SGD with momentum and adaptive learning rates is worth exploring. (3) Our analysis does not explicitly consider the heterophily of graphs. Deriving heterophily-dependent generalization bounds is a meaningful direction. \begin{table} \begin{tabular}{l c c c} \hline \hline & Cora & Citeseer & Pubmed \\ \hline GCN & 85.91\(\pm\)0.53 & 71.78\(\pm\)0.72 & 85.29\(\pm\)0.19 \\ GCN* & 82.49\(\pm\)0.59 & 66.74\(\pm\)1.07 & 84.21\(\pm\)0.26 \\ GAT & 86.10\(\pm\)0.51 & 72.90\(\pm\)0.65 & 85.45\(\pm\)0.26 \\ GCNII & 82.85\(\pm\)2.17 & 73.61\(\pm\)0.64 & 84.70\(\pm\)0.24 \\ GCNII* & 82.85\(\pm\)2.44 & 72.89\(\pm\)0.96 & 83.67\(\pm\)0.46 \\ SGC & 82.39\(\pm\)2.48 & 74.37\(\pm\)0.56 & 82.00\(\pm\)0.27 \\ APPNP & 79.14\(\pm\)3.17 & 74.12\(\pm\)0.62 & 82.86\(\pm\)0.29 \\ GPR-GNN & 87.24\(\pm\)0.71 & 73.79\(\pm\)0.67 & 85.07\(\pm\)0.34 \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracy comparison of different baseline models on Cora, Citeseer and Pubmed. \begin{table} \begin{tabular}{l c c c} \hline \hline & Cora & Citeseer & Pubmed \\ \hline GCN & 0.30\(\pm\)0.03 & 0.77\(\pm\)0.04 & 0.03\(\pm\)0.01 \\ GCN* & 0.91\(\pm\)0.18 & 2.12\(\pm\)0.16 & 0.05\(\pm\)0.01 \\ GAT & 0.29\(\pm\)0.03 & 0.65\(\pm\)0.02 & 0.03\(\pm\)0.01 \\ GCNII & 0.19\(\pm\)0.03 & 0.43\(\pm\)0.02 & 0.02\(\pm\)0.01 \\ GCNII* & 0.16\(\pm\)0.03 & 0.43\(\pm\)0.03 & 0.02\(\pm\)0.01 \\ SGC & 0.12\(\pm\)0.03 & 0.28\(\pm\)0.02 & 0.01\(\pm\)0.00 \\ APPNP & 0.16\(\pm\)0.03 & 0.25\(\pm\)0.02 & 0.01\(\pm\)0.00 \\ GPR-GNN & 0.24\(\pm\)0.03 & 0.55\(\pm\)0.02 & 0.02\(\pm\)0.00 \\ \hline \hline \end{tabular} \end{table} Table 3: Loss gap comparison of different baseline models on Cora, Citeseer and Pubmed. Figure 1: The loss value of GCN on training and test samples with the increase of iterations.
2303.14937
LEURN: Learning Explainable Univariate Rules with Neural Networks
In this paper, we propose LEURN: a neural network architecture that learns univariate decision rules. LEURN is a white-box algorithm that results in univariate trees and makes explainable decisions at every stage. In each layer, LEURN finds a set of univariate rules based on an embedding of the previously checked rules and their corresponding responses. Both the rule finding and final decision mechanisms are weighted linear combinations of these embeddings, hence the contributions of all rules are clearly formulated and explainable. LEURN can select features, extract feature importance, provide semantic similarity between a pair of samples, be used in a generative manner, and give a confidence score. Thanks to a smoothness parameter, LEURN can also controllably behave like decision trees or vanilla neural networks. Besides these advantages, LEURN achieves comparable performance to state-of-the-art methods across 30 tabular datasets for classification and regression problems.
Caglar Aytekin
2023-03-27T06:34:42Z
http://arxiv.org/abs/2303.14937v1
# LEURN: Learning Explainable Univariate Rules with Neural Networks ###### Abstract In this paper, we propose LEURN: a neural network architecture that learns univariate decision rules. LEURN is a white-box algorithm that results in univariate trees and makes explainable decisions at every stage. In each layer, LEURN finds a set of univariate rules based on an embedding of the previously checked rules and their corresponding responses. Both the rule finding and final decision mechanisms are weighted linear combinations of these embeddings, hence the contributions of all rules are clearly formulated and explainable. LEURN can select features, extract feature importance, provide semantic similarity between a pair of samples, be used in a generative manner, and give a confidence score. Thanks to a smoothness parameter, LEURN can also controllably behave like decision trees or vanilla neural networks. Besides these advantages, LEURN achieves comparable performance to state-of-the-art methods across 30 tabular datasets for classification and regression problems. ## 1 Introduction Although there is an immense amount of work on explainable artificial intelligence, a human-explainable white-box neural network still does not exist. The efforts in explainable artificial intelligence have focused on saliency maps [44, 53, 56, 41, 11, 15, 54, 30, 13], approximation by explainable methods [21, 25, 42, 52, 16, 51], or hybrid models [31, 37, 32, 33, 39, 2, 29, 49, 50]. Many of these works employ some form of decision trees in order to reach explainability. This is due to the commonly assumed understanding that decision trees are more explainable than neural networks, since they extract clear rules on input features. An interesting fact that has been pointed out by several studies is that neural networks can be equivalently represented as decision trees [5, 55, 7, 34, 46]. This seems to conflict with trying to explain neural networks with decision trees, as they are decision trees themselves. However, the nature of the decision trees extracted by neural networks is different from commonly used ones in two aspects: they are multivariate, and their number of branches grows exponentially with depth. An exponentially growing number of branches hurts global explainability, as it may even become infeasible to store the tree. Although one may still focus on local (sample-wise) explanations, multivariate rules are much harder to explain than univariate ones, as they mix features. Finally, with increasing neural network depth, it becomes even harder to make sense of these rules, as the contribution of each rule is not clear. Motivated by the above observations, in this paper, we propose a special neural network architecture (LEURN) that provides an exact univariate decision tree where each rule contributes linearly to the next rule selection and the final decision. In the first block, LEURN directly learns univariate rules and outputs an embedding of the rules and the responses to the rules. Then a linear layer is applied to this embedding in order to find the next rule. In successive blocks, all embeddings from previous blocks are concatenated and given as input to a linear layer. The final layer takes all previous embeddings as input and consists of a single linear layer and an activation chosen based on the application: sigmoid for binary classification, softmax for multiclass classification, and identity for regression.
Thus, both the rule selection and the final decision making are white-box processes with clear, linear contributions of previous rules. Besides human explainability, the proposed architecture provides additional properties such as feature importance, feature selection, pairwise semantic similarity, generative usage, and confidence scoring. ## 2 Related Work ### Neural Networks for Tabular Data We have reviewed the difference between commonly used decision trees and neural-network-extracted ones in the previous section. This difference also shows itself in the performance of neural networks and decision trees on tabular data. Particularly, in [43], [8] and [19], extensive comparisons of deep learning methods and tree-based methods were made. The comparisons tend to be in favor of tree-based methods in terms of accuracy, training speed, and robustness to dataset size. Following our discussion in the previous section, the performance difference should come either from the multivariate rules or the exponentially growing branches, as these are the only differences between neural network trees and common ones. This highly motivates us to evaluate LEURN on tabular data, as it removes one of these possibilities since it results in univariate trees. Hence, we believe the results of this evaluation are crucial to understanding whether the performance gap comes from exponentially growing branches. Motivated by the above, our application focus is structured data; hence, we review deep-learning-based methods for tabular data next. The majority of works investigate feature transformation [18][47][57][10], transfer learning [27][40], self-supervised learning [6][4][20][45], attention [4][20][45], regularization [23], univariate decisions [1][36], or explicitly tree-like models [22][35] in order to improve deep learning performance on tabular data. **Feature Transformation:** The approach in [47] suggests transforming tabular data into two-dimensional embeddings, which can be thought of as an image-like format. These embeddings are then utilized as the input for traditional convolutional neural networks. [57] and [10] also take a similar approach. [18] finds that embeddings of numerical features are beneficial for deep learning methods. **Transfer Learning:** In [27], the authors highlight a key advantage of neural models over others: their ability to learn reusable features and to be fine-tuned in new domains. The authors show this by using upstream data. [40] also investigates pre-training strategies for tabular data. **Attention:** [4] uses sequential attention to select relevant features at each decision step. TabTransformer [20] also makes use of self-attention. SAINT [45] applies attention differently: both on rows (samples) and columns (features). [26] is another method that utilizes both row and column attention. **Self-Supervised Learning:** [4][20][45][6] provide ways to incorporate self-supervised learning with unlabeled data to achieve better performance, especially in the small-dataset-size regime. **Regularization:** [23] investigates the effects of regularization combinations on MLPs' tabular data performance and finds that, with correct regularization, MLPs can outperform many alternatives. **Univariate Decisions:** The Neural Additive Models (NAMs) [1] are a type of ensemble consisting of multiple multilayer perceptrons (MLPs), each MLP being responsible for one input feature, thus making univariate decisions.
The output values are summed and passed through a logistic sigmoid function for binary classification, or just summed for regression. [36] proposes a new sub-category of NAMs that uses a basis decomposition of shape functions. The basis functions are shared among all features and are learned together for a specific task, making the model more scalable for large datasets with high-dimensional, sparse features. **Differentiable Explicit Tree Methods:** Although MLPs with piece-wise linear activation functions are already differentiable implicit trees [5], there have been efforts to create other special architectures that result in explicit trees, as follows. [22] proposes a gating mechanism for feature representation with attention-based feature selection and builds on differentiable non-linear decision trees. In [35], the authors present NODE, a deep learning architecture for tabular data that combines the strengths of ensembles of oblivious decision trees and end-to-end gradient-based optimization with multi-layer representation learning. We find that univariate decisions and differentiable tree methods are promising in terms of interpretability and in line with our work. Yet, the extracted decision trees in the literature are either soft [35] or non-linear [22], and the univariate decisions made in [1][36] are still not explainable. On the contrary, LEURN provides an explainable and exact univariate decision tree. Out of the above reviewed methods, the approach in [35] (NODE) is the most similar to ours. Hence, we find it important to highlight some key differences here. In NODE, each layer contains multiple trees, where each tree is trained via differentiable feature selection and decision making via entmax. Note that the decisions and feature selections made are soft, and there is no exact corresponding tree. In our proposed method, we make hard decisions via a quantized \(tanh\), which results in exact univariate decision trees. Another difference is that our feature selection is not explicit, but implicit through learnable feature importances. In NODE, the main computational power is spent on feature selection, and thresholds for selected features are directly learned. Instead, in our work, we spend the main computational power on learning thresholds/rules as weighted combinations of a special embedding of previous responses and previous thresholds. Most importantly, our method is able to explain exactly why a threshold was selected for a particular layer via the linear contributions of previous thresholds and their responses. Moreover, our method has additional properties such as providing semantic similarity, generative usage, confidence scoring, etc. ### Post-processing Explainers The linear contributions of previous rules to other rules and to the final decision in LEURN are similar to the additive feature attributions mentioned in [28]. LIME [38] and SHAP [28] stand out as popular approaches, as they approximate additive feature importances per sample. These approaches are not novel neural network architectures, but they extract approximate explanations from existing neural networks. Thus, their additive feature importances are local approximations. LEURN differs from these approaches because it is a novel neural network architecture and not a post-processing approximate explainer. Moreover, LEURN's additive contributions are exact, not approximations, as they are a built-in feature of the architecture.
Finally, LIME's and SHAP's additive contributions are only evident for features at the decision level, whereas LEURN's additive contributions apply at every processing stage to intermediate rules for finding the next rule, as well as to the final decision. ## 3 Proposed Method ### Decision Tree Analysis of Vanilla Neural Networks A vanilla neural network with \(n\) layers can be formulated as in Eq. 1. \[\begin{split} NN(\textbf{x})=\mathbf{W}_{n-1}^{T}\sigma(\mathbf{W}_{n-2}^{T}\sigma(...\mathbf{W}_{1}^{T}\sigma(\mathbf{W}_{0}^{T}\mathbf{x}+\mathbf{\beta}_{0})\\ +\mathbf{\beta}_{1}...)+\mathbf{\beta}_{n-2})+\mathbf{\beta}_{n-1}\end{split} \tag{1}\] In Eq. 1, \(\mathbf{W}_{i}\) and \(\mathbf{\beta}_{i}\) are respectively the weight matrix and bias vector of the network's \(i^{th}\) layer, \(\sigma\) is an activation function, and \(\mathbf{x}\) is the input. Let us consider that \(\sigma\) is a piece-wise linear activation. Then, a layer makes decisions based on whether linear combinations of input features are larger than a set of thresholds. This fact was used to extract decision trees from neural networks in [5, 55, 7, 34, 46]. The extracted decision trees differ from commonly used ones in two aspects, which we review as follows. ### Exponentially Growing Branches Neural networks extract trees of exponentially growing width due to shared rules embedded as layer weights. In each layer \(i\), the input's response to a rule is checked per filter \(j\), and the effective operation depends on which region of the activation the response falls into. This results in \(k^{\sum_{i}m_{i}}\) possible processing paths, where \(k\) is the total number of linear regions in the activation and \(m_{i}\) is the number of filters in layer \(i\). This massive tree size hurts explainability in the global sense, as it prevents seeing the whole decision mechanism at once. With today's very deep architectures, it even becomes infeasible to store neural-network-extracted trees. However, this feature does not prevent local explainability, where the goal is to understand the decision mechanism of the neural network per sample. At this point, we would also like to mention that exponential partitioning via shared rules has close connections to extrapolation, and thus generalization [5]; hence this may be a favorable property to keep. ### Multivariate Decisions The decision rules of neural networks are multivariate. This is obvious, as the rule checked per filter is whether a linear combination of all features is larger than a value indicated by the negative bias for that filter. The set of multivariate rules extracted by vanilla neural networks is difficult to make sense of, as they mix features. This is especially true for a large number of features, which is usually the case for modern neural networks. Moreover, multivariate decisions make the identification of redundant rules very difficult. As stated in [5], neural-network-extracted decision trees may consist of redundant rules, and the real depth/width may actually be a lot smaller than the exponential formulation provided in the previous subsection. However, checking whether a set of multivariate rules encapsulates another set is actually very difficult compared to univariate rules. Thus, in this work we wish to avoid this property of neural networks. ### LEURN: Learning Univariate Rules by Neural Networks In this section, we propose a special neural network architecture that results in univariate rules, while keeping the generalization abilities of neural networks. We will refer to this architecture as LEURN.
The main idea in LEURN is to learn univariate rules in the form of thresholds in each layer, to be directly applied to the batch-normalized (\(BN\)) neural network input \(\mathbf{x}\). In layer \(0\), we directly learn a rule vector \(\mathbf{\tau}_{0}\), whose elements are separate rules for each input variable. To find the rule vectors for the next layers, we proceed as follows. For a layer \(i\), we first find an indicator vector \(\mathbf{s}_{i}\) which indicates the category of each input feature \(j\) with respect to the rule \(\tau_{ij}\). This is achieved via a quantized \(tanh\) activation, which extracts thresholds around \(\mathbf{\tau}_{i}\) and outputs a unique value for each category whose boundaries are indicated by these thresholds. Then, we find an embedding \(\mathbf{e}_{i}\) by element-wise multiplication of the indicator vector \(\mathbf{s}_{i}\) with the activated threshold vector \(tanh(\mathbf{\tau}_{i})\). This jointly encodes the thresholds used in the layer and how the input responded to them. The next thresholds \(\mathbf{\tau}_{i+1}\) are learned by a linear layer (FC) applied to the concatenated embeddings from previous layers \(\mathbf{e}_{0:i}\). LEURN's rule learning is formulated in Eq. 2 and visualized in Fig. 1. \[\begin{split}\mathbf{s}_{i}=qtanh(BN(\mathbf{x})+\mathbf{\tau}_{i})\\ \mathbf{e}_{i}=\mathbf{s}_{i}tanh(\mathbf{\tau}_{i})\\ \mathbf{\tau}_{i+1}=\mathbf{W}_{i}^{T}\mathbf{e}_{0:i}+\mathbf{\beta}_{i}\end{split} \tag{2}\] The output of the neural network \(\mathbf{y}\) is calculated from all embeddings as follows. \[\mathbf{y}=\gamma(\mathbf{W}_{n-1}^{T}\mathbf{e}_{0:n-1}+\mathbf{\beta}_{n-1}) \tag{3}\] In Eq. 3, \(\gamma\) is the final activation of the neural network. This can be sigmoid or softmax for binary or multi-class classification, or simply identity for regression problems. In summary, LEURN makes univariate decisions via the \(\mathbf{\tau}\) rules and the quantized \(tanh\), with the number of branches equal to the number of \(tanh\) quantization regions. For every branch, a new rule or the final decision is found via a weighted linear combination of embeddings, so the contribution of each rule and response is additive.
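To make the mechanics of Eqs. 2 and 3 concrete, below is a minimal NumPy sketch of a LEURN forward pass; the convention of returning region centers as the unique \(qtanh\) values, the shapes, and the random parameters are our assumptions for illustration (the paper does not prescribe an implementation, and training would additionally need a straight-through-style gradient for the quantization):

```python
import numpy as np

def qtanh(x, k=2):
    """Quantized tanh: partition tanh(x)'s range [-1, 1] into k regions and
    return the region center, i.e., one unique hard value per category."""
    t = np.tanh(x)
    idx = np.clip(np.floor((t + 1.0) / 2.0 * k), 0, k - 1)
    return 2.0 * (idx + 0.5) / k - 1.0

def leurn_forward(x, tau0, Ws, betas, k=2):
    """Forward pass of Eqs. 2-3 for one (already batch-normalized) sample x.

    tau0:   (d,) learned rule vector for layer 0.
    Ws[i]:  ((i+1)*d, d) maps concatenated embeddings e_{0:i} to tau_{i+1};
    Ws[-1]: (n*d, c) maps all embeddings to the output logits.
    """
    tau, embeddings = tau0, []
    for W, beta in zip(Ws[:-1], betas[:-1]):
        s = qtanh(x + tau, k)                        # indicator vector s_i
        embeddings.append(s * np.tanh(tau))          # e_i = s_i * tanh(tau_i)
        tau = np.concatenate(embeddings) @ W + beta  # tau_{i+1} = W_i^T e_{0:i} + beta_i
    s = qtanh(x + tau, k)                            # last layer's rules and responses
    embeddings.append(s * np.tanh(tau))
    return np.concatenate(embeddings) @ Ws[-1] + betas[-1]  # apply gamma as needed

# Tiny usage example: d = 3 features, depth 2, a single regression output
rng = np.random.default_rng(0)
d, c = 3, 1
Ws = [rng.normal(size=(d, d)), rng.normal(size=(2 * d, c))]
betas = [rng.normal(size=d), rng.normal(size=c)]
print(leurn_forward(rng.normal(size=d), rng.normal(size=d), Ws, betas))
```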
#### 3.2.1 Properties

Next, we make a few observations about LEURN.

**Equivalent Univariate Decision Trees**: LEURN results in univariate decision trees. For an \(n\)-dimensional input, in each layer there are \(k^{n}\) possible indicator vectors, which correspond to \(k^{n}\) branches per layer, where \(k\) is the number of regions in the quantized \(tanh\). Each branch is separated into another \(k^{n}\) branches in the next layer. Many of these branches are very unlikely to be utilized during training due to the limited amount of data, so this rule-sharing property helps generalization at inference time, as the rules in unseen categories are made up from the seen ones. Note that this final property is a general property of vanilla neural networks as well.

**Explainability**: LEURN is more explainable compared to vanilla neural networks. In vanilla neural networks, the rules in the equivalent decision tree are in the form of multivariate inequalities, whereas in LEURN they are in the form of univariate inequalities. Univariate inequalities are easier for humans to make sense of. LEURN is in this sense a white box that makes explanations as in the following hypothetical example: The model checked and found that the price/earnings ratio (PER) of a company is smaller than 20 and the operating income increase (OII) was more than 5\(\%\). Based on this outcome, the model then checked and found that the PER of the company is larger than 15 and the OII was less than 10\(\%\). Therefore, the model decided to invest 1000$ in the company. The contribution of each rule check to the final invested money was as follows: PER being smaller than 20: +1000$, OII being more than 5\(\%\): +1000$, PER being larger than 15: -500$, OII being less than 10\(\%\): -500$. At a higher granularity of explainability, LEURN also provides how the PER\(<\)20 and OII\(>\)5 rules linearly contributed to finding the PER\(>\)15 and OII\(<\)10 rules.

**Easy Architecture Search**: The last linear layer has a fixed number of output units defined by the problem; the rest of the linear layers also have fixed output units, equal to the neural network input dimension. This makes neural architecture search easier, as there are no hyperparameters in the form of the number of features in a layer. The architecture-related hyperparameters are only the depth of the network and the number of quantization regions for \(tanh\).

**Feature Selection**: LEURN can do feature selection as follows. Each embedding element is directly related to a particular input feature. The embedded information is critical in finding the next rules or in the output of the neural network. Hence, if the absolute value of the unit in \(\mathbf{W}_{i}\) that corresponds to a particular feature is zero or close to zero, it is a clear indicator that the particular feature is uninformative, hence not selected in that layer.

**Feature Importance**: LEURN can extract global feature importance. The last layer's input is all the embeddings used throughout the neural network. Feature importance can then be measured simply by checking the weighted (via \(\mathbf{W}_{n-1}\)) contribution of each embedding element -hence the related feature and rule- in the classification, averaged over the training set.

**Pairwise Semantic Similarity**: LEURN can measure the semantic similarity between two samples by using any popular distance metric on the embeddings of these samples.

**Generative Usage**: LEURN can be directly used in a generative manner. A training sample can be fed to the neural network and its univariate rules can be extracted. This simply defines the category that sample belongs to, in the form of upper and lower limits for each feature of the input. Then, one can generate a different sample from the same category by sampling randomly from the univariate rules per feature. This is very difficult to do for vanilla neural networks, as it is harder to sample from multivariate inequalities.

**Parametrized Decision Smoothness**: Finally, LEURN provides a controlled smoothness in model predictions based on the number of quantized regions \(k\) in the \(qtanh\) function. As \(k\) grows, decision boundaries become smoother; this hyperparameter is useful to provide alternatives for different datasets based on their properties.

Figure 1: LEURN's Rule Learning

## 4 Experimental Results

### Preliminary Experiments on Toy Data

#### 4.1.1 Parametrized Decision Smoothness

In [19], the authors stated three key differences between neural network based methods and tree-based methods: decision boundary smoothness, robustness to uninformative features, and rotation invariance. In this section, we experiment on the behaviour of LEURN on a toy dataset and observe that with different \(qtanh\) quantization regions \(k\), LEURN behaves differently in terms of the above three aspects.
For all experiments in this section, we use the Half Moon dataset with 10000 training samples and 1000 validation samples. Accuracies are reported as the best validation performance within a training run.

First, we investigate decision boundary smoothness. For all LEURNs, we used a fixed depth of \(d=2\) and experimented on \(qtanh\) quantization regions in the following set: \(k\in\{2,5,10,20\}\). As can be observed from Fig. 2, as the number of quantization regions grows, the decision boundary becomes smoother. Note that this result is trivially evident without this experiment, but we still provide these figures for completeness.

Figure 2: Decision Boundaries of learned LEURN models with different \(qtanh\) quantization regions \(k\), for Half Moon dataset.

Second, we rotate the Half Moon dataset with angles in the following set: \(\{0,11.25,22.5,45\}\). We average the results of 10 trainings for each quantization region in \(\{2,5,10,20\}\). To provide a challenging case, we fixed the network depth to 1 in this experiment. As can be seen from Table 1, as the number of quantization regions grows, LEURN becomes more robust to rotation, which is a vanilla neural network-like behaviour according to [19]. Smaller quantization regions struggle to be robust to rotation, similar to decision tree behaviour, according to [19].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline k & 2 & 5 & 10 & 20 \\ \hline \(\mu\) & 96.79 & 98.44 & 98.76 & 99.01 \\ \hline \(\sigma\) & 3.22 & 1.91 & 1.64 & 1.26 \\ \hline \end{tabular} \end{table} Table 1: Means and standard deviations of LEURNs with different \(qtanh\) quantization regions, across different rotations.

Finally, we experiment on uninformative features. We add 10 additional randomly generated features to the Half Moon dataset in order to provide a challenging case with dominating uninformative features. We used a fixed depth of 3 in this experiment, so that all variants are able to achieve a perfect score without uninformative features. We observed that all LEURN variants with different quantization regions can successfully handle uninformative features by achieving a perfect score on the validation set.

We conclude from these three experiments that LEURN provides smoother boundaries (MLP-like) with rotation-invariant performance for higher quantization regions, and sharper boundaries with rotation-variant performance for lower quantization regions (DT-like). In all cases, LEURN was found to be robust to uninformative features.

#### 4.1.2 Global Feature Importance

We repeat the last experiment of the previous section with \(k=2,d=2\). We measure the feature importance as the contribution to the final classification. The process is as follows. The input of the last layer is a concatenation of the embeddings used throughout LEURN. Each embedding unit corresponds to information from a particular feature. Thus, the product of an embedding unit with its corresponding weight unit in the last layer is treated as that feature's importance. As there are multiple embeddings per feature, we first sum the contributions, then take the absolute value of the sum to get the final importance value. Note that these values are calculated over the training set. With this method, we find that the informative features in the last experiment are given at least 4.34 times more importance than the uninformative features. Feature importance can also be calculated in all intermediate fully connected layers. It can also be calculated locally, i.e. for a particular sample instead of over the entire training set.
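The following numpy sketch mirrors the importance computation just described; the layer-major layout of the concatenated embeddings and the single-output weight vector are our own assumptions for illustration:

```python
import numpy as np

def global_feature_importance(E, w_out, n_features):
    # E: (n_samples, n_units) concatenated embeddings over the training
    #    set, assumed ordered layer-major so unit u encodes feature
    #    u % n_features; w_out: (n_units,) last-layer weights of one class.
    contrib = E * w_out                              # weighted contributions
    # sum the multiple embeddings belonging to the same input feature,
    # take |sum| per sample, then average over the training set
    per_feat = contrib.reshape(len(E), -1, n_features).sum(axis=1)
    return np.abs(per_feat).mean(axis=0)
```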
#### 4.1.3 Semantic Similarity, Generative Usage and Confidence Scores

In this section, we use the RBF kernel response of a pair of LEURN embeddings as a similarity score between the corresponding pair of samples. We use the Half Moon dataset with several \(k\) and \(d\) hyperparameter choices. In Fig. 3, we select a reference sample from the training dataset, indicated by the green dot, and visualize semantically similar regions with red dots; we set the opacity parameter to the similarity value, so stronger red dots mean samples more similar to the reference. Smaller \(k\) and \(d\) result in more distinct and local categories, where everything in the category is strongly similar to the reference and similarity sharply stops at the category boundaries. On the contrary, larger \(k\) and \(d\) result in distributed and non-local similarities. One can select the desired behaviour based on the application.

Figure 3: Embedding similarities to reference sample for LEURN models with different \(qtanh\) quantization regions \(k\) and depths \(d\), for Half Moon dataset.

As LEURN generates univariate trees, there is a distinct category for each sample. This category can be easily found by storing the thresholds that are employed throughout LEURN and finding the closest upper and lower boundaries from these thresholds. Note that these thresholds are found through the \(\tau\) vectors and the \(qtanh\) quantization regions. Every sample within these boundaries has exactly 0 embedding distance, hence similarity 1, to the reference sample. An example of generated samples can be observed in Fig. 4. We observe that as \(k\) and \(d\) grow, the regions get tighter. This is expected, due to the fact that both \(k\) and \(d\) partition the space into more regions. At first, this may seem to conflict with the results of Fig. 3; however, we remind the reader that most red dots in Fig. 3 do not have exactly 1 similarity, and are thus -although semantically similar- not exactly in the same category as the reference sample. Moreover, some regions may vary depending on the category that each training finds due to the random initialization.

Figure 4: Generated samples from region that reference sample (green) falls into

The embedding similarity can also be used to assign confidence scores to predictions on the test set. We simply find the maximum similarity that a test sample has to the training dataset, in the nearest-neighbour sense. This value is directly used as a confidence score. In Fig. 5, we visualize confidence scores by setting the confidence score to opacity. Similar to our other observations in this section, lower \(k\) and \(d\) values result in larger regions that have high confidence, whereas higher \(k\) and \(d\) values result in tighter categories, thus lower confidence scores in extrapolated regions.

Figure 5: Confidence scores of samples
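The similarity and confidence computations described above reduce to a few lines; the RBF bandwidth `gamma` is an assumption, as the text does not specify it:

```python
import numpy as np

def rbf_similarity(e_a, e_b, gamma=1.0):
    # RBF kernel response between two LEURN embedding vectors.
    return float(np.exp(-gamma * np.sum((e_a - e_b) ** 2)))

def confidence(e_test, train_embeddings, gamma=1.0):
    # Confidence of a test prediction: maximum similarity to the training
    # set in the nearest-neighbour sense, as described above.
    d2 = np.sum((train_embeddings - e_test) ** 2, axis=1)
    return float(np.exp(-gamma * d2.min()))
```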
### Explainability

As observed from Eq. 2 and 3, the contribution of each embedding to finding the next threshold and to the final decision is simply linear. As an embedding is a scalar which encodes a previous threshold and the input's response to it, it is possible to write the linear weight of each previous threshold in finding the next threshold, according to the input response.

There can sometimes be redundant thresholds found in intermediate layers. For the sake of clarity, we handle these as follows. We check whether a previous threshold is above/below the upper/lower limits that were found via earlier thresholds. If so, this means that the threshold is redundant. Then, the contribution of that threshold in the network is added to the contribution of the previous threshold that defines the upper/lower limits. This provides a much clearer and more human-readable explanation. Note that this keeps the original processing unaltered; it is just another way to reformulate the neural network/decision tree. We then bypass the threshold-finding mechanisms in the neural network that result in redundant thresholds, to further simplify the tree while keeping it equivalent to the original one. Finally, we distribute the contribution of the biases equally to each rule.

Next, we test the explainability module on two cases: the Half-Moon and Adult Census Income (OpenML [48] \(\#1590\)) datasets.

#### Toy Data

Next, we explain the decisions of a LEURN trained for the Half Moon dataset with \(k=2\) and \(d=2\). The output of the explanation described in the above paragraph is visualized in Fig. 6, which is the direct output of the explainer module. The corresponding univariate rules utilized in each layer of LEURN are visualized in Fig. 7.

Figure 6: Explanation of LEURN for Half-Moon Dataset

As can be observed from Fig. 6 and Fig. 7, in the first layer, the network checks rules for the input features separately. Note that these rules \((x,1.018),(y,0.498)\) are the same for any sample/category, and these are the only directly learned thresholds in the network. Based on the position of our sample (indicated by the green dot in Fig. 7) w.r.t. these rules, the network decides to check other rules: \((x,0.181),(y,-0.490)\), respectively. This decision is sensible, since from the position of our sample it is not clear which class it belongs to under the previous thresholds. The new thresholds tighten the boundary, but are still not sufficient, due to a few still-included blue samples in the bounded region. So finally, the \((x,-0.998)\) rule is found, which is enough to accurately classify our sample.

We also check the contributions of each sensible threshold to the final classification score (last section of Fig. 6). Note that a positive score means the orange class here. \(x<0.181\) results in a negative contribution to class orange. This is meaningful, because most samples in this region belong to the blue class. \(y<-0.490\) results in a positive contribution to class orange. This is meaningful, because most samples in this region belong to the orange class. \(x>-0.998\) results in a positive contribution to class orange. This is meaningful, because all samples in this region belong to the orange class.

It is also interesting to examine how the previous thresholds contributed to finding the new thresholds. Let us examine the case in Layer 1. \(x<0.181\) pushes the network to check the \((x,-0.009)\) rule, whereas \(y<-0.490\) pushes it to check the \((x,-0.989)\) rule. Note that each is separately enough to classify our sample.

A limitation of the method in terms of explainability is that the rules applied on the input features and their linear contributions to either finding the next rule or making the final decision still need human effort for interpretation/speculation. For example, there is no explicit explanation given by the network for why \(y<-0.490\) results in a positive contribution to class orange, but we interpret/speculate it. Still, LEURN simplifies the interpretability problem to an unprecedentedly easy level for humans.
We believe the best use of LEURN is to assist a human expert in the application field, where the human expert is required to verify and make sense of the explanations that LEURN makes in the form of Fig. 6.

### Adult Census Income Dataset

In this section, we train LEURN on the Adult Census Income dataset. The Adult Census Income dataset contains samples with features of an adult's background, and the task is to predict whether the adult earns more than 50000$. In Table 2, we provide the rules and explanations LEURN finds for a sample. We also compare the additive feature contributions of LEURN and SHAP. Note that SHAP is applied to the trained LEURN model.

First, we analyze the individual feature contributions. The sample is classified as earning more than 50000$, and the largest contributing positive factor was the executive managerial job title, which makes excellent sense. Other contributing factors were: education number, Bachelor degree education, and married civilian marital status, which are sensible. Note that the first two are repeating features in the dataset, and both are given high importance, which is consistent. An interesting observation is that LEURN finds redundant rules for the final weight (fnlwgt) and capital gain features, as the found rules that the sample belongs to exceed the minimum and maximum values of these features in the dataset. The contributions coming from these redundant rules may be considered as a bias of the particular category. Finally, we observe a non-sensible contribution coming from capital loss. Although the capital loss interval here points towards the no-loss case, the model assigns a negative contribution, which may not make sense. It is important to note here that presenting these rules to humans is a very important feature, as non-sensible decisions such as this particular one become directly visible during monitoring. Although in this particular case it does not alter the final decision, it may be important in other scenarios.

As the LEURN scores are drawn directly from each feature's and rule's additive contribution in the final layer, they are exact. With this in mind, one would expect SHAP to extract similar importances per feature. However, it is clearly evident from Table 2 that the SHAP explanations deviate considerably from these exact contributions.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Feature & LEURN Rule & LEURN Scores & SHAP Scores \\ \hline \hline workclass & State-gov & 0.0131 & -0.2027 \\ \hline education & Bachelors & 0.1897 & -0.2514 \\ \hline marital-status & Married-civ-spouse & 0.1719 & 0.7643 \\ \hline occupation & Exec-managerial & 0.2967 & 0.1796 \\ \hline relationship & Husband & 0.0985 & 0.2535 \\ \hline race & White & 0.0117 & -0.0982 \\ \hline sex & Male & 0.0162 & 0.1020 \\ \hline native-country & USA & 0.1193 & -0.9945 \\ \hline age & \(71.75>X>66.79\) & 0.1234 & 0.9786 \\ \hline fnlwgt & \(14254151.00>X>-612719168.00\) & -0.0034 & -0.0244 \\ \hline education num & \(13.13>X>12.83\) & 0.2005 & 0.5779 \\ \hline capital-gain & \(7433545.31>X>-1413678.12\) & -0.0882 & -0.2000 \\ \hline capital loss & \(238.01>X>-10163.00\) & -0.0609 & -0.0912 \\ \hline hours per week & \(40.52>X>39.91\) & -0.0342 & 0.0619 \\ \hline \end{tabular} \end{table} Table 2: LEURN Rules, LEURN Scores and SHAP Scores assigned to features of a sample from Adult Income Census Dataset

Figure 7: Utilized univariate rules in every layer.
Some key differences we see from Table 2 are that SHAP finds native country being USA to be one of the biggest negative factors, whereas in reality the contribution from that feature is positive. SHAP also shows that a Bachelor degree, a state government job, and being white are negative indicators, whereas in LEURN's exact contributions they are positive. Another interesting observation is that SHAP gives more importance to being married to a civilian spouse than to the executive managerial job title, which does not make sense. We observe similar behaviours across the dataset.

### Comparison to State-of-the-Art in Tabular Data

In this section, we provide comparisons to popular methods for tabular data. The methods, datasets and evaluation procedure follow [45], as we use their code for retrieving the datasets, preprocessing and splits. Therefore, the performance figures of the competitor methods have been copied directly from [45]. The compared methods are Random Forests [9], Extra Trees [17], k-NN [3], LightGBM [24], XGBoost [12], CatBoost [14], TabNet [4], TabTransformer [20] and SAINT [45]. In Tables 3, 4 and 5, one can find comparisons of LEURN with state-of-the-art methods. Following [45], we have used datasets that are available in OpenML [48], and used OpenML identifiers in the comparison tables. The scores are given in area under the receiver operating characteristic curve, accuracy, and root mean square error for binary classification, multi-class classification and regression problems, respectively.

\begin{table} \begin{tabular}{||c|c c c c c c c c c c c|} \hline Method / OpenML ID & 188 & 1596 & 4541 & 40664 & 40685 & 40687 & 40975 & 41166 & 41169 & 42734 & Average \\ \hline \hline RandomForest & 0.653 & 0.953 & 0.607 & 0.951 & 0.999 & 0.697 & 0.967 & 0.671 & 0.358 & 0.743 & 0.760 \\ \hline ExtraTrees & 0.653 & 0.946 & 0.595 & 0.951 & 0.999 & 0.697 & 0.956 & 0.648 & 0.341 & 0.736 & 0.752 \\ \hline KNeighborsDist & 0.442 & 0.965 & 0.491 & 0.925 & 0.997 & 0.720 & 0.893 & 0.620 & 0.205 & 0.685 & 0.694 \\ \hline KNeighborsUnif & 0.422 & 0.963 & 0.489 & 0.910 & 0.997 & 0.739 & 0.887 & 0.605 & 0.189 & 0.693 & 0.689 \\ \hline LightGBM & 0.667 & 0.969 & 0.611 & 0.984 & 0.999 & 0.716 & 0.981 & 0.721 & 0.356 & 0.754 & 0.776 \\ \hline XGBoost & 0.612 & 0.928 & 0.611 & 0.984 & 0.999 & 0.730 & 0.984 & 0.707 & 0.356 & 0.752 & 0.766 \\ \hline CatBoost & 0.667 & 0.871 & 0.604 & 0.986 & 0.999 & 0.730 & 0.962 & 0.692 & 0.376 & 0.747 & 0.763 \\ \hline MLP & 0.388 & 0.915 & 0.597 & 0.992 & 0.997 & 0.682 & 0.984 & 0.707 & 0.378 & 0.733 & 0.737 \\ \hline TabNet & 0.259 & 0.744 & 0.517 & 0.665 & 0.997 & 0.275 & 0.871 & 0.599 & 0.243 & 0.630 & 0.580 \\ \hline TabTransformer & 0.660 & 0.715 & 0.601 & 0.947 & 0.999 & 0.697 & 0.965 & 0.531 & 0.352 & 0.744 & 0.721 \\ \hline SAINT & 0.680 & 0.946 & 0.606 & 1.000 & 0.999 & 0.735 & 0.997 & 0.701 & 0.377 & 0.752 & 0.779 \\ \hline LEURN & 0.644 & 0.963 & 0.595 & 0.995 & 0.997 & 0.768 & 0.994 & 0.654 & 0.343 & 0.746 & 0.769 \\ \hline \end{tabular} \end{table} Table 4: Comparison to state-of-the-art in Multi-Class Classification Datasets

\begin{table} \begin{tabular}{||c|c c c c c c c c c c c|} \hline Method / OpenML ID & 31 & 44 & 1017 & 1111 & 1487 & 1494 & 1590 & 4134 & 42178 & 42733 & Average \\ \hline \hline RandomForest & 0.778 & 0.986 & 0.798 & 0.774 & 0.910 & 0.928 & 0.908 & 0.868 & 0.840 & 0.670 & 0.846 \\ \hline ExtraTrees & 0.764 & 0.986 & 0.811 & 0.748 & 0.921 & 0.935 & 0.903 & 0.856 & 0.831 & 0.659 & 0.841 \\ \hline KNeighborsDist & 0.501 & 0.873 & 0.722 & 0.517 & 0.741 & 0.868 & 0.684 & 0.808 & 0.755 & 0.576 & 0.705 \\ \hline KNeighborsUnif & 0.489 & 0.847 & 0.712 & 0.516 & 0.734 & 0.865 & 0.669 & 0.790 & 0.764 & 0.578 & 0.696 \\ \hline LightGBM & 0.751 & 0.989 & 0.807 & 0.803 & 0.911 & 0.923 & 0.930 & 0.860 & 0.853 & 0.683 & 0.851 \\ \hline XGBoost & 0.761 & 0.989 & 0.781 & 0.802 & 0.903 & 0.915 & 0.931 & 0.864 & 0.854 & 0.681 & 0.848 \\ \hline CatBoost & 0.788 & 0.987 & 0.838 & 0.818 & 0.914 & 0.931 & 0.930 & 0.858 & 0.856 & 0.686 & 0.860 \\ \hline MLP & 0.705 & 0.980 & 0.745 & 0.709 & 0.913 & 0.932 & 0.910 & 0.818 & 0.841 & 0.647 & 0.820 \\ \hline TabNet & 0.472 & 0.978 & 0.422 & 0.718 & 0.625 & 0.677 & 0.917 & 0.701 & 0.830 & 0.603 & 0.694 \\ \hline TabTransformer & 0.764 & 0.980 & 0.729 & 0.763 & 0.884 & 0.913 & 0.907 & 0.809 & 0.841 & 0.638 & 0.823 \\ \hline SAINT & 0.790 & 0.991 & 0.843 & 0.808 & 0.919 & 0.937 & 0.921 & 0.853 & 0.857 & 0.676 & 0.859 \\ \hline LEURN & 0.772 & 0.985 & 0.817 & 0.810 & 0.915 & 0.930 & 0.912 & 0.858 & 0.848 & 0.649 & 0.850 \\ \hline \end{tabular} \end{table} Table 3: Comparison to state-of-the-art in Binary Classification Datasets

For all trainings, we perform automatic hyperparameter selection as follows. LEURN has three hyperparameters: depth (\(d\)), \(tanh\) quantization region number (\(k\)) and dropout rate (\(r\)). We define the search intervals \(d\in\{0,1,2,5,10\}\), \(k\in\{1,2,5,10\}\), \(r\in\{0,0.1,0.3,0.5,0.7,0.9\}\). During hyperparameter search, we first set \(k=1,r=0.9\) (the most regularized case) and find the best depth. We sweep \(d\) from smallest to largest and stop the search when the performance metric becomes worse. Next, with \(d=d_{best}\) and \(r=0.9\) set, we sweep \(k\) from smallest to largest and stop when there is no improvement. Finally, we set \(d=d_{best}\) and \(k=k_{best}\), and sweep \(r\) from largest to smallest and stop when there is no improvement. The main idea in the above process is to start from the most regularized case, and check the performance improvement as the regularization is softened in a controllable way. The search order of \(d,k,r\) is empirical. When the performance metric is checked in each stage, we perform 5 trainings with random training and validation sets, where the best performance is found via the best validation error. Once the hyperparameter search is complete, we perform 20 trainings with the selected hyperparameters on random training, validation and test splits. We save the best performing model on the validation data, and report the test performance averaged over the 20 trainings. The split ratios follow [45].
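A sketch of this staged search is given below; `train_eval` is an assumed callback returning the validation metric (averaged best-of-5 trainings, higher is better), and the stop-on-no-improvement rule condenses the stopping criteria described above:

```python
def staged_search(train_eval):
    ds = [0, 1, 2, 5, 10]
    ks = [1, 2, 5, 10]
    rs = [0.9, 0.7, 0.5, 0.3, 0.1, 0.0]   # swept from most to least dropout

    def sweep(values, score_fn):
        best_v, best_s = values[0], score_fn(values[0])
        for v in values[1:]:
            s = score_fn(v)
            if s <= best_s:                # stop once the metric stops improving
                break
            best_v, best_s = v, s
        return best_v

    d = sweep(ds, lambda d_: train_eval(d_, k=1, r=0.9))  # most regularized
    k = sweep(ks, lambda k_: train_eval(d, k_, r=0.9))
    r = sweep(rs, lambda r_: train_eval(d, k, r_))         # soften dropout
    return d, k, r
```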
As one can observe, LEURN is comparable and sometimes favorable to state-of-the-art methods, while having explainability advantages.

## 5 Conclusion

We have introduced LEURN: Learning explainable univariate rules with neural networks. We have shown that LEURN makes human-explainable decisions by its special design, which results in learning rules with additive contributions. Several other advantages of LEURN were highlighted and tested on a toy dataset. LEURN was tested on 30 public tabular datasets and found comparable to state-of-the-art methods.
2301.09689
Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense
In this work, we study the problem of decentralized multi-agent perimeter defense that asks for computing actions for defenders with local perceptions and communications to maximize the capture of intruders. One major challenge for practical implementations is to make perimeter defense strategies scalable for large-scale problem instances. To this end, we leverage graph neural networks (GNNs) to develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions. The proposed GNN-based learning network is trained by imitating a centralized expert algorithm such that the learned actions are close to those generated by the expert algorithm. We demonstrate that our proposed network performs closer to the expert algorithm and is superior to other baseline algorithms by capturing more intruders. Our GNN-based network is trained at a small scale and can be generalized to large-scale cases. We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
Elijah S. Lee, Lifeng Zhou, Alejandro Ribeiro, Vijay Kumar
2023-01-23T19:35:59Z
http://arxiv.org/abs/2301.09689v1
# Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense

###### Abstract

In this work, we study the problem of decentralized multi-agent perimeter defense that asks for computing actions for defenders with local perceptions and communications to maximize the capture of intruders. One major challenge for practical implementations is to make perimeter defense strategies scalable for large-scale problem instances. To this end, we leverage graph neural networks (GNNs) to develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions. The proposed GNN-based learning network is trained by imitating a centralized expert algorithm such that the learned actions are close to those generated by the expert algorithm. We demonstrate that our proposed network performs closer to the expert algorithm and is superior to other baseline algorithms by capturing more intruders. Our GNN-based network is trained at a small scale and can be generalized to large-scale cases. We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.

graph neural networks, perimeter defense, multi-agent systems, perception-action-communication loops, imitation learning

## 1 Introduction

The problem of perimeter defense games considers a scenario where the defenders are constrained to move along a perimeter and try to capture the intruders, while the intruders aim to reach the perimeter without being captured by the defenders (Shishika and Kumar, 2020). A number of previous works have solved this problem with engagements on a planar game space (Shishika and Kumar, 2018; Chen et al., 2021). However, in the real world, the perimeter may be represented by a three-dimensional shape, as the players (i.e., defenders and intruders) may have the ability to perform three-dimensional motions. For example, the perimeter of a building that defenders aim to protect can be enclosed by a hemisphere. As a result, the defender robots should be able to move in three-dimensional space. Aerial robots, for example, have been well studied in various settings (Chen et al., 2020; Nguyen et al., 2019; Lee et al., 2016, 2020a), and all these settings can be real-world use cases for perimeter defense; for instance, intruders may try to attack a military base in the forest while defenders aim to capture them.

In this work, we tackle the perimeter defense problem in a domain where multiple agents collaborate to accomplish a task. Multi-agent collaboration has been explored in many areas, including environmental mapping (Liu et al., 2022; Thrun et al., 2000), search and rescue (Miller et al., 2020; Baxter et al., 2007), target tracking (Ge et al., 2022; Lee et al., 2022b), on-demand wireless infrastructure (Mox et al., 2020), transportation (Xu et al., 2022; Ng et al., 2022), and multi-agent learning (Kim et al., 2021). Our approach employs a team of robots that work collectively towards the common goal of defending a perimeter. We focus on developing decentralized strategies for a team of defenders for various reasons: (i) teammates can be dynamically added or removed without disrupting an explicit hierarchy; (ii) a centralized system may fail to cope with the high dimensionality of a team's joint state space; and (iii) the defenders have a limited communication range and can only communicate locally.
To this end, we aim to develop a framework where a team of defenders collaborates to defend the perimeter using decentralized strategies based on local perceptions and communications. Specifically, we explore learning-based approaches to learn policies by imitating expert algorithms such as the maximum matching algorithm (Chen et al., 2014). The maximum matching algorithm, which runs an exhaustive search to find the best policy, is very computationally intensive at large scales, since the approach is combinatorial in nature and assumes global information. We utilize GNNs as the learning paradigm and demonstrate that the trained network can perform close to the expert algorithm. GNNs have a decentralized communication architecture that captures the interactions between neighboring agents, and a transferability that allows for generalization to previously unseen scenarios (Ruiz et al., 2021). We demonstrate that our proposed GNN-based network can be generalized to large scales in solving multi-robot perimeter defense games. With this insight, we make the following primary contributions in this paper:

**Framework for decentralized perimeter defense using graph neural networks.** We propose a novel learning framework that utilizes a graph-based representation for the perimeter defense game. To the best of our knowledge, we are the first to solve the decentralized hemisphere perimeter defense problem by learning decentralized strategies via graph neural networks.

**Robust perimeter defense performance with scalability.** We demonstrate that our methods perform close to an expert policy (i.e., the maximum matching algorithm (Chen et al., 2014)) and are superior to other baseline algorithms. Our proposed networks are trained at a small scale and can be generalized to large scales.

## 2 Related Work

### _Perimeter Defense_

In a perimeter defense game, defenders aim to capture intruders by moving along a perimeter, while intruders try to reach the perimeter without being captured by the defenders. We refer to (Shishika and Kumar, 2020) for a detailed survey. Many previous works dealt with engagements on a planar game space (Shishika and Kumar, 2018; Macharet et al., 2020; Chen et al., 2021; Bajaj et al., 2021; Hsu et al., 2022). For example, a cooperative multiplayer perimeter-defense game was solved on a planar game space in (Shishika and Kumar, 2018). In addition, an adaptive partitioning strategy based on intruder arrival estimation was proposed in (Macharet et al., 2020). Later, a formulation of the perimeter defense problem as an instance of flow networks was proposed in (Chen et al., 2021). Further, an engagement in a conical environment was discussed in (Bajaj et al., 2021), and a model with heterogeneous teams was addressed in (Hsu et al., 2022).

High-dimensional extensions of the perimeter defense problem have been explored recently in (Lee and Bakolas, 2021; Yan et al., 2022; Lee et al., 2020b, 2021, 2022a). For example, Lee and Bakolas (2021) analyzed the two-player differential game of guarding a closed convex target set from an attacker in high-dimensional Euclidean spaces. Yan et al. (2022) studied a 3D multiplayer reach-avoid game where multiple pursuers defend a goal region against multiple evaders. Lee et al. (2020b, 2021, 2022a) considered a game played between an aerial defender and a ground intruder. All of the aforementioned works focus on solving centralized perimeter defense problems, which assume that players have global knowledge of other players' states.
However, decentralized control becomes a necessity as we reach a large number of players. To remedy this problem, Velhal et al. (2022) formulated the perimeter defense game as a decentralized multi-robot spatio-temporal multitask assignment problem on the perimeter of a convex shape. Paulos et al. (2019) proposed a neural network architecture for training decentralized agent policies on the perimeter of a unit circle, where defenders have simple binary action spaces. Different from the aforementioned works, we focus on a high-dimensional perimeter, specialized to a hemisphere, with a continuous action space. We solve multi-agent perimeter defense problems by learning decentralized strategies with graph neural networks.

### Graph Neural Networks

We leverage graph neural networks as the learning paradigm because of their desirable properties: a decentralized architecture that captures the interactions between neighboring agents, and a transferability that allows for generalization to previously unseen cases (Gama et al., 2019; Ruiz et al., 2021). In addition, GNNs have shown great success in various multi-robot problems such as formation control (Tolstaya et al., 2019), path planning (Li et al., 2021), task allocation (Wang and Gombolay, 2020), and multi-target tracking (Zhou et al., 2021; Sharma et al., 2022). In particular, Tolstaya et al. (2019) utilized a GNN to learn a decentralized flocking behavior for a swarm of mobile robots by imitating a centralized flocking controller with global information. Later, Li et al. (2021) implemented GNNs to find collision-free paths for multiple robots from start positions to goal positions in obstacle-rich environments. They demonstrated that their decentralized path planner achieves near-expert performance with local observations and neighboring communication only, and can also be generalized to larger networks of robots. The GNN-based approach was also employed to learn solutions to combinatorial optimization problems in a multi-robot task scheduling scenario (Wang and Gombolay, 2020) and a multi-target tracking scenario (Zhou et al., 2021; Sharma et al., 2022).

## 3 Problem Formulation

### Motivation

Perimeter defense is a relatively new field of research that has been explored recently. One particular challenge is that high-dimensional perimeters add spatial and algorithmic complexities for defenders executing their optimal strategies. Although many previous works considered engagements on a planar game space and derived optimal strategies for 2D motions, the extension towards high-dimensional spaces is unavoidable for practical applications of perimeter defense games in real-world scenarios. For instance, the perimeter of a building that defenders aim to protect can be enclosed by a generic shape, such as a hemisphere. Since defenders cannot pass through the building and are assumed to stay close to the building at all times, they are constrained to move along the surface of the dome, which leads to the "hemisphere perimeter defense game." The intruder moves on the base plane of the hemisphere, which implies a constant altitude during motion. The movement of the intruder is constrained to 2D, since it is assumed that intruders may want to stay low in altitude to hide from the defenders in the real world. It is worth noting that the hemisphere defense problem is harder to solve than a problem where both agents are allowed to move freely in 3D space.
There were previous works in which both defenders and intruders could move in three-dimensional space (Yan et al., 2022, 2019, 2020). In all cases, the authors were able to explicitly derive the optimal solutions, even in multi-robot scenarios. Although our problem limits the dynamics of the defenders to the surface of the hemisphere, these constraints make finding an optimal solution intractable and challenging.

### Hemisphere Perimeter Defense

We consider a hemispherical dome with radius \(R\) as the perimeter. The hemisphere constraint allows the defenders to move safely around the protected structure (e.g., a building). In this game, consider two sets of players: \(\textbf{D}=\{D_{i}\}_{i=1}^{N}\) denoting \(N\) defenders, and \(\textbf{A}=\{A_{j}\}_{j=1}^{N}\) denoting \(N\) intruders. A defender \(D_{i}\) is constrained to move on the surface of the dome, while an intruder \(A_{j}\) is constrained to move on the ground plane. We will drop the indices \(i\) and \(j\) when they are irrelevant. An instance of 10 vs. 10 perimeter defense is shown in Figure 1.

Figure 1: Instance of 10 vs. 10 perimeter defense. Defenders are constrained to move on the surface of the dome while intruders are constrained to move on the ground plane.

The positions of the players in spherical coordinates are: \(\textbf{z}_{D}=[\psi_{D},\phi_{D},R]\) and \(\textbf{z}_{A}=[\psi_{A},0,r]\), where \(\psi\) and \(\phi\) are the azimuth and elevation angles, which gives the relative position as: \(\textbf{z}\triangleq[\psi,\phi,r]\), where \(\psi\triangleq\psi_{A}-\psi_{D}\) and \(\phi\triangleq\phi_{D}\). The positions of the players can also be described in Cartesian coordinates as \(\textbf{x}_{D}\) and \(\textbf{x}_{A}\). All agents move at unit speed; defenders capture intruders by closing within a small distance \(\epsilon\), and both defender and intruder are consumed during capture. An intruder wins if it reaches the perimeter (i.e., \(r(t_{f})=R\)) at time \(t_{f}\) without being captured by any defender (i.e., \(||\textbf{x}_{A_{i}}(t)-\textbf{x}_{D_{j}}(t)||>\epsilon,\forall D_{j}\in \textbf{D},\forall t<t_{f}\)). A defender wins by capturing an intruder or preventing it from scoring indefinitely (i.e., \(\phi(t)=\psi(t)=0\), \(r(t)>R\)). The main interest of this work is to maximize the number of captures by the defenders, given a set of initial configurations.

### Optimal Breaching Point

Given \(\textbf{z}_{D}\) and \(\textbf{z}_{A}\), we call the point on the perimeter at which the intruder tries to reach the target the _breaching point_, shown as \(B\) in Figure 2. We call the azimuth angle that forms the breaching point the _breaching angle_, denoted by \(\theta\), and the angle between \((\textbf{z}_{A}-\textbf{z}_{B})\) and the tangent line at \(B\) the _approach angle_, denoted by \(\beta\). It is proved in (Lee et al., 2020b) that, given the current positions of the defender \(\mathbf{z}_{D}\) and intruder \(\mathbf{z}_{A}\) as point particles, there exists a unique breaching point such that the optimal strategy for both defender and intruder is to move towards it, known as the _optimal breaching point_. The breaching angle and approach angle corresponding to the optimal breaching point are known as the _optimal breaching angle_, denoted by \(\theta^{*}\), and the _optimal approach angle_, denoted by \(\beta^{*}\).
As stated in (Lee et al., 2020b), although there exists no closed-form solution for \(\theta^{*}\) and \(\beta^{*}\), they can be computed at any time by solving two governing equations:

\[\beta^{*}=\cos^{-1}\left(\nu\frac{\cos\phi_{D}\sin\theta^{*}}{\sqrt{1-\cos^{2} \phi_{D}\cos^{2}\theta^{*}}}\right) \tag{1}\]

and

\[\theta^{*}=\psi-\beta^{*}+\cos^{-1}\left(\frac{\cos\beta^{*}}{r}\right) \tag{2}\]

Figure 2: Coordinates and relevant variables in the 1 vs. 1 hemisphere defense game.

### Target Time and Payoff Function

We call the time to reach \(B\) the _target time_ and define \(\tau_{D}(\mathbf{z}_{D},\mathbf{z}_{B})\) as the _defender target time_, \(\tau_{A}(\mathbf{z}_{A},\mathbf{z}_{B})\) as the _intruder target time_, and the following as the _payoff_ function:

\[p(\mathbf{z}_{D},\mathbf{z}_{A},\mathbf{z}_{B})=\tau_{D}(\mathbf{z}_{D}, \mathbf{z}_{B})-\tau_{A}(\mathbf{z}_{A},\mathbf{z}_{B}) \tag{3}\]

The defender reaches \(B\) faster if \(p<0\), and the intruder reaches \(B\) faster if \(p>0\). Thus, the defender aims to minimize \(p\) while the intruder aims to maximize it.

### Optimal Strategies and Nash Equilibrium

It is proven in (Lee et al., 2020b) that the optimal strategy for both defender and intruder is to move towards the optimal breaching point at maximum speed at all times. Let \(\Omega\) and \(\Gamma\) be the continuous \(v_{D}\) and \(v_{A}\) that lead to \(B\), so that \(\tau_{D}(\mathbf{z}_{D},\Omega)\triangleq\tau_{D}(\mathbf{z}_{D},\mathbf{z}_{B})\) and \(\tau_{A}(\mathbf{z}_{A},\Gamma)\triangleq\tau_{A}(\mathbf{z}_{A},\mathbf{z}_{B})\), and let \(\Omega^{*}\) and \(\Gamma^{*}\) be the optimal strategies that minimize \(\tau_{D}(\mathbf{z}_{D},\Omega)\) and \(\tau_{A}(\mathbf{z}_{A},\Gamma)\), respectively. Then the optimality in the game is given as a Nash equilibrium:

\[p(\mathbf{z}_{D},\mathbf{z}_{A},\Omega^{*},\Gamma)\leq p(\mathbf{z}_{D},\mathbf{ z}_{A},\Omega^{*},\Gamma^{*})\leq p(\mathbf{z}_{D},\mathbf{z}_{A},\Omega,\Gamma^{*}) \tag{4}\]

### Problem Definition

To maximize the number of captures during \(N\) vs. \(N\) defense, we first recall the dynamics of a 1 vs. 1 perimeter defense game. It is proven in (Lee et al., 2020b) that the best action for the defender in a one-on-one game is to move towards the _optimal breaching point_ (defined in Section 3.3). The defender reaches the optimal breaching point faster than the intruder does if the _payoff_ \(p\) (defined in Section 3.4) is negative, and the intruder reaches it faster if \(p>0\). From this, we infer that maximizing the number of captures in \(N\) vs. \(N\) defense is the same as finding a matching between defenders and intruders such that the number of assigned pairs with negative payoff is maximized. In an optimal matching, the number of negative payoffs stays the same throughout the overall game, since the optimality in each game of defender-intruder pairs is given as a _Nash equilibrium_ (see Section 3.5).

The expert assignment policy is a _maximum matching_ (Shishika and Kumar, 2018; Chen et al., 2014). To execute this algorithm, we generate a bipartite graph with **D** and **A** as the two sets of nodes (i.e., \(\mathcal{V}=\{1,2,..,N\}\)), and define the potential assignments between defenders and intruders as the edges. For each defender/node \(D_{i}\) in **D**, we find all the intruders/nodes \(A_{j}\) in **A** that are sensible by the defender and compute the corresponding payoffs \(p_{ij}\) for all the pairs. We say that \(D_{i}\) is _strongly assigned_ to \(A_{j}\) if \(p_{ij}<0\). Using the edge set \(\mathcal{E}\) given by maximum matching, we can maximize the number of strongly assigned pairs. For uniqueness, we choose a matching that minimizes the _value of the game_, which is defined as

\[V=\sum_{(D_{i},A_{j})\in\mathcal{E}^{*}}p_{ij}, \tag{5}\]

where \(\mathcal{E}^{*}\) is the subset of \(\mathcal{E}\) with negative payoffs (i.e., \(\mathcal{E}^{*}=\{(D_{i},A_{j})\in\mathcal{E}\mid p_{ij}<0\}\)). This unique assignment ensures that the number of captures is maximized at the earliest possible time.
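As a concrete illustration, a simple centralized stand-in for this expert assignment can be built on the Hungarian algorithm; the `BIG` penalty below is an assumption we introduce so that the solver first maximizes the number of strongly assigned pairs and then minimizes \(V\) among them (the expert policy itself runs an exhaustive maximum matching instead):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def expert_like_assignment(payoff):
    # payoff[i, j]: payoff p_ij of pairing defender i with intruder j.
    BIG = 1e6  # dominates any realistic payoff magnitude
    cost = np.where(payoff < 0, payoff, BIG)
    rows, cols = linear_sum_assignment(cost)
    # keep only the strongly assigned (negative-payoff) pairs
    return [(i, j) for i, j in zip(rows, cols) if payoff[i, j] < 0]
```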
However, running an exhaustive search with the maximum matching algorithm becomes very expensive as the team size increases. The method is combinatorial in nature and assumes centralized information with full observability. Instead, we aim to find decentralized strategies that use the local perceptions \(\{\mathcal{Z}_{i}\}_{i\in\mathcal{V}}\) (see Section 4.1). To this end, we formalize the main problem of this paper as follows.

**Problem 1** (Decentralized Perimeter Defense with Graph Neural Networks). Design a GNN-based learning framework to learn a mapping \(\mathcal{M}\) from the defenders' local perceptions \(\{\mathcal{Z}_{i}\}_{i\in\mathcal{V}}\) and their communication graph \(\mathcal{G}\) to their actions \(\mathcal{U}\), i.e., \(\mathcal{U}=\mathcal{M}(\{\mathcal{Z}_{i}\}_{i\in\mathcal{V}},\mathcal{G})\), such that \(\mathcal{U}\) is as close as possible to the action set \(\mathcal{U}^{g}\) selected by a centralized expert algorithm.

We describe our learning architecture for solving Problem 1 in detail in the following section.

## 4 Method

In this paper, we learn decentralized strategies for perimeter defense using graph neural networks. Inference with our approach runs in real time and is scalable to a large number of agents. We use an expert assignment policy to train a team of defenders who share information through communication channels. In Section 4.1, we introduce the perception module that processes the features input to the GNN. Learning the decentralized algorithm through the GNN and planning the candidate matching for the defenders are discussed in Section 4.2. The control of the defender team is explained in Section 4.3, and the training procedure is detailed in Section 4.4. The overall framework is shown in Figure 3. Regarding the choice of architecture, we decouple the control module from the learning framework, since directly learning the actions is unnecessary: learning an assignment between agents is sufficient, and the best actions can then be computed by the optimal strategies (Section 3.5).

### Perception

In this section, we assume \(N\) aerial defenders and \(N\) ground intruders. Each defender \(D_{i}\) is equipped with a sensor and faces outward from the perimeter with a field of view \(FOV\). The defenders' horizontal field of view \(FOV\) is chosen as \(\pi\), assuming a fisheye-type camera.

#### 4.1.1 Intruder features

For each \(i\), a defender observes a set of intruders \(A_{j}\), and the relative positions in spherical coordinates between \(D_{i}\) and \(A_{j}\) are represented by \(\mathcal{Z}_{i}^{A}=\left\{\mathbf{z}_{ij}^{A}\right\}_{j\in N_{A}^{f}}\), where \(N_{A}^{f}\) is the number of intruder features. The numbers of input features \(N_{A}^{f}\) and \(N_{D}^{f}\) are selected as fixed numbers of the closest detected and neighboring agents, respectively. Although a defender can detect any number of intruders within its sensing range, a fixed number of detections is selected so that the system is scalable.
In a decentralized setting, a defender should be able to decide its action based on its local perceptions. We experimentally chose the fixed number as 10, since the expert algorithm (i.e., the maximum matching) would always assign a defender to a robot among the 10 closest intruders.

#### 4.1.2 Defender features

To make the system scalable, we build communication with a fixed number of closest defenders. Each defender \(D_{i}\) communicates with nearby defenders \(D_{j}\) within its communication range \(r_{c}\). For each \(i\), the relative positions between \(D_{i}\) and \(D_{j}\) are represented by \(\mathcal{Z}_{i}^{D}=\left\{\mathbf{z}_{ij}^{D}\right\}_{j\in N_{D}^{f}}\), where \(N_{D}^{f}\) is the number of defender features. The selected number was 3, since communicating with many other robots would allow every defender to have full information of the environment (i.e., be centralized), and 3 is the minimum number with which the robots can collect information from every direction if we assume the robots are scattered. If there are fewer than 10 detected intruders or 3 neighboring defenders, we hand over dummy values to fill up the perception input matrix. It is important to keep the input features constant, since neural networks cannot handle varying feature sizes.

Figure 3: Overall framework. Perception module collects local information. Learning & Planning module processes the collected information using GNN through \(K\)-hop neighboring communications. Control module computes the optimal strategies and executes the controller to close the loop.

#### 4.1.3 Feature extraction

Feature extraction is performed by concatenating the relative positions of observed intruders and communicating defenders, forming the local perceptions \(\mathcal{Z}_{i}=\{\mathcal{Z}_{i}^{A},\mathcal{Z}_{i}^{D}\}\). The extracted features are fed into a multi-layer perceptron (MLP) to generate the post-processed feature vector \(\mathbf{x}_{i}\), which will be exchanged among neighbors through communications.

### Learning & Planning

We employ graph neural networks with \(K\)-hop communications. Defenders communicate their perceived features with neighboring robots. The communication graph \(\mathcal{G}\) is formed by connecting nearby defenders within the communication range \(r_{c}\), and the resulting adjacency matrix \(\mathbf{S}\) is given to the graph neural networks.

#### 4.2.1 Graph Shift Operation

We consider that each defender \(i,i\in\mathcal{V}\) has a feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{F}\), indicating the post-processed information from \(D_{i}\). By collecting the feature vectors \(\mathbf{x}_{i}\) from all defenders, we have the feature matrix for the defender team \(\mathbf{D}\) as:

\[\mathbf{X}=\begin{bmatrix}\mathbf{x}_{1}^{\mathsf{T}}\\ \vdots\\ \mathbf{x}_{N}^{\mathsf{T}}\end{bmatrix}=[\mathbf{x}^{1},\cdots,\mathbf{x}^{F }]\in\mathbb{R}^{N\times F}, \tag{6}\]

where \(\mathbf{x}^{f}\in\mathbb{R}^{N},f\in[1,\cdots,F]\) is the collection of feature \(f\) across all defenders; i.e., \(\mathbf{x}^{f}=[\mathbf{x}_{1}^{f},\cdots,\mathbf{x}_{N}^{f}]^{\mathsf{T}}\), with \(\mathbf{x}_{i}^{f}\) denoting feature \(f\) of \(D_{i},i\in\mathcal{V}\). We conduct a _graph shift operation_ for each \(D_{i}\) by a linear combination of its neighboring features, i.e., \(\sum_{j\in\mathcal{N}_{i}}\mathbf{x}_{j}\).
Hence, for all defenders \(\mathbf{D}\) with graph \(\mathcal{G}\), the feature matrix \(\mathbf{X}\) after the shift operation becomes \(\mathbf{S}\mathbf{X}\) with:

\[[\mathbf{S}\mathbf{X}]_{if}=\sum_{j=1}^{N}[\mathbf{S}]_{ij}[\mathbf{X}]_{j}^{ f}=\sum_{j\in\mathcal{N}_{i}}s_{ij}\mathbf{x}_{j}^{f}, \tag{7}\]

Here, the adjacency matrix \(\mathbf{S}\) is called the _Graph Shift Operator_ (GSO) (Gama et al., 2019).

#### 4.2.2 Graph Convolution

With the shift operation, we define the _graph convolution_ by a linear combination of the _shifted features_ on graph \(\mathcal{G}\) via \(K\)-hop communication exchanges (Gama et al., 2019; Li et al., 2020):

\[\mathcal{H}(\mathbf{X};\mathbf{S})=\sum_{k=0}^{K}\mathbf{S}^{k}\mathbf{X} \mathbf{H}_{k}, \tag{8}\]

where \(\mathbf{H}_{k}\in\mathbb{R}^{F\times G}\) represents the coefficients combining the \(F\) features of the defenders in the shifted feature matrix \(\mathbf{S}^{k}\mathbf{X}\), with \(F\) and \(G\) denoting the input and output dimensions of the graph convolution. Note that \(\mathbf{S}^{k}\mathbf{X}=\mathbf{S}(\mathbf{S}^{k-1}\mathbf{X})\) is computed by means of \(k\) communication exchanges with \(1\)-hop neighbors.

#### 4.2.3 Graph Neural Network

Applying a point-wise non-linearity \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) as the activation function to the graph convolution (Eq. 8), we define the _graph perception_ as:

\[\mathcal{H}(\mathbf{X};\mathbf{S})=\sigma(\sum_{k=0}^{K}\mathbf{S}^{k}\mathbf{X} \mathbf{H}_{k}). \tag{9}\]

Then, we define a GNN module by cascading \(L\) layers of graph perceptions (Eq. 9):

\[\mathbf{X}^{\ell}=\sigma\big{[}\mathcal{H}^{\ell}(\mathbf{X}^{\ell-1};\mathbf{ S})\big{]}\quad\text{for}\quad\ell=1,\cdots,L, \tag{10}\]

where the output feature of the previous layer \(\ell-1\), \(\mathbf{X}^{\ell-1}\in\mathbb{R}^{N\times F^{\ell-1}}\), is taken as input to the current layer \(\ell\) to generate the output feature of layer \(\ell\), \(\mathbf{X}^{\ell}\). Recall that the input to the first layer is \(\mathbf{X}^{0}=\mathbf{X}\) (Eq. 6). The output feature of the last layer \(\mathbf{X}^{L}\in\mathbb{R}^{N\times G}\), obtained via \(K\)-hop communications, represents the exchanged and fused information of the defender team \(\mathbf{D}\).
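A compact PyTorch sketch of one such graph-perception layer follows; the initialization and the choice of ReLU for \(\sigma\) are illustrative assumptions, while the \(\sum_{k}\mathbf{S}^{k}\mathbf{X}\mathbf{H}_{k}\) accumulation mirrors Eq. 8-9:

```python
import torch
import torch.nn as nn

class GraphPerception(nn.Module):
    # One graph-perception layer (Eq. 9): sigma( sum_k S^k X H_k ).
    def __init__(self, f_in, f_out, K):
        super().__init__()
        self.H = nn.ParameterList(
            [nn.Parameter(torch.randn(f_in, f_out) * 0.1) for _ in range(K + 1)]
        )

    def forward(self, S, X):
        # S: (N, N) graph shift operator; X: (N, f_in) defender features.
        out, SkX = 0.0, X
        for H_k in self.H:
            out = out + SkX @ H_k   # accumulate S^k X H_k
            SkX = S @ SkX           # next hop: S^{k+1} X
        return torch.relu(out)
```

In a deployed network, each multiplication by \(\mathbf{S}\) corresponds to one more round of 1-hop message exchange among the defenders.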
#### 4.2.4 Candidate matching

The output of the GNN, which represents the fused information from the \(K\)-hop communications, is then processed with another MLP to provide a candidate matching for each defender. Figure 3 shows a candidate matching instance with \(N_{A}^{f}=6\). Given a defender \(D_{i}\), we find the \(N_{A}^{f}\) closest intruders and number them from 1 to \(N_{A}^{f}\) clockwise. The main reason for numbering the nearby intruders clockwise is to interpret the feature outputs of our networks when identifying which intruders would be matched with which defenders. We could number them counterclockwise or in any arbitrary order. Since each defender learns decentralized strategies, it needs to specify an intruder to capture given its local perception. There are no global IDs for the intruders, so without loss of generality we simply assign the IDs clockwise. The output of the multi-layer perceptron is an assignment likelihood \(\mathcal{L}\), which gives the probability of each of the \(N_{A}^{f}\) intruder candidates being matched with the given defender. For instance, the expert assignment likelihood \(L_{i}^{g}\) for \(D_{i}\) in Figure 3 would be \([0.01,0.01,0.95,0.01,0.01,0.01]\) if the third intruder (i.e., \(A_{3}\)) is matched with \(D_{i}\) by the expert policy (i.e., maximum matching). The planning module selects the intruder candidate \(A_{j}\) such that the matching pair \((D_{i},A_{j})\) resembles the expert policy with the highest probability. It is worth noting that our approach renders a decentralized assignment policy, given that only neighboring information is exchanged.

#### 4.2.5 Permutation Equivalence

It is worth noting that our proposed GNN-based learning approach is scalable due to permutation equivalence. This means that a decentralized defender should be able to decide its action based on local perceptions that consist of an arbitrary number of unnumbered intruders. An instance of a perimeter defense game illustrating this property is shown in Figure 4. The plots focus on a single defender, and the intruders gradually approach the perimeter as time passes. The same intruders are colored in the same color across different time stamps. Notice that a new light-blue intruder enters the field of view of the defender at \(t=2\), and a purple intruder begins to appear at \(t=3\). Although an arbitrary number of intruders is detected at each time, our system gives IDs to the intruders, shown as blue numbers in Figure 4. We number them clockwise, but could have done so differently in any permutation (e.g., counterclockwise), because graph neural networks perform label-independent processing. The reason for the numbering is to specify which intruders would be matched with which defenders from the network outputs. Without loss of generality, we assign the IDs clockwise, but we note that these IDs are arbitrary, since the IDs can change at different time stamps. For instance, the yellow intruder's ID is 2 at \(t=1\) but becomes 3 at \(t=2,3\). Similarly, the red intruder's ID is 3 at \(t=1\) but changes to 4 at \(t=2\) and 5 at \(t=3\). In this way, we accommodate an arbitrary number of intruders, and thus our system is permutation equivalent.

### Control

The output from Section 4.2 is input to the defender strategy module in Figure 3. This module handles all the matched pairs \((D_{i},A_{j})\) and computes the optimal breaching points for each of the one-on-one hemisphere perimeter defense games (see Section 3.3). The defender strategy module collectively outputs the position commands, which point towards the optimal breaching points. The SO(3) command (Mellinger and Kumar, 2011), which consists of thrust and moments to control the robot at a low level, is then passed to the defender team \(\mathbf{D}\) for control. The state dynamics for the defender-intruder pair are detailed in (Lee et al., 2020b). The defenders move based on the commands to close the perception-action loop. Notably, the expert assignment likelihood \(\mathcal{L}^{g}\) would result in the expert action set \(\mathcal{U}^{g}\) (defined in Problem 1).

### Training Procedure

To train our proposed networks, we use imitation learning to mimic the expert policy given by maximum matching (explained in Section 3), which provides the optimal assignment likelihood \(\mathcal{L}^{g}\) (described in Section 4.2), given the defenders' local perceptions \(\{\mathcal{Z}_{i}\}_{i\in\mathcal{V}}\) and the communication graph \(\mathcal{G}\).
The training set \(\mathcal{D}\) is generated as a collection of these data: \(\mathcal{D}=\{(\{\mathcal{Z}_{i}\}_{i\in\mathcal{V}},\mathcal{G},\mathcal{L}^ {g})\}\). We train the mapping \(\mathcal{M}\) (defined in Problem 1) to minimize the cross-entropy loss between \(\mathcal{L}^{g}\) and \(\mathcal{L}\). We show that the trained \(\mathcal{M}\) provides \(\mathcal{U}\) that is close to \(\mathcal{U}^{g}\). The number of learnable parameters in our networks is independent of the team size \(N\). Therefore, we can train our networks at a small scale and generalize the model to large scales, given that defenders at any scale learn decentralized strategies based on the local perception of fixed numbers of agents.

#### 4.4.1 Model Architecture

Our model architecture consists of a 2-layer MLP with 16 and 8 hidden units to generate the post-processed feature vector \(\mathbf{x}_{i}\), a 2-layer GNN with 32 and 128 hidden units to exchange the collected information among defenders, and a single-layer MLP to produce the assignment likelihood \(\mathcal{L}\). The layers in the MLP and GNN are followed by ReLU.

Figure 4: Instance of perimeter defense game at different time stamps. The plots focus on a single defender and its local perceptions.

#### 4.4.2 Graph Neural Networks Details

In implementing the graph neural networks, we construct a 1-hop connectivity graph by connecting defenders within the communication range \(r_{c}=1\). Given that the default radius is \(R=1\), we foresee that three neighboring agents within 1 hop would provide a wide sensing region for the defenders. Accordingly, we assume that communications occur in real time with \(N_{D}^{f}=3\). Each defender gathers information as input features that consist of the \(N_{A}^{f}=10\) closest intruder positions and the \(N_{D}^{f}=3\) closest defender positions. The parameters used are summarized in Table 1.

#### 4.4.3 Implementation Details

The experiments are conducted using a 12-core 3.50GHz i9-9920X CPU and an Nvidia GeForce RTX 2080 Ti GPU. We implement the proposed networks using PyTorch v1.10.1 (Paszke et al., 2019) accelerated with CUDA v10.2 APIs. We use the Adam optimizer with a momentum of 0.5. The learning rate is scheduled to decay from \(5\times 10^{-3}\) to \(10^{-6}\) within 1500 epochs with batch size 64, using cosine annealing. We chose these hyperparameters for the best performance.

## 5 Experiments

### Datasets

We evaluate our decentralized networks using imitation learning, where the expert assignment policy is the maximum matching. The perimeter is a hemisphere with radius \(R\), defined by \(R=\sqrt{N/N_{def}}\), where \(N\) is the team size and \(N_{def}\) is the default team size. Since running the maximum matching is very expensive at large scales (e.g., \(N>10\)), we set the default team size to \(N_{def}=10\). In this way, \(R\) also represents the scale of the game; for instance, when \(N=40\), \(R\) becomes 2, which indicates that the scale of the problem setting is doubled compared to the setting with \(R=1\). Given the team size \(N=10\), our experimental arena has a dimension of \(10\times 10\times 1\) m. Offline, we randomly sample 10 million examples of the defender's local perception \(\mathcal{Z}_{i}\) and find the corresponding \(\mathcal{G}\) and \(\mathcal{L}^{g}\) to prepare the dataset, which is divided into a training set (60%), a validation set (20%), and a testing set (20%).
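The training procedure and dataset can be tied together in a short imitation-learning loop; everything named below (`model`, `loader`, the label format) is an assumed stand-in, with only the loss, optimizer settings, and cosine-annealing schedule taken from the text:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=1500):
    # model maps (features, gso) to logits over the N_A^f = 10 candidates;
    # loader yields ({Z_i}, G, L^g) tuples, with the expert's chosen
    # candidate index as the classification target.
    opt = torch.optim.Adam(model.parameters(), lr=5e-3, betas=(0.5, 0.999))
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(
        opt, T_max=epochs, eta_min=1e-6)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, gso, expert_idx in loader:  # expert_idx: argmax of L^g
            logits = model(feats, gso)         # (num_defenders, N_A^f)
            loss = loss_fn(logits, expert_idx)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()                           # cosine decay 5e-3 -> 1e-6
    return model
```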
\begin{table} \begin{tabular}{c|c c} \hline Parameter name & Symbol & Value \\ \hline Capturing distance & \(\epsilon\) & 0.02 \\ Field of view & \(FOV\) & \(\pi\) \\ Number of intruder features & \(N_{A}^{f}\) & 10 \\ Number of defender features & \(N_{D}^{f}\) & 3 \\ Communication range & \(r_{c}\) & 1 \\ Default team size & \(N_{def}\) & 10 \\ \hline \end{tabular} \end{table} Table 1: Parameter setup in implementing the graph neural networks.

## 5 Experiments

### Datasets

We evaluate our decentralized networks using imitation learning where the expert assignment policy is the maximum matching. The perimeter is a hemisphere with a radius \(R\), defined by \(R=\sqrt{N/N_{def}}\), where \(N\) is the team size and \(N_{def}\) is a default team size. Since running the maximum matching is very expensive at large scales (e.g., \(N>10\)), we set the default team size \(N_{def}=10\). In this way, \(R\) also represents the scale of the game; for instance, when \(N=40\), \(R\) becomes 2, which indicates that the scale of the problem setting is doubled compared to the setting with \(R=1\). Given the team size \(N=10\), our experimental arena has a dimension of \(10\times 10\times 1\) m. Offline, we randomly sample 10 million examples of the defender's local perception \(\mathcal{Z}_{i}\) and find the corresponding \(\mathcal{G}\) and \(\mathcal{L}^{g}\) to prepare the dataset, which is divided into a training set (60%), a validation set (20%), and a testing set (20%).

### Metrics

We are mainly interested in the percentage of intruders caught (i.e., number of captures / total number of intruders). At small scales (e.g., \(N\leq 10\)), an expert policy (i.e., the maximum matching) can be run and a direct comparison between the expert policy and our policy is available. At large scales (e.g., \(N>10\)), the maximum matching is too expensive to run. Thus we compare our algorithm with other baseline approaches: _greedy_, _random_, and _mlp_, which are explained in Section 5.3. To observe the scalability at small and large scales, we run a total of five different algorithms for each scale: _expert_, _gnn_, _greedy_, _random_, and _mlp_. In all cases, we compute the _absolute accuracy_, defined as the number of captures divided by the team size, to verify whether our network can be generalized to any team size. Furthermore, we also calculate the _comparative accuracy_, defined as the ratio of the number of captures by _gnn_ to the number of captures by another algorithm, to observe comparative results.

### Compared Algorithms

In the baseline algorithms, defenders do not communicate their "intentions" of which intruders would be captured by which neighboring defenders, for a fair comparison, since the GNN does not share such information either. For the GNN framework, each defender perceives nearby intruders, and the relative positions of the perceived intruders, not the "intentions," are shared by the GNN through communications. The power of the GNNs is to learn these "intentions" implicitly via K-hop communications. Consequently, the decentralized decision-making (for both GNN and baselines) may allow multiple defenders to aim to capture the same intruder, whereas a centralized planner knows the "intentions" of all the defenders and would avoid such a scenario.

#### 5.3.1 Greedy

The greedy algorithm can be run in polynomial time and is thus a good candidate algorithm to compare with our GNN-based approach. For a fair comparison, we run a decentralized greedy algorithm based on the local perception \(\mathcal{Z}_{i}\) of \(D_{i}\). We enable \(K\)-hop neighboring communications so that the sensible region of a defender is expanded as if the networking channels of the GNN were active. The defender \(D_{i}\) computes the payoff \(p_{ij}\) (see Section 3.4) for any sensible intruder \(A_{j}\) and greedily chooses the assignment that minimizes the payoff \(p_{ij}\).

#### 5.3.2 Random

The random algorithm is similar to the greedy algorithm in that the \(K\)-hop neighboring communications are enabled for the expanded perception. Among the sensible intruders, a defender \(D_{i}\) randomly picks an intruder to determine the assignment.

#### 5.3.3 Mlp

For the MLP algorithm, we train only the MLP of our proposed framework in isolation, excluding the GNN module. By comparing our GNN framework to this algorithm, we can observe whether the GNN yields any improvement.
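To make the baselines concrete, here is a minimal sketch of the decentralized greedy assignment from Section 5.3.1 (ours, not the paper's code; plain Euclidean distance stands in for the payoff \(p_{ij}\), whose exact form is defined in the paper's Section 3.4):

```python
import math

def greedy_assignment(defender_pos, intruder_positions, payoff):
    """Defender D_i picks the sensible intruder A_j minimizing payoff p_ij."""
    best_j, best_p = None, math.inf
    for j, a_pos in enumerate(intruder_positions):
        p = payoff(defender_pos, a_pos)
        if p < best_p:
            best_j, best_p = j, p
    return best_j

# Illustrative payoff: Euclidean distance as a placeholder for p_ij.
euclid = lambda d, a: math.dist(d, a)
choice = greedy_assignment((0.0, 1.0, 0.0),
                           [(0.5, 1.2, 0.1), (2.0, 0.0, 0.3)],
                           euclid)  # -> 0 (the closer intruder)
```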
### Results

We run the perimeter defense game in various scenarios with different team sizes and initial configurations to evaluate the performance of the learned networks. In particular, we conduct the experiments at small (\(N\leq 10\)) and large (\(N>10\)) scales. Snapshots of the simulated perimeter defense game in top view with our proposed networks for different team sizes are shown in Figure 5. The perimeter, defender states, intruder states, and breaching points are marked in green, blue, red, and yellow, respectively. We observe that the intruders try to reach the perimeter. Given the defender-intruder matches, the intruders execute their respective optimal strategies to move towards the optimal breaching points (see Section 3.5). If an intruder successfully reaches the perimeter without being captured by any defender, the intruder is consumed and leaves a marker labeled "Intrusion". If an intruder fails and is intercepted by a defender, both agents are consumed and leave a marker labeled "Capture". The points on the perimeter aimed at by intruders are marked as "Breaching point". In all runs, the game ends at the _terminal time_ \(T_{f}\), when all the intruders are consumed. See the supplemental video for more results.

Figure 5: **(A)-(C) Snapshots of simulated perimeter defense in top view using the proposed method _gnn_ for three different team sizes.**

As mentioned in Section 5.1, we run the five algorithms _expert_, _gnn_, _greedy_, _random_, and _mlp_ at small scales, and run _gnn_, _greedy_, _random_, and _mlp_ at large scales. As an instance, snapshots of the simulated 20 vs. 20 perimeter defense game in top view at terminal time \(T_{f}\) using the four algorithms are displayed in Figure 6. The four subfigures (a)-(d) show that these algorithms exhibit different performance, although the game begins with the same initial configuration in all cases. The numbers of captures by the algorithms _gnn_, _greedy_, _random_, and _mlp_ are 12, 11, 10, and 7, respectively. The overall results for the percentage of intruders caught by each of these methods are depicted in Figure 7. It is observed that _gnn_ outperforms the other baselines in all cases and performs close to _expert_ at small scales. In particular, given that our default team size \(N_{def}\) is 10, the performance of our proposed algorithm stays competitive with that of the expert policy near \(N=10\). At large scales, the percentage of captures by _gnn_ stays constant, which indicates that the trained network generalizes well to large scales even though the training was performed at a small scale. The percentage of captures by _greedy_ also appears constant, but _greedy_ performs much worse than _gnn_ as the team size gets large. At small scales, only a few combinations are available for matching defender-intruder pairs, so the _greedy_ algorithm performs similarly to the expert algorithm. As the number of agents increases, the number of possible matchings increases exponentially, so the _greedy_ algorithm performs worse as the problem complexity gets much higher. The _random_ approach performs worse than all other algorithms at small scales, but _mlp_ begins to perform worse than _random_ when the team size increases beyond 40. This tendency indicates that a policy trained only with an MLP is not scalable. Since the training is done with 10 agents, it is near-optimal around \(N=10\), but _mlp_ does not work at larger scales and even performs worse than the _random_ algorithm. It is confirmed that the GNN added to the MLP significantly improves the performance.
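As a quick illustration of the metrics defined in Section 5.2, applied to the Figure 6 capture counts above (a trivial sketch; the function names are ours):

```python
def absolute_accuracy(num_captures: int, team_size: int) -> float:
    """Number of captures divided by the team size."""
    return num_captures / team_size

def comparative_accuracy(captures_gnn: int, captures_other: int) -> float:
    """Ratio of captures by gnn to captures by another algorithm."""
    return captures_gnn / captures_other

# 20 vs. 20 game from Figure 6: gnn=12, greedy=11, random=10, mlp=7
print(absolute_accuracy(12, 20))      # 0.6
print(comparative_accuracy(12, 11))   # ~1.09 (gnn vs. greedy)
```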
Overall, the advantage of _gnn_ over the other algorithms is larger at large scales than at small scales, which validates that the GNN helps the network become scalable. To quantitatively evaluate the proposed method, we report the _absolute accuracy_ and _comparative accuracy_ (defined in Section 5.2) in Table 2 and Table 3. As expected, the absolute accuracy reaches its maximum when the team size approaches \(N=10\). The overall values of the absolute accuracy are fairly consistent except when \(N=2\). We conjecture that there may not be much information shared by the two defenders, and there could be no sensible intruders at all for some initial configurations. The comparative accuracy between _gnn_ and _expert_ shows that our trained policy gets much closer to the expert policy as \(N\) approaches 10, and we expect the performance of _gnn_ to remain close to that of _expert_ even at large scales. The comparative accuracy between _gnn_ and the other baselines shows that our trained networks perform much better than the baseline algorithms at large scales (\(N\geq 40\)), with an average of 1.5 times more captures. The comparative accuracy between _gnn_ and _random_ is somewhat noisy across team sizes due to the nature of randomness, but we observe that our policy outperforms the random policy with an average of 1.8 times more captures at small and large scales. We observe that _mlp_ performs much worse than the other algorithms at large scales. Based on these comparisons, we demonstrate that our proposed networks, which are trained at a small scale, can generalize to large scales. Intuitively, one may think that _greedy_ would perform the best in a decentralized setting since each defender does its best to minimize the _value of the game_ (defined in Eq. 5). However, _greedy_ does not know the intentions of nearby defenders (e.g., which intruders they aim to capture), so it cannot achieve performance close to the centralized expert algorithm. Our method uses graph neural networks to exchange information among nearby defenders, which perceive their local features, when planning the final actions of the defender team; implicit information about where the nearby defenders are likely to move is therefore transmitted to each neighboring defender. While the centralized expert policy knows all the defenders' intentions, our GNN-based policy learns these intentions through its communication channels. This collaboration within the defender team is the key for our _gnn_ to outperform the _greedy_ approach. These results validate that the implemented GNNs are well suited to our problem, offering decentralized communication that captures neighboring interactions and transferability that allows generalization to unseen scenarios.

### Further Analysis

#### 5.5.1 Performance vs. Number of expert demonstrations

To analyze the algorithm performance, we trained our GNN-based architecture with different numbers of expert demonstrations (e.g., 10 million, 1 million, 100k, and 10k). The percentage of intruders caught (average and standard deviation over 10 trials) for team sizes \(10\leq N\leq 50\) is shown in Figure 8. The plot validates that our proposed network learns better with more demonstrations.

#### 5.5.2 Performance vs. Perimeter radius

We tested the GNN-based proposed method with different perimeter radii. Intuitively, given a fixed number of agents, increasing the radius may lead to a failure of the defense system.
We set the default team size of defenders to 40 and increase the perimeter radius until the percentage of intruders caught converges to zero. As shown in Figure 9, the percentage decreases as the radius grows from 100m to 800m, converging to zero.

Figure 8: Sample efficiency with different numbers of expert demonstrations.

Figure 9: Percentage of intruders caught with various perimeter radii.

#### 5.5.3 Performance vs. Number of intruders sensed

The performance of our GNN-based approach with different numbers of intruders sensed (i.e., \(N_{A}^{f}\)) is shown in Figure 10. We ran the experiments with \(N_{A}^{f}\) set to 1, 3, 5, and 10, since no ground-truth expert policy is available to generate training data for numbers larger than 10. We observe that the more intruder features are sensed, the better the performance. Further, the performance discrepancy tends to shrink as the team size gets bigger. For some team sizes (e.g., 40), a higher \(N_{A}^{f}\) performs much better, but this is expected based on the initial configuration of the game. For instance, if the initial configuration is very sparse, a defender benefits from a higher \(N_{A}^{f}\), and the percentage of intruders caught will be higher.

Figure 10: Percentage of intruders caught with different numbers of intruders sensed.

### Limitations

As perimeter defense is a relatively new field of research, this work has underlying limiting assumptions. In the problem formulation, we assume the robots are point particles. Accordingly, we assume optimal trajectories obey first-order dynamics assumptions. There is preliminary work (Lee et al., 2021) to bridge the gap between the point-particle assumptions and three-dimensional robots for one-on-one hemisphere perimeter defense, and we hope to extend the idea of that work to our multi-agent perimeter defense problem in the future. Another limitation is that no expert policy is available at large scales against which our proposed method can be compared. Running the maximum matching algorithm is very expensive at large scales, so we compare our GNN-based algorithm with other baseline methods. Although the consistent performance of the tested algorithms across different scales confirms that our trained networks can be generalized to large scales, we hope to explore another algorithm that can be used as an expert policy at large scales to replace the maximum matching. One consideration is utilizing reinforcement learning, since the algorithm's performance at large scales would then be available.

## 6 Conclusion

This paper proposes a novel framework that employs graph neural networks to solve the decentralized multi-agent perimeter defense problem. Our learning framework takes the defenders' local perceptions and the communication graph as inputs and returns actions to maximize the number of captures for the defender team. We train deep networks supervised by an expert policy based on the maximum matching algorithm. To validate the proposed method, we run the perimeter defense game with different team sizes using five different algorithms: _expert_, _gnn_, _greedy_, _random_, and _mlp_. We demonstrate that our GNN-based policy stays close to the expert policy at small scales and that the trained networks generalize to large scales. One future work is to implement vision-based local sensing for the perception module, which would relax the assumption of perfect state estimation.
Realizing multi-agent perimeter defense with vision-based perception and communication within the defender team is the end goal. Another future research direction is to find a centralized expert policy for multi-robot systems by utilizing reinforcement learning.

## Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

## Author Contributions

EL, LZ, and VK contributed to the conception and design of the study. EL and AR performed the statistical analysis. EL wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

## Acknowledgments

We gratefully acknowledge the support from ARL DCIST CRA under Grant W911NF-17-2-0181, NSF under Grants CCR-2112665, CNS-1446592, and EEC-1941529, ONR under Grants N00014-20-1-2822 and N00014-20-S-B001, Qualcomm Research, NVIDIA, Lockheed Martin, and C-BRIC, a Semiconductor Research Corporation Joint University Microelectronics Program cosponsored by DARPA.

## Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
2310.01580
Active Learning on Neural Networks through Interactive Generation of Digit Patterns and Visual Representation
Artificial neural networks (ANNs) have been broadly utilized to analyze various data and solve different domain problems. However, neural networks (NNs) have been considered a black box operation for years because their underlying computation and meaning are hidden. Due to this nature, users often face difficulties in interpreting the underlying mechanism of the NNs and the benefits of using them. In this paper, to improve users' learning and understanding of NNs, an interactive learning system is designed to create digit patterns and recognize them in real time. To help users clearly understand the visual differences of digit patterns (i.e., 0 ~ 9) and their results with an NN, integrating visualization is considered to present all digit patterns in a two-dimensional display space with supporting multiple user interactions. An evaluation with multiple datasets is conducted to determine its usability for active learning. In addition, informal user testing is managed during a summer workshop by asking the workshop participants to use the system.
Dong H. Jeong, Jin-Hee Cho, Feng Chen, Audun Josang, Soo-Yeon Ji
2023-10-02T19:21:24Z
http://arxiv.org/abs/2310.01580v1
# Active Learning on Neural Networks through Interactive Generation of Digit Patterns and Visual Representation

###### Abstract

Artificial neural networks (ANNs) have been broadly utilized to analyze various data and solve different domain problems. However, neural networks (NNs) have been considered a black box operation for years because their underlying computation and meaning are hidden. Due to this nature, users often face difficulties in interpreting the underlying mechanism of the NNs and the benefits of using them. In this paper, to improve users' learning and understanding of NNs, an interactive learning system is designed to create digit patterns and recognize them in real time. To help users clearly understand the visual differences of digit patterns (i.e., \(0\sim 9\)) and their results with an NN, integrating visualization is considered to present all digit patterns in a two-dimensional display space with supporting multiple user interactions. An evaluation with multiple datasets is conducted to determine its usability for active learning. In addition, informal user testing is managed during a summer workshop by asking the workshop participants to use the system.

## 1 Introduction

Artificial Intelligence (AI) has significant impacts in many disciplines, such as medicine [1], education [2], and earthquake analysis [3]. AI enables machines to learn from data and perform various tasks as humans do [4]. Examples of AI-enabled technologies include abnormal behavior detection in banking transactions, home protection using camera-based monitoring and object recognition, customized learning mechanisms providing tailored lessons to students, and driving assistance with upgraded and autonomous driving technologies. AI has also received much attention as a tool in education to improve students' learning and knowledge acquisition [5]. Despite the powerful capability of AI and its numerous applications in our daily lives, AI has been known as a black box tool because it is not easy to understand the logic behind its internal computations [6]. This motivated researchers to design trustworthy and interpretable AI systems [7; 8]. However, it is still difficult to understand AI systems due to the well-known issue of transparency [9], because the machine or deep learning algorithms used in AI models are highly complex. For instance, neural networks (NNs) or deep NNs (DNNs) often include thousands of artificial neurons to learn from and process large amounts of data. Because of the numerous neurons and their complex interconnections, it is difficult to determine how decisions are made [10]. Understanding how AI models work and generate their predictions is a critical step in interpreting the meaning of AI's outcomes. However, it remains challenging to understand data using a predictive model that finds patterns by training on the data and analyzes the difference between the predictions and the patterns. This paper aims to integrate AI technologies into learning through hands-on practice. Specifically, an interactive learning system is designed to assist users in understanding NNs clearly through multiple learning activities. To support an interactive learning environment for NNs, we have designed the system with a pattern generator, where a user can generate digit patterns. In addition, a visual representation of the patterns is provided to help the user understand the differences among the generated patterns.
Whenever the user creates patterns, the user can experience NN training and recognition in real time. To represent the patterns in a 2D display space, Principal Component Analysis (PCA) is used to reduce the dimensions of the patterns and present them in a lower-dimensional space [11, 12]. By interactively navigating the display space, the user can identify the similarities and differences between the patterns. To evaluate the system's effectiveness for active learning, 2,400 digit patterns are generated and used to test the system. A broadly known handwritten digits dataset (called MNIST) is also used to determine the system's capability of supporting real-time interactive digit analysis. To understand the usefulness of the system, informal user testing was organized during a summer workshop by asking the workshop participants to use the system. We found that they understood NNs well through active learning with the system.

The rest of this paper is structured in six sections. Section 2 provides previous studies on designing educational systems for understanding AI. In Section 3, the designed system is explained. Section 4 includes a detailed explanation of the applied neural networks and visual representation. Section 5 shows the conducted evaluation of interactive learning on recognizing digits in real time. After discussing interesting insights in Section 6, we conclude this paper and outline possible future work in Section 7.

## 2 Related Work

Due to the high interest in AI, many educational institutions, including colleges and high schools, have introduced new AI degree programs [13]. AI has become a powerful paradigm in scientific research communities due to its diverse applications in broad and various domains [14]. Due to this popularity, many students have shown a strong interest in understanding AI. In particular, they have shown high interest in deep learning (DL) because it is commonly used to detect complex patterns in high-dimensional data with little or no human intervention. However, understanding the underlying ideas behind the output predictions in DL is not trivial due to the black-box nature of the AI models [15, 16, 17]. Li et al. [18] explored various visualization techniques to understand the structure of neural loss functions and their effectiveness. Chatzimparmpas et al. [19] emphasized how important information visualization is in understanding machine learning (ML) models and enhancing trust in ML. Although they highlighted the importance of utilizing visualization in ML, their primary focus was on addressing specific domain problems instead of helping students understand the internal computation of ML. Computer science education researchers have developed various tools to improve students' knowledge of AI technologies. Mariescu-Istodor and Jormanainen [20] developed a web-based tool for high school students to enhance their knowledge of recognizing objects using ML. They designed the tool to identify objects using a camera and determine their object classes in real time based on training samples. In this tool, when a student gives a wrong answer, the student sees a question mark rather than a message saying the answer is wrong. If an object has been misclassified (i.e., the student gives a wrong answer), the tool can fix such a mistake by training on additional samples and classifying the object correctly. The authors aimed to design the tool to motivate students by improving their class engagement.
You and Yin [15] developed a device (called Omega) to enhance college students' understanding of NNs by addressing the black box nature and representing their interactions during the NN training steps. In particular, the device visually presented the weight changes in the hidden layers during NN training. Lamy and Tsopra [16] introduced a visual translation of simple NNs using rainbow boxes with added interactive functionality to support visual interpretation. Kim and Shim [21] emphasized the need to provide AI education for non-engineering major students by creating a visual solution. Although numerous studies have been conducted to design practical approaches to improve students' learning, most studies mainly aimed to teach users the NN training steps. Unlike the existing approaches explained above, our study differs in that the proposed interactive learning system enables users to create input data patterns and train NNs interactively. This significantly increases the user's learning and knowledge of NNs because it supports real-time computation and recognition of the user-generated digit patterns.

## 3 System Design

We hypothesized that supporting real-time interactive data generation, training, and recognition through NNs could increase users' understanding of the underlying idea of NNs. Based on this hypothesis, we have designed an active learning system (named Neural Network Trainer) to support the user in generating digit patterns and recognizing them with NNs. We also designed an additional system (called Neural Network Tester) to evaluate multiple user-generated patterns simultaneously. To support active learning, a graphical user interface (GUI) was integrated to advance users' understanding of NNs through direct interaction with the system. The Neural Network Trainer system consists of two layouts - Digit Pattern Generator (Figure 1(a)-left) and Visual Analyzer (Figure 1(a)-right). The Digit Pattern Generator includes a pattern grid (Figure 1(a)-(i)) with multiple control panels (Figure 1(a)-(ii)\(\sim\)(v)). The pattern grid allows the user to create digit patterns (i.e., 0 \(\sim\) 9) by clicking each cell in the grid. It has \(12\times 8\) grid cells representing a digit pattern. Each cell holds binary information as 1 or 0. The system shows the size of the NNs, including the nodes in the input, hidden, and output layers (Figure 1(a)-(ii)). Two list boxes keep all created digit patterns and the total number of patterns representing each digit (Figure 1(a)-(iii) and 1(a)-(iv), respectively). Real-time training and testing of the NNs are handled with the control buttons (Figure 1(a)-(v)). The result of the recognized digit pattern with the NN appears with probability distributions (Figure 1(a)-(vi)). The Visual Analyzer represents user-generated digit patterns in a 2D display space by applying PCA. The Neural Network Tester system supports evaluating multiple user-generated NNs with various testing datasets. The primary purpose of this system is to help workshop participants understand the performance of their generated NNs in recognizing digits competitively with others. Figure 1(b) demonstrates the evaluation of three NNs created by three groups of users. Similar to the Digit Pattern Generator, it has a pattern grid (Figure 1(b)-(i)) with multiple control panels (Figure 1(b)-(ii)\(\sim\)(v)). A list box (Figure 1(b)-(iii)) shows the loaded user-generated NNs.
With a testing dataset, it evaluates the NNs, showing the overall accuracies (Figure 1(b)-(iii)) and probability estimations (Figure 1(b)-(iv)). The probability estimation indicates how each pattern is recognized with each NN. If a digit is recognized correctly, a reddish bar graph is displayed; if not, a bluish bar graph is displayed to denote incorrect recognition.

Figure 1: Two systems are designed as (A) neural network trainer and (B) neural networks tester. The neural network trainer system consists of two layouts – digit pattern generator (left) and visual analyzer (right). The digit pattern generator supports the user in generating digit patterns and training neural networks. The visual analyzer represents user-generated digit patterns on a PCA projection space. The neural networks tester system evaluates multiple user-generated patterns with testing datasets.

## 4 Design of Neural Networks

To support real-time digit pattern generation and recognition, a three-layered NN based on the backpropagation method [22] is used. It propagates errors back through the network to optimize the weights. The input layer has 96 nodes, matching the cells of each digit pattern. The output layer has 10 nodes to represent the digits 0 through 9. Although one or more hidden layers are often utilized in designing NNs, we use one hidden layer consisting of 48 nodes for speedy computation. For performance optimization, a gradient descent method is used because it allows parameter updates of the weights. The sum of squared errors (SSE) is used as the loss function \(L\) to measure the difference between the predicted (\(\hat{y}_{i}\)) and actual (\(y_{i}\)) outputs:

\[L=\frac{1}{N}\sum_{o=1}^{N}\sum_{j=1}^{C}(y_{o,j}-\hat{y}_{o,j})^{2} \tag{1}\]

where \(N\) is the number of digit samples, \(C\) is the number of classes, and \(y_{o,j}\) is the observation \(y_{o}\) for class \(j\). To run the NNs, a momentum (\(\gamma\)) and learning rate (\(\eta\)) are defined to accelerate the training speed and improve the accuracy of the NNs. Momentum is a method that expedites gradient descent by increasing the step size toward the global minimum. It is critical to find an optimal momentum value because a too-large value may skip the global minimum, while a too-small value may get stuck in local minima. The learning rate controls how quickly the model adapts to the problem of training digit patterns. However, similar to tuning the momentum value, using an optimal learning rate is critical because it impacts the speed of convergence and whether the global optimum can be reached. \(\Delta W_{ij}\) and \(\Delta W_{ij}^{t-1}\) represent the weight changes in the current and previous training iterations. They are given by:

\[\Delta W_{ij}=\gamma\Delta W_{ij}^{t-1}-\eta\frac{\partial L}{\partial W_{ij}}, \tag{2}\]

where \(\frac{\partial L}{\partial W_{ij}}\) denotes the partial derivative of the loss function \(L\) used to update the weights by descent with learning rate \(\eta\); the factor of \(-1\) moves the weights towards the global minimum. The values of \(\gamma\) and \(\eta\) are determined based on empirical analysis for performance optimization in training the data [23].
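To make Eqs. 1 and 2 concrete, here is a small NumPy sketch (ours, not the system's implementation; the learning-rate and momentum values are placeholders, since the paper tunes \(\gamma\) and \(\eta\) empirically):

```python
import numpy as np

def sse_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Eq. 1: mean over the N samples of the squared error summed over classes."""
    return float(np.mean(np.sum((y_true - y_pred) ** 2, axis=1)))

def momentum_update(w, grad, prev_dw, lr=0.1, gamma=0.9):
    """Eq. 2: dW = gamma * dW_prev - lr * dL/dW, then apply w <- w + dW."""
    dw = gamma * prev_dw - lr * grad
    return w + dw, dw

# toy usage: two samples, two classes
y = np.array([[1.0, 0.0], [0.0, 1.0]])
y_hat = np.array([[0.8, 0.1], [0.2, 0.7]])
print(sse_loss(y, y_hat))  # 0.0685
```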
To activate the nodes in the NNs, various activation functions are available, such as Sigmoid, ReLU (Rectified Linear Unit), and Tanh (hyperbolic tangent). ReLU is a broadly used activation function in convolutional neural networks (CNNs) and deep learning because it supports faster training [24]. However, it often causes the dying ReLU problem [25], which degrades training because negative values become zero and the corresponding neurons stop learning. Thus, the Sigmoid function, \(\sigma(x)=\frac{1}{1+e^{-x}}\), is used in our system. It transforms the weighted sum of the nodes to represent the probability that a value \(x\) belongs to a certain class. Although the Sigmoid activation function requires more computation than ReLU, it works well for training the NN model in our designed system because the system consists of a single-hidden-layer NN. To train the NNs, a termination condition is defined such that training stops once the cost function \(L\) falls below the threshold \(\epsilon=0.05\). The OpenMP API [26] is used to speed up the computation of the NNs using multi-processors (i.e., multi-core processors).

### Pattern Generation

To generate digit patterns, the user can enable or disable each cell using a computer mouse or a touch screen (if available). As mentioned earlier, the initial cell region has a \(12\times 8\) size that supports creating up to \(2^{12\times 8}\) possible digit patterns. However, the overall number of distinct patterns is smaller because preprocessing maps different inputs to the same pattern when fitting each pattern into the cell region boundary. The applied preprocessing consists of three steps: (1) determining the boundary of each pattern to find an object-bounding box; (2) moving the pattern to the top-left corner; and (3) applying scaling to make it fit the cell region. For scaling, a nearest-neighbor interpolation algorithm [27] is used because it requires very little computation. Since each digit pattern has a binary color attribute (i.e., 0 or 1), each cell is marked if the interpolation satisfies the condition \(I(x)>0.5\). To help the user understand the internal preprocessing steps, intermediate outcomes become available only if a tracking option is enabled in the system. The system supports saving user-generated digit patterns to a file and loading previously generated ones. To validate the effectiveness of the system, 2,400 digit patterns are generated. Figure 2 shows samples of these patterns.

Figure 2: Examples of user-generated digit patterns from the 2,400-pattern dataset. Each pattern is created using the clickable pattern grid in the Neural Network Trainer system.

When loading previously generated digit patterns from a file, the system detects duplicated patterns and removes them if they exist.
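As an illustration of the three preprocessing steps (bounding box, shift to the top-left, nearest-neighbor rescaling), here is a hedged NumPy sketch; the function name and the exact interpolation details are our assumptions, not the system's code:

```python
import numpy as np

def preprocess(grid: np.ndarray) -> np.ndarray:
    """Crop to the pattern's bounding box (which implicitly moves it to the
    top-left) and rescale it to the full 12x8 cell region with
    nearest-neighbor interpolation."""
    rows = np.flatnonzero(grid.any(axis=1))
    cols = np.flatnonzero(grid.any(axis=0))
    if rows.size == 0:                           # empty pattern: nothing to do
        return grid
    crop = grid[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    H, W = grid.shape                            # target size, 12 x 8
    h, w = crop.shape
    # nearest-neighbor source indices back into the cropped pattern
    ri = (np.arange(H) * h / H).astype(int)
    ci = (np.arange(W) * w / W).astype(int)
    scaled = crop[np.ix_(ri, ci)]
    # mark a cell if I(x) > 0.5 (trivially satisfied here since the
    # nearest-neighbor values of a binary pattern are already 0 or 1)
    return (scaled > 0.5).astype(grid.dtype)
```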
A handwritten digits dataset, MNIST [28], is also used to evaluate the system. It includes 70,000 grayscale images of handwritten digits, with 60,000 training images and 10,000 testing images. Each digit sample is centered in a fixed-size image of \(28\times 28\) pixels. To make the images usable in our system, color conversion is applied to map them to a binary color scheme using two tones, black and white. Then, image size conversion is utilized to scale the images down to \(12\times 8\) so they fit into the clickable pattern grid of the designed system. Figure 3 shows examples before and after applying the image conversion.

Figure 3: Conversion of the MNIST handwritten digits from the original images (28 \(\times\) 28 gray color) to two-tone colored images (12 \(\times\) 8 binary color). The converted dataset is named TT-MNIST.

Both the gray color attribute conversion and the image scaling are applied using nearest-neighbor interpolation referencing neighboring color attributes. Each interpolated value \(I(x)\) is thresholded by \(\delta\) (\(0\leq\delta\leq 255\)): the corresponding color attribute is set to 0 when \(I(x)<\delta\) and to 1 when \(I(x)\geq\delta\). We empirically determined an optimal value (\(\delta=85\)) for converting the MNIST digits to two-tone images. For convenience, we call the user-generated 2,400 digit samples and the converted two-tone colored MNIST images DS-2400 and TT-MNIST, respectively, in the rest of this paper. The two datasets are used to conduct a performance evaluation of the designed system. A detailed explanation of the conducted evaluation study is included in the evaluation section.

### Visual Representation

As mentioned above, a visual representation is added to show all digit patterns and to help users understand the differences between digits (see Figure 1(a)-right). Since each digit pattern consists of 96 cells, a dimension reduction technique, PCA, is applied to project it into a PCA projection space (i.e., a 2D display space). By default, the first and second principal components are used to display each digit pattern along the \(x\)- and \(y\)-axes. The system supports basic navigational user interactions (i.e., zooming and panning) so that the user can navigate freely within the 2D display space to see the relationships among the digit patterns (see Figure 4).

Figure 4: Zooming and panning user interaction techniques are supported to navigate the PCA projection space. The user performed the zooming interaction to see the region with the digit “7” patterns (located at the bottom of the space).

This helps users understand the similarities between the digit patterns and the logic of recognizing them through NNs. Digit patterns with similar cell outlines tend to appear nearby within the PCA projection space. As shown at the bottom of the visual representation, all of the digit 7 patterns appear in one region. However, some digit 1 patterns appear near the digit 7 patterns because they share similar markers, including vertical down-strokes with distinctive up-strokes on the top. Since digit 1 patterns do not always include the distinctive up-strokes, they appear in multiple regions. Similarly, patterns of different digits often appear in the same regions. The hidden layer outputs from the trained NNs are therefore used as additional features in the PCA computation to create more separable projections of the digit patterns. Figure 5 shows examples with the DS-2400 and TT-MNIST datasets.

Figure 5: Visual representations of 940 digit patterns from the DS-2400 dataset in (a) and (b) and the TT-MNIST dataset in (c) and (d). (a) and (c) show PCA projections using all digit patterns. (b) and (d) use hidden layer outputs as additional features.

To demonstrate the usefulness of the hidden layer outputs, 940 randomly selected digit patterns from the DS-2400 dataset are used to generate the projection with and without the hidden layer outputs as additional features (see Figures 5(a) and 5(b)). The hidden layer outputs help form separated clusters among different digit patterns. With the TT-MNIST dataset, we observed a clear difference between Figures 5(c) and 5(d). For instance, digit “6” patterns were observed in several locations in the PCA space (see Figure 5(c)), but with the hidden layer outputs integrated, the digit patterns were positioned in the same region (see the arrow in Figure 5(d)).
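The projection described above can be sketched as follows; this assumes scikit-learn's PCA as a stand-in for whatever PCA routine the system actually uses, with the 48 hidden-layer activations optionally appended as extra features:

```python
import numpy as np
from sklearn.decomposition import PCA

def project_patterns(patterns: np.ndarray,
                     hidden_outputs: np.ndarray | None = None,
                     n_components: int = 2) -> np.ndarray:
    """Project flattened 12x8 digit patterns (96 features each) into 2D.
    Appending the trained NN's hidden layer outputs as extra features
    yields better-separated clusters, as shown in Figure 5."""
    X = patterns.reshape(len(patterns), -1).astype(float)
    if hidden_outputs is not None:
        X = np.hstack([X, hidden_outputs])
    return PCA(n_components=n_components).fit_transform(X)

# patterns: (n, 12, 8) binary array; hidden_outputs: (n, 48) activations
```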
## 5 Evaluation of the Interactive Learning System

### Interactive Learning

As discussed earlier, understanding how NNs work is not easy because of the complex nature of continuously computing and updating their underlying structures. The designed system helps users understand how it trains NNs and recognizes digit patterns. The system does not fully unveil how the NN model changes its weights over time; however, we can conjecture that it supports interactive learning on NNs. More specifically, interactive learning comprises three steps: generating digit samples, training an NN model with the samples, and recognizing user-entered digits with the model. Generating digit patterns is essential to understanding the effectiveness of NNs because it helps the user identify how the NNs are trained to recognize digits. However, since it is not easy to create data, most studies have utilized existing datasets (e.g., MNIST, MS-COCO, ImageNet, Fashion-MNIST) to design new NN algorithms and evaluate their performance. To support the user in creating digit patterns interactively, we used cell-based digit pattern generation to design simplified digit samples with a computer mouse. The system allows training NNs whenever the user generates digit pattern(s). Unlike conventional approaches using numerous data samples, our system can train NNs with a small number of digit samples (e.g., \(<10\)). For instance, if a single digit sample (denoting the digit "one") is used to train the NN, every input will be recognized as "one". Instead, if two distinctive digit samples (e.g., "one" and "two") are used to train the NN, the system correctly recognizes their differences. For example, if the user tries to recognize a new input pattern (similar to the "one" or "two" digit patterns), the system correctly recognizes it as either one or two. Figure 6 shows an example of recognizing a new digit pattern with four digit samples. Even though only four digit patterns are used to train the NN, it correctly recognizes the new pattern with a high probability (\(0.93\)). The user can continuously add new digit patterns to improve the digit recognition performance interactively. This interactive digit pattern generation and recognition initiates active learning, helping the user understand the logic behind NNs. To report the probability, the probability distribution over all predicted classes is computed using a softmax function. It converts the vector of \(K\) values in the output layer to probability values. To produce a normalized probability in \([0,1]\) from each output value, the softmax function (\(\sigma\)) applies the exponential function:

\[\sigma(\vec{x})_{i}=\frac{e^{x_{i}}}{\sum_{j=1}^{K}e^{x_{j}}} \tag{3}\]

where \(x_{i}\) denotes the \(i\)-th value in the NN output layer and \(K\) is the number of output classes.
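A small numerical sketch of Eq. 3 (ours; the values are illustrative):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Eq. 3: normalized probabilities over the K output-layer values.
    Subtracting the max is a standard numerical-stability trick."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

print(softmax(np.array([2.5, 0.3, 0.1])))  # highest output -> highest probability
```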
### Performance Evaluation

To support interactive learning, it is vital to maintain real-time training of the NNs while maintaining high accuracy. We performed an evaluation with an Intel i9-9980HK processor (2.4 GHz, 8 cores). Figure 7 shows that the training time with the DS-2400 dataset was about \(8.14\pm 1.89\) seconds; the average training time was maintained at less than 0.5 seconds. At the same time, the training accuracy was above 97%. This indicates that the system supports users in performing real-time interactive analysis of digit recognition by training NNs. With the TT-MNIST dataset, we found that the training accuracy was maintained above 96%. The training time gradually increases as the size of the samples grows; approximately 70 seconds were taken to train on 60,000 digit patterns. Overall, the proposed system takes less than 2 seconds to train up to 10,000 digit patterns (training with MSE \(<0.05\), resulting in a training accuracy of \(0.98\)). Since the system supports real-time training on user-generated digit patterns, we consider that the system effectively helps the user understand how NNs work to recognize the digits.

## 6 Discussion

As mentioned above, the system is helpful for users to understand the logic behind NNs in recognizing digits. To understand how effective the system is in enhancing users' learning of NNs, we utilized the system during a workshop1 for community college students. Most of the students did not have any prior knowledge of or experience with NNs or related applications. They showed high interest in creating digit patterns and training NNs to recognize digits. Three groups were formed, and each spent about 10 minutes creating patterns (see Table 1). To understand the effectiveness of the user-generated digit patterns, we tested the user-generated NNs using both the DS-2400 and TT-MNIST datasets. Although the testing accuracy was not high, we found that the NN trained by Group 2 (using only 40 digit patterns) showed about 0.5 testing accuracy on the DS-2400 dataset.

Footnote 1: NSF funded 2022 Artificial Intelligence Awareness summer workshop: [https://csit.udc.edu/mudl/](https://csit.udc.edu/mudl/)

The students commented that the system was highly interactive and useful for understanding the underlying idea of NNs. They also reported the importance of utilizing both interactive pattern generation and visualization to upgrade their knowledge of NNs. Since the evaluation with workshop participants was purely informal user testing, it is essential to conduct formal user testing to examine the system's effectiveness in enhancing users' understanding levels. Therefore, we plan to extend our study by conducting a formal user evaluation to validate the usefulness of the developed system.

Figure 6: An example of NN training with four digit patterns (a) and recognizing a new digit with the trained NN (b).

Figure 7: Training time (seconds) and accuracy with different sizes of training data for (a)-(b) the DS-2400 dataset and (c)-(d) the TT-MNIST dataset. The \(x\)-axis shows the training digit sample size. (a) and (c) represent training time (seconds); (b) and (d) show training accuracy.

Although the system is useful for advancing users' understanding of NNs through interactive learning, it has the limitation of not showing the connection weights of the NNs. Even though representing the weight changes does not by itself deliver additional information about the internal changes of the NNs, many researchers have emphasized the effectiveness of showing them [29; 15]. Thus, designing an effective visual representation technique to show the connection weight changes in NNs is critical for advancing users' knowledge of what information is effective in recognizing digits with NNs. It is important to note that PCA has an inherent ambiguity in the signs of the resulting principal components.
Thus, it generated multiple sign-flipped visual representations (see Figure 8).

\begin{table} \begin{tabular}{|l|c|c|c|} \hline & Group1 & Group2 & Group3 \\ \hline Generated Patterns & 29 & 40 & 13 \\ \hline Testing Accuracy (with DS-2400) & 0.29 & 0.50 & 0.32 \\ \hline Testing Accuracy (with TT-MNIST) & 0.26 & 0.30 & 0.20 \\ \hline \end{tabular} \end{table} Table 1: Testing the NNs created by workshop participants with the two datasets (i.e., DS-2400 and TT-MNIST).

Figure 8: Visual representations of varying numbers (\(n\)) of digit patterns from the DS-2400 dataset.

## 7 Conclusion and Future Work

In this paper, we designed an interactive learning system to help users understand how neural networks perform digit recognition. The developed interactive learning system allows users to generate digit patterns and train NNs to recognize them in real time. To support real-time training and recognition, we introduced a simplified neural network with backpropagation in the system. Most importantly, we applied a visualization technique to show the differences among the digit patterns in a PCA projection space. In our experiments, we measured the computational speed of training the neural networks to evaluate the system's effectiveness. The key findings from this study are: (1) the developed interactive learning system requires only a short training time, which is critical for users to learn about and understand NNs in real time; (2) the training accuracy was high (e.g., 96\(\sim\)97%), which validates the developed system as a tool to train NNs; and (3) through our informal user testing based on the responses of the community college students who participated in the summer AI workshop, we received highly positive feedback, although the number of participants was fairly small (about ten). For future work, we plan to conduct formal user testing to determine the system's effectiveness in terms of how well users can understand the principles of neural networks. Since the system has been designed as a stand-alone application, a conversion to a web-based application will be performed to make it broadly available and accessible through a web browser. We will also extend the visual representation of digit samples with different dimensionality reduction techniques, such as t-distributed stochastic neighbor embedding (t-SNE) [30] or Uniform Manifold Approximation and Projection (UMAP) [31]. The complete code and a pre-compiled executable are available at [https://github.com/drjeong/DigitPerceptron](https://github.com/drjeong/DigitPerceptron)

## Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. (2107449, 2107450, and 2107451).
2307.07512
Expressive Monotonic Neural Networks
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior. This is especially important for interpretability and fairness considerations. In a broader context, scenarios in which monotonicity is important can be found in finance, medicine, physics, and other disciplines. It is thus desirable to build neural network architectures that implement this inductive bias provably. In this work, we propose a weight-constrained architecture with a single residual connection to achieve exact monotonic dependence in any subset of the inputs. The weight constraint scheme directly controls the Lipschitz constant of the neural network and thus provides the additional benefit of robustness. Compared to currently existing techniques used for monotonicity, our method is simpler in implementation and in theory foundations, has negligible computational overhead, is guaranteed to produce monotonic dependence, and is highly expressive. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider.
Ouail Kitouni, Niklas Nolte, Michael Williams
2023-07-14T17:59:53Z
http://arxiv.org/abs/2307.07512v1
# Expressive Monotonic Neural Networks ###### Abstract The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior. This is especially important for interpretability and fairness considerations. In a broader context, scenarios in which monotonicity is important can be found in finance, medicine, physics, and other disciplines. It is thus desirable to build neural network architectures that implement this inductive bias provably. In this work, we propose a weight-constrained architecture1 with a single residual connection to achieve exact monotonic dependence in any subset of the inputs. The weight constraint scheme directly controls the Lipschitz constant of the neural network and thus provides the additional benefit of robustness. Compared to currently existing techniques used for monotonicity, our method is simpler in implementation and in theory foundations, has negligible computational overhead, is guaranteed to produce monotonic dependence, and is highly expressive. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider. Footnote 1: [https://github.com/niklasnolte/MonotoneNorm](https://github.com/niklasnolte/MonotoneNorm)

## 1 Introduction

The need to model functions that are monotonic in a subset of their inputs is prevalent in many ML applications. Enforcing monotonic behaviour can help improve generalization capabilities (Milani Fard et al., 2016; You et al., 2017) and assist with the interpretation of the decision-making process of the neural network (Nguyen & Martinez, 2019). Real-world scenarios in which monotonicity is important can be found in the natural sciences and in many social applications, often with fairness, interpretability, and security aspects. Monotonic dependence of a model output on a certain input feature can be informative of how an algorithm works--and in some cases is essential for real-world usage. For instance, a good recommender engine will favor the product with a high number of reviews over another with fewer but otherwise identical reviews (_ceteris paribus_). The same applies to systems that assess health risk, evaluate the likelihood of recidivism, rank applicants, filter inappropriate content, _etc_. In addition, robustness to small perturbations in the input is a desirable property for models deployed in real-world applications, in particular when they are used to inform decisions that directly affect human actors, or where the consequences of making an unexpected and unwanted decision could be extremely costly. The continued existence of adversarial methods is a good example of the possibility of malicious attacks on current algorithms (Akhtar et al., 2021). A natural way of ensuring the robustness of a model is to constrain its Lipschitz constant. To this end, we recently developed an architecture whose Lipschitz constant is constrained by design using layer-wise normalization, which allows the architecture to be more expressive than the current state of the art with stable and fast training (Kitouni et al., 2021).
Our algorithm has been adopted to classify the decays of subatomic particles produced at the CERN Large Hadron Collider in the real-time data-processing system of the LHCb experiment, which was our original motivation for developing this novel architecture. In this paper, we present expressive monotonic Lipschitz networks. This new class of architectures employs the Lipschitz-bounded networks from Kitouni et al. (2021) along with residual connections to implement monotonic dependence in any subset of the inputs by construction. It also provides exact robustness guarantees while keeping the constraints minimal, such that it remains a universal approximator of Lipschitz continuous monotonic functions. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to its original target application: the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider.

## 2 Related Work

Prior work in the field of monotonic models can be split into two major categories.

* **Built-in and constrained monotonic architectures**: Examples of this category include Deep Lattice Networks (You et al., 2017) and networks in which all weights are constrained to have the same sign (Sill, 1998). The major drawbacks of most implementations of constrained architectures are a lack of expressiveness or poor performance due to superfluous complexity.

* **Heuristic and regularized architectures (with or without certification)**: Examples of such methods include Sill & Abu-Mostafa (1996) and Gupta et al., which penalize point-wise negative gradients on the training sample. These methods work on arbitrary architectures and retain much expressive power, but offer no guarantees as to the monotonicity of the trained model. Another similar method is Liu et al. (2020), which relies on Mixed Integer Linear Programming to certify the monotonicity of piece-wise linear architectures. The method uses a heuristic regularization to penalize the non-monotonicity of the model on points sampled uniformly in the domain during training. The procedure is repeated with increasing regularization strength until the model passes the certification. This iteration can be expensive, and while this method is more flexible than the constrained architectures (being valid for MLPs with piece-wise linear activations), the computational overhead of the certification process can be prohibitive. Similarly, Sivaraman et al. (2020) propose guaranteed monotonicity for standard ReLU networks by letting a Satisfiability Modulo Theories (SMT) solver find counterexamples to the monotonicity definition and adjusting the prediction during inference such that monotonicity is guaranteed. However, this approach requires queries to the SMT solver at inference time for each monotonic feature, and the computation time scales poorly with the number of monotonic features and the model size (see Figures 3 and 4 in Sivaraman et al. (2020)).

Our architecture falls into the first category. However, we overcome both main drawbacks: the lack of expressiveness and the impractical complexity. Other related work appears in the context of monotonic functions for normalizing flows, where monotonicity is a key ingredient to enforce invertibility (De Cao et al., 2020; Huang et al., 2018; Behrmann et al., 2019; Wehenkel & Louppe, 2019).
## 3 Methods

The goal is to develop a neural network architecture representing a vector-valued function

\[f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n},\quad d,n\in\mathbb{N}, \tag{1}\]

that is provably monotonic in any subset of its inputs. We first define a few ingredients.

**Definition 3.1** (Monotonicity).: Let \(\mathbf{x}\in\mathbb{R}^{d}\), and let \(\mathbf{x}_{\mathbb{S}}\equiv\mathbbm{1}_{\mathbb{S}}\odot\mathbf{x}\) be the Hadamard product of \(\mathbf{x}\) with the indicator vector \(\mathbbm{1}_{\mathbb{S}}\), where \(\mathbbm{1}_{\mathbb{S}}(i)=1\) if \(i\in\mathbb{S}\) and \(0\) otherwise, for a subset \(\mathbb{S}\subseteq\{1,\cdots,d\}\). We say that outputs \(\mathbb{Q}\subseteq\{1,\cdots,n\}\) of \(f\) are monotonically increasing in features \(\mathbb{S}\) if

\[f(\mathbf{x}_{\mathbb{S}}^{\prime}+\mathbf{x}_{\bar{\mathbb{S}}})_{i}\leq f(\mathbf{x}_{\mathbb{S}}+\mathbf{x}_{\bar{\mathbb{S}}})_{i}\quad\forall i\in\mathbb{Q}\text{ and }\forall\mathbf{x}_{\mathbb{S}}^{\prime}\leq\mathbf{x}_{\mathbb{S}}, \tag{2}\]

where \(\bar{\mathbb{S}}\) denotes the complement of \(\mathbb{S}\) and the inequality on the right uses the product (or component-wise) order.

**Definition 3.2** (Lip\({}^{p}\) function).: \(g:\mathbb{R}^{d}\to\mathbb{R}^{n}\) is Lip\({}^{p}\) if it is Lipschitz continuous with respect to the \(L^{p}\) norm in every output dimension, _i.e._,

\[\|g(\mathbf{x})-g(\mathbf{y})\|_{\infty}\leq\lambda\|\mathbf{x}-\mathbf{y}\|_{p}\quad\forall\,\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\,. \tag{3}\]

### Lipschitz Monotonic Networks (LMN)

We will henceforth and without loss of generality only consider scalar-valued functions (\(n=1\)). We start with a model \(g(\mathbf{x})\) that is Lip\({}^{1}\) with Lipschitz constant \(\lambda\). Note that the choice of \(p=1\) is crucial for decoupling the magnitudes of the directional derivatives in the monotonic features. More details on this can be found below and in Figure 1. The \(1\)-norm has the convenient side effect that we can tune the robustness requirement for each input individually. With a model \(g(\mathbf{x})\) we can define an architecture with built-in monotonicity by adding a term that has directional derivative \(\lambda\) for each coordinate in \(\mathbb{S}\):

\[f(\mathbf{x})=g(\mathbf{x})+\lambda(\mathbf{1}_{\mathbb{S}}\cdot\mathbf{x})=g(\mathbf{x})+\lambda\sum_{i\in\mathbb{S}}x_{i}. \tag{4}\]

This residual connection \(\lambda(\mathbf{1}_{\mathbb{S}}\cdot\mathbf{x})\) enforces monotonicity in the input subset \(\mathbf{x}_{\mathbb{S}}\):

\[\frac{\partial g}{\partial x_{i}}\in[-\lambda,\lambda],\;\;\forall i\in\mathbb{N}_{1:d} \tag{5}\]

\[\Rightarrow\frac{\partial f}{\partial x_{i}}=\frac{\partial g}{\partial x_{i}}+\lambda\geq 0\;\;\forall\,\mathbf{x}\in\mathbb{R}^{d},\,i\in\mathbb{S}\,. \tag{6}\]

**The importance of the norm choice.** The construction presented here does not work with \(p\neq 1\) constraints, because dependencies between the partial derivatives may be introduced. The \(p=1\)-norm is the only norm that bounds the gradient within the green square in Figure 1 and, crucially, allows the directional derivatives to be as large as \(2\lambda\) independently. When shifting the constraints by introducing the linear term, the green square allows for all possible gradient configurations, given that we can choose \(\lambda\) freely. As a counterexample, the red circle, corresponding to \(p=2\) constraints, prohibits important areas of the configuration space.

Figure 1: \(p\)-norm constrained gradients showing (red) \(p=2\) and (green) \(p=1\). The gradient of a function \(g(\mathbf{x})\) that is Lip\({}^{p=2}\) resides within the dashed red line. For a Lip\({}^{p=1}\) function, the boundary is the green dashed line. Note that \(\mathbf{x}\) is taken to be a row vector. The residual connection (in blue) effectively shifts the possible gradients to strictly positive values and thus enforces monotonicity. Note how the red solid circle does not include all possible gradient configurations. For instance, it does not allow for very small gradients in both inputs, whereas the green square includes all configurations, up to an element-wise maximum of \(2\lambda\).

To be able to represent all monotonic Lip\({}^{1}\) functions with \(2\lambda\) Lipschitz constant, the construction of \(g(\mathbf{x})\) needs to be a universal approximator of Lip\({}^{1}\) functions. In the next section, we will discuss possible architectures for this task.
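Before turning to those architectures, the construction in Eq. 4 is simple enough to sketch directly. The following is a minimal PyTorch illustration (ours, not the paper's released code; the class name and the assumption that \(g\) returns a single output are ours):

```python
import torch
import torch.nn as nn

class MonotonicWrapper(nn.Module):
    """Eq. 4: f(x) = g(x) + lam * sum_{i in S} x_i.

    If g is Lip^1 with constant lam (w.r.t. the 1-norm), f is monotonically
    increasing in the features indexed by monotone_idx (Eqs. 5-6)."""

    def __init__(self, g: nn.Module, lam: float, monotone_idx):
        super().__init__()
        self.g = g
        self.lam = lam
        self.register_buffer("idx", torch.as_tensor(monotone_idx, dtype=torch.long))

    def forward(self, x):
        # assumes g maps (..., d) -> (..., 1)
        return self.g(x) + self.lam * x[..., self.idx].sum(dim=-1, keepdim=True)
```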
To be able to represent all monotonic Lip\({}^{1}\) functions with Lipschitz constant \(2\lambda\), the construction of \(g(\mathbf{x})\) needs to be a universal approximator of Lip\({}^{1}\) functions. In the next section, we discuss possible architectures for this task.

### Lip\({}^{p=1}\) approximators

Our goal is to construct a universal approximator of Lip\({}^{1}\) functions, _i.e._, we would like the hypothesis class to have two properties:

1. It always satisfies Eq. 3, _i.e._, it is \(\mathrm{Lip}^{1}\).
2. It is able to fit all possible \(\mathrm{Lip}^{1}\) functions. In particular, the bound in Eq. 3 needs to be attainable \(\forall\,\mathbf{x},\mathbf{y}\).

Figure 1: \(p\)-norm constrained gradients showing (red) \(p=2\) and (green) \(p=1\). The gradient of a function \(g(\mathbf{x})\) that is Lip\({}^{p=2}\) resides within the dashed red line. For a Lip\({}^{p=1}\) function, the boundary is the green dashed line. Note that \(\mathbf{x}\) is taken to be a row vector. The residual connection (in blue) effectively shifts the possible gradients to strictly positive values and thus enforces monotonicity. Note how the red solid circle does not include all possible gradient configurations. For instance, it does not allow for very small gradients in both inputs, whereas the green square includes all configurations, up to an element-wise maximum of \(2\lambda\).

**Lip\({}^{1}\) constrained models.** To satisfy the first requirement, fully connected networks can be Lipschitz bounded by constraining the matrix norm of all weight matrices (Kitouni et al., 2021; Gouk et al., 2020; Miyato et al., 2018). We recursively define layer \(l\) of a fully connected network of depth \(D\) with activation \(\sigma\) as

\[\mathbf{z}^{l}=\sigma(\mathbf{z}^{l-1})\mathbf{W}^{l}+\mathbf{b}^{l}, \tag{7}\]

where \(\mathbf{z}^{0}=\mathbf{x}\) is the input and \(g(\mathbf{x})=\mathbf{z}^{D}\) is the output of the network. It follows that \(g(\mathbf{x})\) satisfies Eq. 3 if

\[\prod_{i=1}^{D}\|\mathbf{W}^{i}\|_{1}\leq\lambda \tag{8}\]

and \(\sigma\) has a Lipschitz constant less than or equal to 1. There are multiple ways to enforce Eq. 8. Two existing possibilities that involve scaling by the operator norm of the weight matrix (Gouk et al., 2020) are

\[\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\lambda^{1/D}\,\frac{\mathbf{W}^{i}}{\max(1,\|\mathbf{W}^{i}\|_{1})}\qquad\text{or}\qquad\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\frac{\mathbf{W}^{i}}{\max(1,\lambda^{-1/D}\,\|\mathbf{W}^{i}\|_{1})}\,. \tag{9}\]

In our studies, the latter variant seems to train slightly better. However, in some cases it might be useful to use the former to avoid a scale imbalance between the neural network's output and the residual connection used to induce monotonicity. We note that in order to satisfy Eq. 8, it is not necessary to divide the entire matrix by its 1-norm; it is sufficient to ensure that the absolute sum over each column is constrained:

\[\mathbf{W}^{i}\rightarrow\mathbf{W}^{\prime i}=\mathbf{W}^{i}\,\text{diag}\left(\frac{1}{\max\left(1,\lambda^{-1/D}\sum_{j}|W^{i}_{jk}|\right)}\right)\,. \tag{10}\]

This novel normalization scheme tends to give even better training results in practice because the constraint is applied to each column individually, which reduces correlations between constraints: if a column saturates the bound on its norm, the other columns are not impacted.
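As an illustration, the column-wise constraint of Eq. 10 can be written as a simple weight transformation. The sketch below uses our own function name and assumes the row-vector convention \(\mathbf{z}\mathbf{W}\) of Eq. 7, so that each column of \(\mathbf{W}\) corresponds to one output unit:

```python
import torch

def project_columnwise(W: torch.Tensor, lam: float, depth: int) -> torch.Tensor:
    """Column-wise normalization of Eq. 10 (illustrative naming).

    Each column of W (shape: inputs x outputs, used as z @ W) is rescaled
    only if its absolute sum exceeds lam**(1/depth); columns that already
    satisfy the bound are left untouched.
    """
    col_sums = W.abs().sum(dim=0)                            # sum_j |W_jk| per column k
    scale = torch.clamp(col_sums * lam ** (-1.0 / depth), min=1.0)
    return W / scale                                         # broadcasts over rows
```

Applied to every layer of a depth-\(D\) network, each factor in Eq. 8 is bounded by \(\lambda^{1/D}\), so the product is bounded by \(\lambda\) as required.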
While Eq. 10 may not be suitable as a general-purpose scheme, _e.g._ it would not work in convolutional networks, its performance in our training experiments motivates its use in fully connected architectures and further study of this approach in future work.

In addition, the constraints in Eq. 9 and Eq. 10 can be applied in different ways. For example, one could normalize the weights directly before each call such that the induced gradients are propagated through the network, as in Miyato et al. (2018). While one could construct toy examples for which propagating the gradients in this way hurts training, this approach is what is usually implemented for spectral norm in PyTorch and TensorFlow (Miyato et al., 2018). Alternatively, the constraint can be applied by projecting any infeasible parameter values back into the set of feasible matrices after each gradient update, as in Algorithm 2 of Gouk et al. (2020).

Constraining according to Eq. 8 is not the only way to enforce \(\mathrm{Lip}^{1}\). Anil et al. (2019) provide an alternative normalization scheme:

\[\|\mathbf{W}^{1}\|_{1,\infty}\cdot\prod_{i=2}^{D}\|\mathbf{W}^{i}\|_{\infty}\leq\lambda\,. \tag{11}\]

Just as the 1-norm of a matrix is a column-wise maximum, the \(\infty\)-norm of a matrix is determined by the maximum 1-norm of its rows, and \(\|\mathbf{W}\|_{1,\infty}\) simply equals the maximum absolute value of an element in the matrix. Therefore, normalization schemes similar to Eq. 10 can be employed to enforce the constraints in Eq. 11 by replacing the column-wise normalization with a row- or element-wise normalization where appropriate.

**Preserving expressive power.** Guaranteeing that the model is Lipschitz bounded is not sufficient; it must also be able to saturate the bound in order to model all possible \(\text{Lip}^{1}\) functions. Some Lipschitz network architectures, _e.g._ Miyato et al. (2018), tend to over-constrain the model such that it cannot fit all \(\text{Lip}^{1}\) functions due to _gradient attenuation_. For many problems this is a rather theoretical issue, but it becomes a practical one for the monotonic architecture, which often operates at the edge of its constraints, for instance when partial derivatives close to zero are required, see Figure 1. As a simple example, Huster et al. (2018) showed that ReLU networks are unable to fit the function \(f(x)=|x|\) if the layers are norm-constrained with \(\lambda=1\). The reason lies in the fact that ReLU, and most other commonly used activations, do not have unit gradient with respect to the inputs over their entire domain.

While monotonic element-wise activations like ReLU cannot have unit gradient almost everywhere without being exactly linear, Anil et al. (2019) explore activations that introduce non-linearities by reordering elements of the input vector. They propose **GroupSort** as an alternative to point-wise activations, defined as follows:

\[\sigma_{G}(\mathbf{x})=\text{sort}_{1:G}(\mathbf{x}_{1:G})+\text{sort}_{G+1:2G}(\mathbf{x}_{G+1:2G})+\ldots=\sum_{i=0}^{n/G-1}\text{sort}_{iG+1:(i+1)G}(\mathbf{x}_{iG+1:(i+1)G}), \tag{12}\]

where \(\mathbf{x}\in\mathbb{R}^{n}\), \(\mathbf{x}_{i:j}=\mathbbm{1}_{i:j}\odot\mathbf{x}\), and \(\text{sort}_{i:j}\) orders the elements of a vector from indices \(i\) to \(j\) and leaves the other elements in place. This activation sorts an input vector in chunks (groups) of a fixed size \(G\).
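A minimal PyTorch sketch of Eq. 12 (our own naming; it assumes the feature dimension is divisible by the group size):

```python
import torch

def group_sort(x: torch.Tensor, group_size: int) -> torch.Tensor:
    """GroupSort (Eq. 12): sort the features of x within consecutive groups.

    Sorting only permutes entries, so the Jacobian is a permutation matrix
    and the activation preserves the gradient norm.
    """
    batch, n = x.shape
    assert n % group_size == 0, "feature dim must be divisible by group size"
    grouped = x.view(batch, n // group_size, group_size)
    return grouped.sort(dim=-1).values.view(batch, n)

# e.g. group_sort(torch.tensor([[3., 1., 2., 0.]]), 2) -> [[1., 3., 0., 2.]]
```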
The GroupSort operation has a gradient of unity with respect to every input, giving architectures constrained with Eq. 8 greatly increased expressive power. In fact, Anil et al. (2019) prove that GroupSort networks with the normalization scheme in Eq. 11 are universal approximators of \(\text{Lip}^{1}\) functions. These networks therefore fulfill the two requirements outlined at the beginning of this section. For universal approximation to be possible, the activation function needs to be gradient norm preserving (GNP), _i.e._, have gradient 1 almost everywhere. Householder activations are another instance of GNP activations, of which GroupSort-2 is a special case (Singla et al., 2021). The Householder activation is defined as

\[\sigma(\mathbf{z})=\begin{cases}\mathbf{z}&\mathbf{z}\mathbf{v}>0\\ \mathbf{z}\left(\mathbf{I}-2\mathbf{v}\mathbf{v}^{T}\right)&\mathbf{z}\mathbf{v}\leq 0\,,\end{cases} \tag{13}\]

where \(\mathbf{z}\) is the preactivation row vector, \(\mathbf{v}\) is any column unit vector, and \(\mathbf{I}\) is the identity matrix. Householder Lipschitz networks naturally inherit the universal approximation property.

In summary, we have constructed via Eq. 4 a neural network architecture \(f(\mathbf{x})\) that can provably approximate all monotonic Lipschitz bounded functions. The Lipschitz constant of the model can be increased arbitrarily by controlling the parameter \(\lambda\) in our construction.
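Putting the pieces together, the sketch below combines the column-wise normalization of Eq. 10, GroupSort, and the residual connection of Eq. 4 into one module. It reuses the illustrative `project_columnwise` and `group_sort` helpers from the earlier snippets and is meant as an illustration of the construction, not the reference implementation linked in Section 7.

```python
import torch
import torch.nn as nn

class LipschitzMonotonicNet(nn.Module):
    """End-to-end sketch of Eq. 4: a weight-normalized GroupSort MLP g
    with Lipschitz constant lam, plus the monotonic residual connection.
    Hidden widths must be divisible by `group_size`."""

    def __init__(self, dims, lam: float, monotone_idx, group_size: int = 2):
        super().__init__()
        self.weights = nn.ParameterList(
            nn.Parameter(torch.randn(a, b) * a ** -0.5)
            for a, b in zip(dims[:-1], dims[1:]))
        self.biases = nn.ParameterList(
            nn.Parameter(torch.zeros(b)) for b in dims[1:])
        self.lam, self.monotone_idx, self.group_size = lam, monotone_idx, group_size

    def forward(self, x):
        z, depth = x, len(self.weights)
        for i, (W, b) in enumerate(zip(self.weights, self.biases)):
            # normalize before each call so gradients flow through (Eq. 10)
            z = z @ project_columnwise(W, self.lam, depth) + b
            if i < depth - 1:                  # GNP activation on hidden layers only
                z = group_sort(z, self.group_size)
        # residual term of Eq. 4 guarantees monotonicity in monotone_idx
        return z + self.lam * x[:, self.monotone_idx].sum(dim=1, keepdim=True)

# e.g. net = LipschitzMonotonicNet([4, 16, 16, 1], lam=2.0, monotone_idx=[0, 1])
```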
## 4 Experiments

"Beware of bugs in the above code; I have only proved it correct, not tried it" (Knuth). In the spirit of Donald Knuth, in this section we test our algorithm across many different domains to show that it works well in practice and gives competitive results, as should be expected from a universal approximator.

### Toy Example

Figure 2 shows a toy example in which both a monotonic and an unconstrained network are trained to regress on a noisy one-dimensional dataset. The true underlying model used here is monotonic, though an added heteroskedastic Gaussian noise term can obscure this in any realization. As can be seen in Figure 2, no matter how the data are distributed at the edge of the support, the monotonic Lipschitz network is always non-decreasing outside of the support, as guaranteed by our architecture. Such out-of-distribution guarantees can be extremely valuable in cases where domain knowledge dictates that monotonic behavior is required or desirable.

### Real-Time Decision-Making at 40 MHz at the LHC

Because many physical systems are modeled with well-known theoretical frameworks that dictate the properties of the system, monotonicity can be a crucial inductive bias in the physical sciences. For instance, modeling enthalpy, a thermodynamic quantity measuring the total heat content of a system, in a simulator requires a monotonic function of temperature at fixed pressure, as is known from basic physical principles. In this section, we describe a real-world physics application that requires monotonicity in certain features and robustness in all of them. The algorithm described here has, in fact, been implemented by a high-energy particle physics experiment at the European Center for Nuclear Research (CERN) and is actively being used to collect data at the Large Hadron Collider (LHC) in 2022, where high-energy proton-proton collisions occur at 40 MHz. The sensor arrays of the LHC experiments produce data at a rate of over 100 TB/s. Drastic data reduction is performed by custom-built read-out electronics; however, the annual data volumes are still \(\mathcal{O}(100)\) exabytes, which cannot be put into permanent storage.

Therefore, each LHC experiment processes its data in real time, deciding which proton-proton collision events should be kept and which should be discarded permanently; this is referred to as _triggering_ in particle physics. To be suitable for use in trigger systems, classification algorithms must be robust against the impact of experimental instabilities that occur during data taking, and against deficiencies in simulated training samples. Our training samples cannot possibly account for the unknown new physics that we hope to discover by performing the experiments!

A ubiquitous inductive bias at the LHC is that outlier collision events are more interesting, since we are looking for physics that has never been observed before. However, uninteresting outliers are frequently caused by experimental imperfections, many of which are included and labeled as background in training. Conversely, it is not possible to include the set of all possible interesting outliers _a priori_ in the training. A solution to this problem is to implement _outliers are better_ directly using our expressive monotonic Lipschitz architecture from Section 3.

Our architecture was originally developed for the task of classifying the decays of heavy-flavor particles produced at the LHC. These are bound states containing a beauty or charm quark that travel an observable distance, \(\mathcal{O}(1\,\mathrm{cm})\), before decaying due to their (relatively) long lifetimes. This example uses a dataset of simulated proton-proton (\(pp\)) collisions in the LHCb detector. Charged particles recorded by LHCb are combined pairwise into decay-vertex (DV) candidates. The task concerns discriminating DV candidates corresponding to heavy-flavor decays from all other sources. Heavy-flavor DVs typically have substantial separation from the \(pp\) collision point, due to the relatively long heavy-flavor particle lifetimes, and large transverse momenta, \(p_{\mathrm{T}}\), of the component particles, due to the large heavy-flavor particle masses. The main sources of background DVs, described in Kitouni et al. (2021), mostly have small displacement and small \(p_{\mathrm{T}}\), though unfortunately they can also have extremely large values of both displacement and momentum.

Figure 2: Our monotonic architecture (green) and an unconstrained network (red) trained on two realizations (purple data points) of a one-dimensional dataset. The shaded regions are where training data were absent. Each model is trained using 10 random initialization seeds. The dark lines are averages over the seeds, which are each shown as light lines. The unconstrained models exhibit overfitting of the noise, non-monotonic behavior, and highly undesirable and unpredictable results when extrapolating beyond the region occupied by the training data. Conversely, the monotonic Lipschitz models are always monotonic, even in scenarios where the noise is strongly suggestive of non-monotonic behavior. In addition, the Lipschitz constraint produces much smoother models.

Figure 3 shows a simplified version of this problem using only the two most powerful inputs. Our inductive bias requires a monotonically increasing response in both features (a detailed discussion motivating this bias can be found in Kitouni et al. (2021)). We see that an unconstrained neural network rejects DVs with increasingly large displacements (lower right corner), which leads to a decrease in the signal efficiency (true positive rate) for large lifetimes. The unconstrained model violates our inductive bias.
Figures 3 and 4 show that a monotonic BDT (Auguste et al., 2020) approach also works here; however, the jagged decision boundary can cause problems in subsequent analysis of the data. Figure 3 further shows that our novel approach from Section 3 successfully produces a smooth and monotonic response, and Figure 4 shows that this provides the monotonic lifetime dependence we desire in the efficiency. In addition, we note that the added benefit of guaranteed Lipschitz robustness is a major advantage for many real-world applications. For particle physicists specifically, this kind of robustness directly translates into important guarantees in the presence of experimental instabilities. Due to the simplicity and practicality of our method, the LHCb experiment is now using the proposed architecture for real-time data selection at a data rate of about \(40\,\)Tbit/s.

Figure 3: From Kitouni et al. (2021): Simplified version of the heavy-quark selection problem using only two inputs, which permits displaying the response everywhere in the feature space; shown here as a heat map with more signal-like (background-like) regions colored blue (red). The dark solid line shows the decision boundary (upper right regions are selected). Shown are (left) a standard fully connected neural network, (middle) a monotonic BDT, and (right) our architecture. The quantities shown on the horizontal and vertical axes are related to how long the particle lived before decaying and how massive the particle was, respectively.

Figure 4: From Kitouni et al. (2021): True positive rate (efficiency) of each model shown in Figure 3 versus the proper lifetime of the decaying heavy-quark particle. The monotonic models produce a nearly uniform efficiency above a few picoseconds at the expense of a few percent in lifetime-integrated efficiency. Such a trade-off is desirable, as explained in the text.

### Public datasets with monotonic dependence

In this section, we follow the experiments in Liu et al. (2020), and some of those in Sivaraman et al. (2020), as closely as possible in order to compare directly with state-of-the-art monotonic architectures. Liu et al. (2020) studied monotonic architectures on four different datasets: COMPAS (Larson and Kirchner, 2016), BlogFeedback (Buza, 2014), LoanDefaulter (Kaggle, 2015), and ChestXRay (Wang et al., 2017). From Sivaraman et al. (2020) we compare against one regression and one classification task: AutoMPG (Dua and Graff, 2017) and HeartDisease (Gennari et al., 1989). Results are shown in Table 1. We also present results on an augmented version of CIFAR100 (_i.e._, with an additional monotonic feature added artificially) in Appendix A.

## 5 Limitations

We are working on improving the architecture as follows. First, common initialization techniques are not optimal for weight-normed networks (Arpit et al., 2019); simple modifications to the weight initialization might aid convergence, especially for large Lipschitz parameters. Second, we are currently constrained to activation functions that have a gradient norm of \(1\) over their entire domain, such as **GroupSort**, to ensure universal approximation, see Anil et al. (2019); we will explore other options in the future. Lastly, there is not yet a proof of universal approximation for the architecture described in Eq. 8. However, it appears from empirical investigation that these networks do approximate universally, as we have yet to find a function that could not be approximated well enough with a deep enough network.
We do not consider this a major drawback, as the construction in Eq. 11 does approximate universally, see Anil et al. (2019). Note that none of these limitations have any visible impact on the performance of the experiments in Section 4.

## 6 Conclusion and Future Work

We presented an architecture that provably approximates Lipschitz continuous and partially monotonic functions. Monotonic dependence is enforced via an end-to-end residual connection to a minimally Lip\({}^{1}\)-constrained fully connected neural network. This method is simple to implement, has negligible computational overhead, and gives stronger guarantees than regularized models. Our architecture achieves results competitive with current state-of-the-art monotonic architectures, even when using a tiny number of parameters, and has the additional benefit of guaranteed robustness due to its known Lipschitz constant. For future directions of this line of research, we plan to tackle the problems outlined in the limitations section, especially improving the initialization of weight-normed networks.

Table 1: We compare our method (in bold) against state-of-the-art monotonic models (only the best is shown) on a variety of benchmarks. The performance numbers for other techniques were taken from Liu et al. (2020) and Sivaraman et al. (2020). In the ChestXRay experiment, we train one model with frozen ResNet18 weights (LMN) and another with end-to-end training (LMN E-E). While our models can generally be quite small, we achieve even smaller models when using only a subset of the features; these are denoted "mini".

**COMPAS**

| Method | Parameters | ↑ Test Acc |
| --- | --- | --- |
| Certified | 23112 | (68.8 ± 0.2)% |
| **LMN** | **37** | **(69.3 ± 0.1)%** |

**BlogFeedback**

| Method | Parameters | ↓ RMSE |
| --- | --- | --- |
| Certified | 8492 | 0.158 ± 0.001 |
| **LMN** | **2225** | **0.160 ± 0.001** |
| **LMN mini** | **177** | **0.155 ± 0.001** |

**LoanDefaulter**

| Method | Parameters | ↑ Test Acc |
| --- | --- | --- |
| Certified | 8502 | (65.2 ± 0.1)% |
| **LMN** | **753** | **(65.44 ± 0.03)%** |
| **LMN mini** | **69** | **(65.28 ± 0.01)%** |

**ChestXRay**

| Method | Parameters | ↑ Test Acc |
| --- | --- | --- |
| Certified | 12792 | (62.3 ± 0.2)% |
| Certified E-E | 12792 | (66.3 ± 1.0)% |
| **LMN** | **1043** | **(67.6 ± 0.6)%** |
| **LMN E-E** | **1043** | **(70.0 ± 1.4)%** |

**HeartDisease**

| Method | ↑ Test Acc |
| --- | --- |
| COMET | (86 ± 3)% |
| **LMN** | **(89.6 ± 1.9)%** |

**AutoMPG**

| Method | ↓ MSE |
| --- | --- |
| COMET | 8.81 ± 1.81 |
| **LMN** | **7.58 ± 1.2** |

## 7 Reproducibility Statement

All experiments with public datasets are reproducible with the code provided at [https://github.com/niklasnolte/monotonic_tests](https://github.com/niklasnolte/monotonic_tests).
This code uses the package available at [https://github.com/niklasnolte/MonotoneNorm](https://github.com/niklasnolte/MonotoneNorm), which is meant to be a standalone PyTorch implementation of Lipschitz Monotonic Networks. The experiments in Section 4.2 were performed with data that is not publicly available. The code to reproduce those experiments can be found at [https://github.com/niklasnolte/HLT_2Track](https://github.com/niklasnolte/HLT_2Track), and the data will be made available in later years at the discretion of the LHCb collaboration.

#### Acknowledgments

This work was supported by NSF grant PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaifi.org/](http://iaifi.org/)).